
Ray Tracer Part Six – Depth Of Field


Depth Of Field


Adding depth of field capability to your ray tracer is well worth the effort, and you'll be pleased to know that it is not too difficult either. You will, however, need to wait significantly longer for your images to render, as every pixel now needs to send multiple rays in order to reduce noise.

The illustration below gives a basic idea of how we can achieve this effect. For simplicity, let's assume two dimensions and only three pixels to render. The standard camera casts one ray (yellow, green, cyan) for each pixel, from the eye point through the associated view plane point. With the DOF camera, multiple rays are cast for each pixel. This time, however, the view plane is situated at the desired focal distance, and each ray's origin is randomised around the eye point, to an extent dependent on the radius of the aperture.

DOF Camera

Implementing this in code is fairly easy. The approach I will describe requires recalculating the same parameters as for the standard camera, but this time the view plane is located at the focal distance. Whenever the focal distance changes, these parameters need to be recalculated.

The changes that need to be made to the standard camera code are as follows:

Calculate the focal distance, assuming the focus should be set to the lookat point.

  focalDistance = Length(lookAtPoint - eyePoint)

If the focal distance is instead set at some other value, calculate a new point at the centre of the new view plane.

Calculate the new half width of the view plane.

  halfWidth = focalDistance * tan(fov/2)

Now the bottomLeft and increment vectors will be calculated correctly using the same code as for the standard camera.

In order to randomise the eye point later, two vectors will also be precalculated, such that we can add them to the eye point.

  xApertureRadius = u.normalise() * apertureRadius
  yApertureRadius = v.normalise() * apertureRadius
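Putting the recalculation steps above together, a minimal C++ sketch might look like the following. The Vec3 class, the u/v camera basis vectors, and the function names are illustrative stand-ins for whatever your own ray tracer already uses, and fov is assumed to be the full field of view in radians.

```cpp
#include <cassert>
#include <cmath>

// Minimal 3D vector - a stand-in for your ray tracer's own vector class.
struct Vec3 {
    double x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
    double length() const { return std::sqrt(x * x + y * y + z * z); }
    Vec3 normalise() const { double l = length(); return {x / l, y / l, z / l}; }
};

// Parameters that must be recalculated whenever the focal distance changes.
struct DOFCameraParams {
    double focalDistance;
    double halfWidth;
    Vec3 xApertureRadius;
    Vec3 yApertureRadius;
};

DOFCameraParams recalcDOFParams(const Vec3& eyePoint, const Vec3& lookAtPoint,
                                const Vec3& u, const Vec3& v,
                                double fov, double apertureRadius) {
    DOFCameraParams p;
    p.focalDistance = (lookAtPoint - eyePoint).length();  // focus at the look-at point
    p.halfWidth = p.focalDistance * std::tan(fov / 2.0);  // view plane at focal distance
    p.xApertureRadius = u.normalise() * apertureRadius;   // offsets added to the eye point
    p.yApertureRadius = v.normalise() * apertureRadius;
    return p;
}
```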

Now we need to change the code in the getRay method.

Random rays work well, so we can generate two random numbers: one for the x variation around the eye point, and one for the y. I don't believe the extra computation to ensure the new eye points lie inside a circular aperture makes much difference, but you can decide for yourself.

  R1 = random value between -1 and 1
  R2 = random value between -1 and 1
  viewPlanePoint = bottomLeft + x*xInc + y*yInc
  newRandomisedEyePoint = eyePoint + R1*xApertureRadius + R2*yApertureRadius
  ray = (viewPlanePoint - newRandomisedEyePoint).normalise()
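As a concrete sketch of this modified getRay step (again, Vec3 and the member names are illustrative, and the precalculated bottomLeft, increment, and aperture vectors are assumed to exist), the method writes the randomised ray origin and direction into its reference parameters:

```cpp
#include <cassert>
#include <cmath>
#include <random>

// Minimal 3D vector - a stand-in for your ray tracer's own vector class.
struct Vec3 {
    double x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
    Vec3 normalise() const {
        double l = std::sqrt(x * x + y * y + z * z);
        return {x / l, y / l, z / l};
    }
};

struct DOFCamera {
    Vec3 eyePoint, bottomLeft, xInc, yInc, xApertureRadius, yApertureRadius;
    std::mt19937 rng{42};
    std::uniform_real_distribution<double> dist{-1.0, 1.0};

    // Writes a randomised origin and the normalised direction for pixel (x, y).
    void getRay(int x, int y, Vec3& rayDir, Vec3& rayOrigin) {
        Vec3 viewPlanePoint = bottomLeft + xInc * x + yInc * y;
        double r1 = dist(rng);
        double r2 = dist(rng);
        rayOrigin = eyePoint + xApertureRadius * r1 + yApertureRadius * r2;
        rayDir = (viewPlanePoint - rayOrigin).normalise();
    }
};
```

Note that with a zero aperture radius this degenerates to the standard pinhole camera, which is a handy sanity check.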

Now everything required to generate an image with depth of field is complete. Instead of firing just a single ray through each pixel, we now do something like the pseudo-code below.

  Color tempColor(0.0f, 0.0f, 0.0f);
  for(int i = 0; i < numDOFRaysPerPixel; i++){
      camera.getRay(x, y, castRay, eyePoint);  // castRay and eyePoint are references set by getRay
      tempColor.setBlack();
      traceRay(eyePoint, castRay, tempColor, ...);  // tempColor is set to the resulting color for this ray
      displayBuffer.add(x, y, tempColor);  // accumulate all the DOF rays into the same screen pixel
  }
  displayBuffer.divide(x, y, numDOFRaysPerPixel);  // divide the pixel by the number of DOF rays cast

This casts the same number of DOF rays for every pixel on the screen, which is not very efficient. Consider two situations: first, an object situated on the focal plane (in the path of our ray), and second, an object situated far behind the focal plane. In the first case, all our depth of field rays converge to the same point, so casting a large number of rays is wasteful. In the second case, many more rays are required, as each ray 'picks up' colours from points which are very far apart, possibly intersecting different objects (refer to the figure above). So instead of casting a fixed number of DOF rays per pixel, an improvement can be made by continuing to cast rays until a certain error condition is met: for example, once the effect of additional DOF rays on the final colour of a pixel falls below a defined limit.
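One possible way to sketch such an error condition: keep a running average of the traced colours and stop once an extra ray barely moves it. The names, the tolerance, and the min/max ray counts below are all illustrative choices, not part of the renderer described above.

```cpp
#include <cassert>
#include <cmath>

struct Color { double r, g, b; };

// Adaptive DOF sampling sketch: average rays until the running mean changes
// by less than `tolerance`, casting at least minRays and at most maxRays.
// traceRay is any callable returning the colour of one randomised DOF ray.
template <typename TraceFn>
Color adaptiveDOFSample(TraceFn traceRay, double tolerance, int minRays, int maxRays) {
    Color mean{0, 0, 0};
    for (int i = 1; i <= maxRays; ++i) {
        Color c = traceRay();              // one randomised DOF ray
        Color prev = mean;
        mean.r += (c.r - mean.r) / i;      // incremental running mean
        mean.g += (c.g - mean.g) / i;
        mean.b += (c.b - mean.b) / i;
        double delta = std::fabs(mean.r - prev.r)
                     + std::fabs(mean.g - prev.g)
                     + std::fabs(mean.b - prev.b);
        if (i >= minRays && delta < tolerance) break;  // extra rays no longer matter
    }
    return mean;
}
```

With this in place, a pixel sitting on the focal plane (every ray returns the same colour) terminates at minRays, while a blurry out-of-focus pixel keeps sampling up to maxRays.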




Contact

Email: sjh148@uclive.ac.nz