Four features I added to my ray tracer

In this post, I discuss four features I implemented in my ray tracer that I believe everyone should try to incorporate into their own.

Introduction

I've recently built a ray tracer and incorporated several interesting features along the way. I hope they inspire you to challenge yourself by implementing them in your own ray tracer.

Features

Bounding Volume Hierarchy

The first feature I consider quintessential in almost any ray tracer is an acceleration data structure. Specifically, I implemented a Bounding Volume Hierarchy (BVH) in my ray tracer, enabling it to render complex scenes with various meshes in seconds rather than minutes.

There are two ways to construct a BVH: top-down and bottom-up. For my ray tracer, I chose to implement a bottom-up, nearest-neighbor BVH:

  1. Initially, I generated a list of axis-aligned bounding boxes around each triangle of a mesh.
  2. Next, I selected a single bounding box from the list and searched for its nearest neighbor, comparing distances between bounding boxes using their centroids (their center coordinates). The closest bounding box was then removed from the list, and the two boxes were merged into a single bounding box.
  3. I added the merged bounding box back to the list and repeated the process until only a single bounding box remained, encompassing the entire mesh. This final box served as the root of the hierarchy.
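
The construction loop above can be sketched in C++. Everything here is illustrative rather than my actual implementation: `AABB`, `centroidDist2`, and `buildRoot` are hypothetical names, and for clarity the sketch only tracks bounding boxes, not the child pointers a real BVH node would also store.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Axis-aligned bounding box with min/max corners.
struct AABB {
    double min[3], max[3];
};

// Squared distance between two box centroids (center coordinates).
static double centroidDist2(const AABB& a, const AABB& b) {
    double d2 = 0.0;
    for (int i = 0; i < 3; ++i) {
        double ca = 0.5 * (a.min[i] + a.max[i]);
        double cb = 0.5 * (b.min[i] + b.max[i]);
        d2 += (ca - cb) * (ca - cb);
    }
    return d2;
}

// Union of two boxes: the smallest box enclosing both.
static AABB merge(const AABB& a, const AABB& b) {
    AABB out;
    for (int i = 0; i < 3; ++i) {
        out.min[i] = std::min(a.min[i], b.min[i]);
        out.max[i] = std::max(a.max[i], b.max[i]);
    }
    return out;
}

// Bottom-up construction: repeatedly merge a box with its nearest
// neighbor (by centroid distance) until one root box remains.
// Assumes `boxes` is non-empty.
AABB buildRoot(std::vector<AABB> boxes) {
    while (boxes.size() > 1) {
        std::size_t nearest = 1;
        double best = centroidDist2(boxes[0], boxes[1]);
        for (std::size_t i = 2; i < boxes.size(); ++i) {
            double d = centroidDist2(boxes[0], boxes[i]);
            if (d < best) { best = d; nearest = i; }
        }
        AABB merged = merge(boxes[0], boxes[nearest]);
        boxes.erase(boxes.begin() + nearest);
        boxes[0] = merged;  // merged box goes back into the list
    }
    return boxes[0];
}
```

Note that this greedy nearest-neighbor search is quadratic per merge; production BVH builders usually accelerate the neighbor search or build top-down with a surface-area heuristic instead.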

By following this approach, my render times dropped from roughly 60 minutes for a single mesh to around 200 milliseconds for scenes containing multiple meshes.


Figure 1. A scene rendered using a bounding volume hierarchy in just 212 milliseconds.

Transformations

Arguably, this next feature is as important as the first: transformations. I incorporated three fundamental transformations in my ray tracer: scaling, translation, and rotation.

Implementing these transformations isn't hard! What I did was combine all the scaling, translation, and rotation operations into a single transformation matrix, and I used the GLM library to handle the matrix calculations for me. By multiplying the vertices of my meshes by this transformation matrix, I was able to move, scale, and rotate everything in my scenes.
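
In my ray tracer GLM builds these matrices, but the idea fits in a few dozen lines. The following library-free sketch is illustrative only: `Mat4`, `modelMatrix`, and the row-major convention are my assumptions here, not GLM's API. The key point is the composition order: the combined matrix applies scale first, then rotation, then translation.

```cpp
#include <array>
#include <cmath>

using Mat4 = std::array<std::array<double, 4>, 4>;
using Vec4 = std::array<double, 4>;  // homogeneous point (x, y, z, w)

// 4x4 identity matrix.
Mat4 identity() {
    Mat4 m{};
    for (int i = 0; i < 4; ++i) m[i][i] = 1.0;
    return m;
}

// Row-major matrix product a * b.
Mat4 mul(const Mat4& a, const Mat4& b) {
    Mat4 out{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                out[i][j] += a[i][k] * b[k][j];
    return out;
}

// Transform a homogeneous point by a matrix.
Vec4 apply(const Mat4& m, const Vec4& v) {
    Vec4 out{};
    for (int i = 0; i < 4; ++i)
        for (int k = 0; k < 4; ++k)
            out[i] += m[i][k] * v[k];
    return out;
}

Mat4 scale(double sx, double sy, double sz) {
    Mat4 m = identity();
    m[0][0] = sx; m[1][1] = sy; m[2][2] = sz;
    return m;
}

Mat4 translate(double tx, double ty, double tz) {
    Mat4 m = identity();
    m[0][3] = tx; m[1][3] = ty; m[2][3] = tz;
    return m;
}

// Rotation about the z-axis by `angle` radians.
Mat4 rotateZ(double angle) {
    Mat4 m = identity();
    m[0][0] = std::cos(angle); m[0][1] = -std::sin(angle);
    m[1][0] = std::sin(angle); m[1][1] =  std::cos(angle);
    return m;
}

// Combine into one model matrix: M = T * R * S, so vertices are
// scaled first, then rotated, then translated.
Mat4 modelMatrix(const Mat4& t, const Mat4& r, const Mat4& s) {
    return mul(t, mul(r, s));
}
```

With GLM the equivalent matrices come from `glm::scale`, `glm::rotate`, and `glm::translate` in `<glm/gtc/matrix_transform.hpp>`, multiplied in the same T * R * S order.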


Figure 2. An animated scene showing off some transformations.

Anti-aliasing

The next feature I want to mention is anti-aliasing. While it may initially seem unremarkable, it is actually quite fascinating. Anti-aliasing falls under the category of ray tracing techniques known as distributed ray tracing. In distributed ray tracing, instead of casting a single ray into the scene, we cast multiple sub-pixel rays (either randomly or by dividing each pixel into evenly sized portions). The values obtained from these rays are then averaged to achieve the final result.

To implement anti-aliasing in my ray tracer, I subdivided each pixel into uniformly distributed segments. I divided the pixel both vertically and horizontally and calculated the illumination value at each segment. By collecting a sufficient number of samples, I was able to average the illumination values for a single pixel, resulting in an anti-aliasing effect.
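A minimal sketch of that uniform sub-pixel sampling might look like this. The names are hypothetical: `antialiasedPixel` averages the samples for one pixel, and `shade` stands in for whatever routine traces a ray through a continuous image-plane coordinate. A real tracer would return an RGB color rather than a single illumination value.

```cpp
#include <functional>

// Average n*n uniformly distributed sub-pixel samples for pixel (px, py).
// `shade` is a hypothetical callback returning the illumination for a
// continuous image-plane coordinate.
double antialiasedPixel(int px, int py, int n,
                        const std::function<double(double, double)>& shade) {
    double sum = 0.0;
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < n; ++j) {
            // Sample the center of each evenly sized segment.
            double u = px + (i + 0.5) / n;
            double v = py + (j + 0.5) / n;
            sum += shade(u, v);
        }
    }
    return sum / (n * n);  // average of all sub-pixel samples
}
```

On a pixel straddling a black/white edge, the average correctly lands at an intermediate gray, which is exactly the smoothing effect described above.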

This effect is useful because it reduces visual artifacts such as jagged edges that appear when a scene is rendered at a finite resolution, producing a smoother and more visually pleasing output.


Figure 3. An example scene comparing anti-aliasing enabled (left) and disabled (right).

Area Lighting

Finally, my favorite feature is area lighting. Like the anti-aliasing technique above, it relies on distributed ray tracing to achieve its effect. As the name implies, the light source is not a single point but has a physical area, often a rectangle or some other shape. The effect is achieved by casting multiple shadow rays from a surface towards points spread across the light's area. Once enough samples have been collected, their average is used to illuminate the surface.

In my ray tracer, I implemented area lighting with a rectangular light divided into evenly sized segments. From each shaded surface point, I cast a shadow ray towards each segment. After collecting the samples, I averaged the illumination values to shade the surface.
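
That sampling loop can be sketched as follows. The names (`AreaLight`, `lightVisibility`, the `occluded` callback) are hypothetical, and the sketch only estimates the visible fraction of the light; a full shader would also weight the result by distance and surface orientation.

```cpp
#include <functional>

struct Vec3 { double x, y, z; };

// Rectangular area light: one corner plus two edge vectors spanning it.
struct AreaLight {
    Vec3 corner, edgeU, edgeV;
};

// Fraction of the light visible from `point`, estimated by casting one
// shadow ray to the center of each of n*n evenly sized segments.
// `occluded` is a hypothetical callback that traces a shadow ray from
// the first point to the second and reports whether anything blocks it.
double lightVisibility(const Vec3& point, const AreaLight& light, int n,
                       const std::function<bool(const Vec3&, const Vec3&)>& occluded) {
    int visible = 0;
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < n; ++j) {
            double s = (i + 0.5) / n;
            double t = (j + 0.5) / n;
            // Center of segment (i, j) on the light's surface.
            Vec3 sample{
                light.corner.x + s * light.edgeU.x + t * light.edgeV.x,
                light.corner.y + s * light.edgeU.y + t * light.edgeV.y,
                light.corner.z + s * light.edgeU.z + t * light.edgeV.z};
            if (!occluded(point, sample)) ++visible;
        }
    }
    return static_cast<double>(visible) / (n * n);
}
```

Points that can see only part of the light get a fractional value, which is precisely what produces the soft penumbra between fully lit and fully shadowed regions.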

This effect adds a realistic touch to the scene by simulating the soft shadows and variations in lighting that occur in the real world.

Figure 4. An example scene comparing area lighting (left) and point lighting (right).

Conclusion

In conclusion, I hope you have found this post useful, and I hope it has inspired you to implement some of the features mentioned here. To assist you in your journey, I have provided a list of additional resources that may be helpful:

I wish you the best of luck in incorporating these features into your own ray tracer. Happy coding!