KD-trees for efficient ray intersections
The ray tracer implemented in A4 was extremely slow when rendering meshes. This is the trace for a scene with a macho cow.
A KD-tree data structure was used to organize the triangles in each mesh. This allowed the
Mesh::intersect function to skip a significant portion of the triangles in the mesh and only test for intersections with triangles that have a chance of being hit.
This reduced the rendering time from 43s to 6.5s.
This scene took about an hour to render using my A4 ray tracer; with this KD-tree implementation and multi-threading, it now takes 8 seconds.
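The core idea can be sketched as follows. This is a minimal illustrative version (centroid-median splits, Möller–Trumbore triangle tests), not the actual A5 code, and all names here are hypothetical:

```cpp
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

struct Vec3 { double x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static double axisOf(Vec3 v, int k) { return k == 0 ? v.x : k == 1 ? v.y : v.z; }

struct Tri {
    Vec3 a, b, c;
    Vec3 centroid() const {
        return {(a.x + b.x + c.x) / 3, (a.y + b.y + c.y) / 3, (a.z + b.z + c.z) / 3};
    }
};

// Möller–Trumbore ray/triangle test; returns the hit distance or -1 on a miss.
static double hitTri(Vec3 o, Vec3 d, const Tri& t) {
    Vec3 e1 = sub(t.b, t.a), e2 = sub(t.c, t.a), p = cross(d, e2);
    double det = dot(e1, p);
    if (std::fabs(det) < 1e-12) return -1;
    Vec3 s = sub(o, t.a);
    double u = dot(s, p) / det;
    if (u < 0 || u > 1) return -1;
    Vec3 q = cross(s, e1);
    double v = dot(d, q) / det;
    if (v < 0 || u + v > 1) return -1;
    double dist = dot(e2, q) / det;
    return dist > 1e-9 ? dist : -1;
}

struct KDNode {
    int splitAxis = -1;            // -1 marks a leaf
    double splitPos = 0;
    std::vector<Tri> tris;         // populated only at leaves
    KDNode *left = nullptr, *right = nullptr;
};

// Build by splitting at the median centroid along alternating axes.
static KDNode* build(std::vector<Tri> tris, int depth = 0) {
    KDNode* n = new KDNode();
    if (tris.size() <= 4 || depth > 16) { n->tris = std::move(tris); return n; }
    int ax = depth % 3;
    std::sort(tris.begin(), tris.end(), [ax](const Tri& p, const Tri& q) {
        return axisOf(p.centroid(), ax) < axisOf(q.centroid(), ax);
    });
    size_t mid = tris.size() / 2;
    n->splitAxis = ax;
    n->splitPos = axisOf(tris[mid].centroid(), ax);
    n->left = build({tris.begin(), tris.begin() + mid}, depth + 1);
    n->right = build({tris.begin() + mid, tris.end()}, depth + 1);
    return n;
}

// Returns the nearest hit distance, or -1. Triangles are binned by centroid
// only, so the early-out is approximate; a production tree would place
// straddling triangles in both children.
static double intersectKD(const KDNode* n, Vec3 o, Vec3 d) {
    if (n->splitAxis == -1) {
        double best = -1;
        for (const Tri& t : n->tris) {
            double h = hitTri(o, d, t);
            if (h > 0 && (best < 0 || h < best)) best = h;
        }
        return best;
    }
    double oA = axisOf(o, n->splitAxis), dA = axisOf(d, n->splitAxis);
    const KDNode* nearC = oA < n->splitPos ? n->left : n->right;
    const KDNode* farC = oA < n->splitPos ? n->right : n->left;
    double h = intersectKD(nearC, o, d);
    double tPlane = dA != 0 ? (n->splitPos - oA) / dA
                            : std::numeric_limits<double>::infinity();
    if (h > 0 && (tPlane < 0 || h < tPlane)) return h;  // far side can be skipped
    double h2 = intersectKD(farC, o, d);
    if (h < 0) return h2;
    return h2 < 0 ? h : std::min(h, h2);
}
```

Visiting the near child first is what makes the speedup possible: whenever the nearest hit lies in front of the splitting plane, the entire far subtree is skipped.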
Both refractive objects are simulating water with an index of refraction of 1.33. Notice how the straw bends as it enters the water, and how the orb flips the scenery behind it.
Refraction was implemented using the vector form of Snell’s law. A transmitted ray is calculated using the incident ray along with the surface normal of the object.
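A sketch of that calculation, with eta = n1 / n2 (e.g. 1 / 1.33 when entering water) and both input vectors assumed to be unit length; the function name and signature are illustrative, not the A5 interface:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Vector form of Snell's law: t = eta*i + (eta*cosI - cosT)*n.
// Returns false on total internal reflection, where no transmitted ray exists.
static bool refract(Vec3 i, Vec3 n, double eta, Vec3& t) {
    double cosI = -(i.x * n.x + i.y * n.y + i.z * n.z);
    double sin2T = eta * eta * (1.0 - cosI * cosI);
    if (sin2T > 1.0) return false;        // total internal reflection
    double cosT = std::sqrt(1.0 - sin2T);
    double k = eta * cosI - cosT;
    t = {eta * i.x + k * n.x, eta * i.y + k * n.y, eta * i.z + k * n.z};
    return true;
}
```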
Sphere texture mapping is trivial: the zenith and azimuth angles are mapped to the range
[0, 1], giving a (u, v) coordinate that is then looked up in the raster file and interpolated.
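The mapping and the interpolated lookup can be sketched like this. It assumes a unit sphere centred at the origin with y as the zenith axis, and a grayscale raster for brevity; the axis convention and names are assumptions, not the A5 code:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

static const double PI = std::acos(-1.0);

// Map a point on the unit sphere to (u, v) in [0, 1]^2.
static void sphereUV(double x, double y, double z, double& u, double& v) {
    double azimuth = std::atan2(z, x);                    // [-pi, pi]
    double zenith = std::acos(std::clamp(y, -1.0, 1.0));  // [0, pi]
    u = (azimuth + PI) / (2.0 * PI);
    v = zenith / PI;
}

// Bilinearly interpolate a w x h grayscale raster at (u, v).
static double bilinear(const std::vector<double>& img, int w, int h,
                       double u, double v) {
    double x = u * (w - 1), y = v * (h - 1);
    int x0 = (int)x, y0 = (int)y;
    int x1 = std::min(x0 + 1, w - 1), y1 = std::min(y0 + 1, h - 1);
    double fx = x - x0, fy = y - y0;
    double top = img[y0 * w + x0] * (1 - fx) + img[y0 * w + x1] * fx;
    double bot = img[y1 * w + x0] * (1 - fx) + img[y1 * w + x1] * fx;
    return top * (1 - fy) + bot * fy;
}
```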
The cube applies the same texture to all of its faces. The cylinder and cone divide the raster file into thirds: the top third is for the top face, the middle third is for the bottom face, and the final third is what gets “wrapped around” the primitive.
Bump mapping is very similar in implementation to texture mapping, except that instead of changing the diffuse color, the object’s normal is perturbed according to the
RGB values specified in the raster file.
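A sketch of that perturbation, treating the raster’s RGB values as a tangent-space normal map. The tangent/bitangent/normal basis vectors `t`, `b`, `n` are assumed to be unit length, and the names are illustrative:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Remap a texel's [0, 1] RGB channels to a [-1, 1] tangent-space direction,
// then rotate it into world space with the tangent/bitangent/normal basis.
static Vec3 perturbNormal(Vec3 t, Vec3 b, Vec3 n, double r, double g, double bl) {
    double tx = 2.0 * r - 1.0, ty = 2.0 * g - 1.0, tz = 2.0 * bl - 1.0;
    Vec3 p{t.x * tx + b.x * ty + n.x * tz,
           t.y * tx + b.y * ty + n.y * tz,
           t.z * tx + b.z * ty + n.z * tz};
    double len = std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
    return {p.x / len, p.y / len, p.z / len};
}
```

A “flat” texel of (0.5, 0.5, 1) leaves the geometric normal unchanged, which is why untextured regions of a normal map appear that characteristic light blue.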
Texture and bump mapping also work with reflective materials. Here the object has a reflectivity of 0.2. We can see the shine of the light as it crosses the cube. Since the normals of the cube face have been adjusted by the bump map, we get some cool patterns on the bricks.
Constructive solid geometry
CSG was implemented by keeping a list of all intersections, and applying set operations on the ranges. Union is trivial. Subtraction works by subtracting the second range from the first. If the object was “sliced”, the intersection’s normal gets reversed and its material is set to the second object’s. This is how the colors of the materials below are set. Intersection is very similar to subtraction.
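The range bookkeeping can be sketched as below, simplified to convex objects whose hits along a ray form a single [tin, tout] interval (the real implementation works on lists of intervals); the struct and function names are illustrative:

```cpp
#include <algorithm>

// One span of a ray that lies inside an object.
struct Range { double tin, tout; bool valid; };

// A AND B: the ray is inside both objects only where the intervals overlap.
static Range csgIntersect(Range a, Range b) {
    Range r{std::max(a.tin, b.tin), std::min(a.tout, b.tout), false};
    r.valid = a.valid && b.valid && r.tin < r.tout;
    return r;
}

// A minus B, returning the nearest surviving piece. When the ray enters
// through B's far wall, the visible surface belongs to B: this is the
// "sliced" case where the normal is reversed and B's material is used.
static Range csgSubtract(Range a, Range b) {
    if (!a.valid) return {0, 0, false};
    if (!b.valid || b.tout <= a.tin || b.tin >= a.tout) return a;  // no overlap
    if (b.tin > a.tin) return {a.tin, b.tin, true};     // front piece of A survives
    if (b.tout < a.tout) return {b.tout, a.tout, true}; // sliced: hit is on B
    return {0, 0, false};                               // B swallows A entirely
}
```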
CSG also works for meshes.
Here, two teapots have been moved into each other and intersected.
The two existing primitives were the sphere and cube. The images below display a diffuse material on the left, followed by 4 reflective materials with increasing reflectivity, ending at a reflectivity of 1 on the far right.
The two new primitives are the cylinder and cone.
Depth of field
Depth of field was implemented by essentially giving the camera an aperture: multiple samples are taken at varying positions on the lens and then averaged. The following images show a widening aperture, starting at 0 and increasing to 0.05, 0.1 and 0.15.
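One such sample can be sketched like this: jitter the ray origin across the aperture, then re-aim at the fixed focal point so that geometry at the focal distance stays sharp. The parameter names (`aperture`, `focalDist`) are illustrative, not A5’s exact names:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };
struct Ray { Vec3 o, d; };

// o/d: the original (pinhole) ray; right/up: the camera's lens basis;
// rx, ry in [-0.5, 0.5] pick the point on the lens for this sample.
static Ray dofRay(Vec3 o, Vec3 d, Vec3 right, Vec3 up,
                  double aperture, double focalDist, double rx, double ry) {
    // The focal point stays fixed across all samples of this pixel.
    Vec3 focal{o.x + d.x * focalDist, o.y + d.y * focalDist, o.z + d.z * focalDist};
    Vec3 o2{o.x + (right.x * rx + up.x * ry) * aperture,
            o.y + (right.y * rx + up.y * ry) * aperture,
            o.z + (right.z * rx + up.z * ry) * aperture};
    Vec3 dir{focal.x - o2.x, focal.y - o2.y, focal.z - o2.z};
    double len = std::sqrt(dir.x * dir.x + dir.y * dir.y + dir.z * dir.z);
    return {o2, {dir.x / len, dir.y / len, dir.z / len}};
}
```

With aperture 0 every sample degenerates to the original ray, which is why the first image in the series matches the pinhole render.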
Soft shadows were implemented similarly to depth of field. An area light was introduced so that each shaded point can sample multiple points on the light. By averaging all the samples, we achieve softer shadows. The following images show a point light, followed by soft shadows with 1, 4, 9, 16, 25, 64 and 144 samples.
Here are the above images in a video format.
Anti-aliasing using supersampling (with jittering)
Anti-aliasing was implemented using the supersampling technique, by increasing the number of rays cast per pixel. Below on the left is the original image with 1 ray per pixel, and on the right we have an image produced by casting 4 rays per pixel and averaging their results. We can see that the image on the right appears smoother.
This does come with a cost: the original image took 9.377s to render while the anti-aliased image took 35.393s.
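The per-pixel loop can be sketched like this, with the pixel divided into an n x n grid and one ray jittered inside each cell. `shade` stands in for tracing a ray through image point (x, y); the names are illustrative:

```cpp
#include <functional>
#include <random>

// Average n*n jittered samples of `shade` across the pixel (px, py).
static double renderPixel(int px, int py, int n, std::mt19937& rng,
                          const std::function<double(double, double)>& shade) {
    std::uniform_real_distribution<double> jitter(0.0, 1.0);
    double sum = 0.0;
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) {
            double x = px + (i + jitter(rng)) / n;  // random point in cell (i, j)
            double y = py + (j + jitter(rng)) / n;
            sum += shade(x, y);
        }
    return sum / (n * n);
}
```

Jittering within each grid cell keeps the samples stratified while breaking up the regular pattern that a fixed grid would alias against.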
Extra: Adaptive anti-aliasing
For an extra objective, I implemented adaptive anti-aliasing. I found that casting an exponentially larger number of rays for every pixel was too impractical. Adaptive anti-aliasing instead works by iteratively figuring out which pixels are “bad” pixels, and re-sampling only those.
On the left we have the fully anti-aliased image with 4 rays per pixel, and on the right we have an adaptively anti-aliased image that used up to 6 samples for some pixels. The one on the left took 35.393s while the one on the right took 12.711s.
The two images do differ, as highlighted by the pixels in yellow. However, at first glance they produce roughly the same image.
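One pass of the “bad pixel” detection can be sketched as follows, run on a grayscale image for brevity (the real test would compare RGB channels); the threshold and names are illustrative:

```cpp
#include <cmath>
#include <vector>

// Flag a pixel for re-sampling when its value differs from any 4-neighbour
// by more than `thresh`. Smooth regions are left at 1 sample.
static std::vector<bool> flagBadPixels(const std::vector<double>& img,
                                       int w, int h, double thresh) {
    std::vector<bool> bad(img.size(), false);
    const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            for (int k = 0; k < 4; ++k) {
                int nx = x + dx[k], ny = y + dy[k];
                if (nx < 0 || nx >= w || ny < 0 || ny >= h) continue;
                if (std::fabs(img[y * w + x] - img[ny * w + nx]) > thresh)
                    bad[y * w + x] = true;
            }
    return bad;
}
```

Because sharp value changes cluster along geometric edges, this is what concentrates the extra samples exactly where aliasing is visible.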
This debug image gives a better idea of how the adaptive anti-aliasing works. Notice how the anti-aliasing is mostly done on edges, the pixels that need it the most. The following legend shows which color corresponds to how many samples a pixel received:
- Black = 1 sample
- White = 2 samples
- Yellow = 3 samples
- Blue = 4 samples
- Green = 5 samples
- Purple = 6 samples
- Red = 7+ samples
Extra: Interpolation between vertex normals
When reading in an
.obj file, the
Mesh class can now consume vertex normals. It then uses barycentric coordinates to interpolate these normals, giving the normal at an arbitrary point on a triangle. This produces smoother-looking meshes.
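The interpolation can be sketched like this, using the standard dot-product formulation of barycentric coordinates; the function name and signature are illustrative, not the actual Mesh interface:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Barycentric coordinates (u, v, w) of hit point p in triangle (a, b, c),
// then a normalized blend of the three vertex normals na, nb, nc.
static Vec3 interpNormal(Vec3 p, Vec3 a, Vec3 b, Vec3 c,
                         Vec3 na, Vec3 nb, Vec3 nc) {
    Vec3 v0 = sub(b, a), v1 = sub(c, a), v2 = sub(p, a);
    double d00 = dot(v0, v0), d01 = dot(v0, v1), d11 = dot(v1, v1);
    double d20 = dot(v2, v0), d21 = dot(v2, v1);
    double denom = d00 * d11 - d01 * d01;
    double v = (d11 * d20 - d01 * d21) / denom;
    double w = (d00 * d21 - d01 * d20) / denom;
    double u = 1.0 - v - w;
    Vec3 n{u * na.x + v * nb.x + w * nc.x,
           u * na.y + v * nb.y + w * nc.y,
           u * na.z + v * nb.z + w * nc.z};
    double len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return {n.x / len, n.y / len, n.z / len};
}
```

At a vertex the blend collapses to that vertex’s normal, and across a face it varies continuously, which is what hides the faceted look of the raw triangle normals.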
We can also see here that this works well with refractive materials.
I present to you CSG Restaurant. You have 3 choices: an assortment of chopped-up primitives, a mug and a magic glass, and finally a cow with a chunk cut out of it and a die.
I was not able to render all my features at once, and so the following is a 256x256 image with 16 shadow samples.
Textures obtained from https://opengameart.org/content/50-free-textures-4-normalmaps
Models obtained from https://www.cgtrader.com/items/635604