In this project, I implemented a physically based renderer using the path-tracing algorithms discussed in class. The end product can render three-dimensional scenes (in this case, .dae files). This was a computationally heavy project because the renderer must compute the illuminance at every ray-surface intersection. I implemented methods that generate camera rays and methods that detect ray-scene intersections, along with optimization and acceleration methods that speed up ray generation and reduce the computation required for each pixel. Even with these methods in use, the renders took a significant amount of time. This was definitely one of the more difficult projects of the class: the methods are tightly interconnected, which made bugs difficult to isolate and fix.
I knew that rendering was a time-consuming process on consumer software like Autodesk Maya. However, having implemented and waited for my project to render, I have a newfound respect for the speed of the renderer in consumer software.
Raytracing within a pixel: The method is given a pixel position (x, y). Our goal is to cast rays through the given pixel and compute the radiance they carry. If only 1 sample is needed per pixel, I generate a camera ray through the middle of the pixel and compute the pixel's radiance using the trace_ray method. If more than one sample is needed per pixel, I track the sum of the radiances of all the rays I create, then return their average as the pixel's radiance. To generate all ns_aa rays, I use a for-loop that generates a random Vector2D in [0, 1]^2 and adds it to the input (x, y); this represents a random sample within the pixel. Next, I generate a ray through the randomly sampled point and call trace_ray, which computes that ray's radiance, and add the result to my running sum. After the for-loop, I divide by the number of samples taken (in Part 1, this is ns_aa). The returned Spectrum encodes the average radiance of the pixel.
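The per-pixel sampling loop described above can be sketched as follows. This is a minimal illustration, not the project's actual API: trace_ray_stub and estimate_pixel_radiance are hypothetical names, and the stub returns a constant radiance so the averaging logic is easy to check.

```cpp
#include <cassert>
#include <cstdlib>

// Hypothetical stand-in for trace_ray: returns the radiance carried by a
// ray through normalized image coordinates (sx, sy). It is a constant here
// so the averaging logic can be verified deterministically.
static double trace_ray_stub(double sx, double sy) {
    (void)sx; (void)sy;
    return 0.5;
}

// Sketch of the per-pixel loop (names are illustrative). For ns_aa == 1 a
// single ray goes through the pixel center; otherwise ns_aa jittered rays
// are generated and their radiances averaged.
double estimate_pixel_radiance(int x, int y, int ns_aa, int w, int h) {
    if (ns_aa == 1) {
        return trace_ray_stub((x + 0.5) / w, (y + 0.5) / h);
    }
    double sum = 0.0;
    for (int i = 0; i < ns_aa; ++i) {
        double rx = (double)rand() / RAND_MAX;   // random offset in [0, 1)
        double ry = (double)rand() / RAND_MAX;
        sum += trace_ray_stub((x + rx) / w, (y + ry) / h);
    }
    return sum / ns_aa;   // average radiance over all samples
}
```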
Generating a camera ray: This method is given a position within a pixel to cast a ray through. To calculate the ray's direction, I first create a vector that maps the input position to a point between (0,0) and (1,1); this vector represents the camera ray's direction in camera space. To convert the vector from camera space to world space, I multiply it by the 3x3 camera-to-world matrix. Next, I generate the world-space ray, whose origin is the camera's position and whose direction is the previously calculated Vector3D. I then set the ray's min_t and max_t to nClip and fClip respectively; these values represent the "start" and "end" of the ray.
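A sketch of this direction calculation, under assumed conventions (camera looking down -z, field-of-view half-angle tangents precomputed, row-major camera-to-world matrix; all names are illustrative):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

// Map a sensor-plane sample (sx, sy) in [0,1]^2 to a normalized world-space
// ray direction. tanHalfH/tanHalfV are tan(hFov/2) and tan(vFov/2); c2w is
// the 3x3 camera-to-world rotation.
Vec3 camera_ray_dir(double sx, double sy, double tanHalfH, double tanHalfV,
                    const double c2w[3][3]) {
    // Point on the virtual sensor plane at z = -1 in camera space.
    Vec3 d = { (2.0 * sx - 1.0) * tanHalfH,
               (2.0 * sy - 1.0) * tanHalfV,
               -1.0 };
    // Rotate into world space.
    Vec3 w = { c2w[0][0]*d.x + c2w[0][1]*d.y + c2w[0][2]*d.z,
               c2w[1][0]*d.x + c2w[1][1]*d.y + c2w[1][2]*d.z,
               c2w[2][0]*d.x + c2w[2][1]*d.y + c2w[2][2]*d.z };
    // Normalize so t measures distance along the ray.
    double len = std::sqrt(w.x*w.x + w.y*w.y + w.z*w.z);
    return { w.x / len, w.y / len, w.z / len };
}
```

With an identity camera-to-world matrix, a sample through the pixel center (0.5, 0.5) should yield the straight-ahead direction (0, 0, -1).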
Triangle Intersection: The triangle intersection method is given a ray (and, depending on the version of Triangle::intersect, an Intersection data structure). Using the ray and the 3 vertices of the triangle, the goal is to determine whether or not the ray intersects the triangle. To do this, I used the Moller-Trumbore algorithm presented in lecture, which yields the time of intersection t and the barycentric coordinates b1, b2, b3. The intersection is valid only when t lies within the ray's [min_t, max_t] interval (set in the previous method) and all three barycentric coordinates lie in [0, 1]. On a valid hit, I update the ray's max_t to t so that fewer calculations occur later (primitives further away are skipped, since a closer intersection already exists). The Intersection data structure is also filled in: the surface normal is interpolated from the vertex normals using the barycentric coordinates, b3 * n1 + b1 * n2 + b2 * n3; the intersection's primitive is set to the current triangle; and the intersection's bsdf is set to the primitive's BSDF (the bidirectional scattering distribution function, which describes how incident light is scattered at the surface).
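The Moller-Trumbore test described above can be sketched as follows (variable names follow the algorithm's usual presentation rather than the project's exact code; the epsilon-based parallel check is one common choice):

```cpp
#include <cassert>
#include <cmath>

struct V3 { double x, y, z; };
static V3 sub(V3 a, V3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static V3 cross(V3 a, V3 b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
static double dot(V3 a, V3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Moller-Trumbore ray-triangle intersection. On a hit, *t receives the
// intersection time and (*b1, *b2) two barycentric coordinates; the third
// is 1 - b1 - b2.
bool moller_trumbore(V3 o, V3 d, V3 p0, V3 p1, V3 p2,
                     double min_t, double max_t,
                     double* t, double* b1, double* b2) {
    V3 e1 = sub(p1, p0), e2 = sub(p2, p0);
    V3 pvec = cross(d, e2);
    double det = dot(e1, pvec);
    if (std::fabs(det) < 1e-12) return false;    // ray parallel to triangle
    double inv = 1.0 / det;
    V3 tvec = sub(o, p0);
    double u = dot(tvec, pvec) * inv;
    if (u < 0.0 || u > 1.0) return false;        // outside barycentric range
    V3 qvec = cross(tvec, e1);
    double v = dot(d, qvec) * inv;
    if (v < 0.0 || u + v > 1.0) return false;
    double tt = dot(e2, qvec) * inv;
    if (tt < min_t || tt > max_t) return false;  // outside the ray's valid span
    *t = tt; *b1 = u; *b2 = v;
    return true;
}
```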
Sphere Intersection: For the sphere intersection method, I used the ray-sphere intersection algorithm presented in class, which solves a quadratic equation for the time t of intersection. There are 3 cases, based on the discriminant of the quadratic: if it is negative, the ray misses the sphere entirely; if it is zero, the ray grazes the sphere tangentially at a single t; if it is positive, there are two solutions, and I set t1, t2 to the two solutions (the nearer and farther intersection times respectively). If a valid t lies within the ray's [min_t, max_t] interval, I update the intersection record and return true.
Following images are all rendered with:
./pathtracer -t 8 -s 1024 -l 4 -m 6 -r 480 360 ../dae/sky/*.dae
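The three discriminant cases above can be sketched as follows (a self-contained illustration with hypothetical names, not the project's exact signature):

```cpp
#include <cassert>
#include <cmath>

struct V3s { double x, y, z; };
static double dot3(V3s a, V3s b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Quadratic ray-sphere test. On a hit, *t_hit receives the nearest
// intersection time that lies within [min_t, max_t].
bool sphere_intersect(V3s o, V3s d, V3s center, double r,
                      double min_t, double max_t, double* t_hit) {
    V3s oc = { o.x - center.x, o.y - center.y, o.z - center.z };
    double a = dot3(d, d);
    double b = 2.0 * dot3(oc, d);
    double c = dot3(oc, oc) - r * r;
    double disc = b * b - 4.0 * a * c;
    if (disc < 0.0) return false;           // case 1: ray misses the sphere
    double s = std::sqrt(disc);             // disc == 0: tangent graze (t1 == t2)
    double t1 = (-b - s) / (2.0 * a);       // nearer solution
    double t2 = (-b + s) / (2.0 * a);       // farther solution
    if (t1 >= min_t && t1 <= max_t) { *t_hit = t1; return true; }
    if (t2 >= min_t && t2 <= max_t) { *t_hit = t2; return true; }
    return false;                           // both roots outside [min_t, max_t]
}
```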
Most notable mistake: I did not realize I had a bug until Part 3, when my renders came out very grainy. I debugged those sections for a very long time before deciding to check Part 1. Even then, I assumed that my method (averaging ns_aa sample points and generating a single ray per method call) was equivalent to the spec's suggested method (generating a ray for each individual sample and averaging after ns_aa rays were generated). After switching to the spec's approach, my renders were much slower but very smooth and clean.
Overview: A bounding volume hierarchy is a ray-tracing acceleration structure that decreases the number of ray-intersection tests we need to perform. The algorithm computes a bounding box for each primitive. If a ray does not intersect a BVHNode's bounding box, it cannot intersect any primitive within that box, so the entire subtree can be skipped. If it does, we continue the ray-intersection tests inside that box.
Constructing the BVH: To use this algorithm, we must first construct the BVH tree, which is made up of BVHNodes. The inputs to construct_BVH are a list of all the primitives and the termination value, max_leaf_size, which determines when we stop splitting the primitives (i.e., when we have reached a leaf node). First, I calculate the bounding box of every primitive and expand a running bounding box with each one. Next, I create a new root BVHNode with the combined bounding box of all the primitives. If the list of primitives is smaller than the termination value, the new node is a leaf; otherwise, it is an internal node with children. To decide which child a primitive belongs to, I compute the split point by halving the bounding box along its longest axis. Then I loop through all the primitives and compute each one's centroid: if the centroid lies to the left of the split point, I add the primitive to the left child's list; otherwise, to the right child's. If either child's list ends up empty, I move half of the other child's primitives into it; skipping this step could cause infinite recursion, since the recursion would never shrink the list. Each child is then constructed with a recursive call, and once both children are built, the original BVHNode is returned.
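The construction above can be sketched as follows, simplified so a "primitive" is represented by just its bounding box (struct and function names are illustrative, not the project's):

```cpp
#include <cassert>
#include <vector>
#include <memory>
#include <algorithm>

struct BBox {
    double lo[3] = { 1e30,  1e30,  1e30};
    double hi[3] = {-1e30, -1e30, -1e30};
    void expand(const BBox& b) {
        for (int a = 0; a < 3; ++a) {
            lo[a] = std::min(lo[a], b.lo[a]);
            hi[a] = std::max(hi[a], b.hi[a]);
        }
    }
    double centroid(int a) const { return 0.5 * (lo[a] + hi[a]); }
};

struct Node {
    BBox bb;
    std::vector<BBox> prims;       // non-empty only at leaves
    std::unique_ptr<Node> l, r;
};

// Midpoint split along the longest axis; if one side ends up empty, move
// half of the other side's primitives over so the recursion always shrinks.
std::unique_ptr<Node> construct_BVH(std::vector<BBox> prims, size_t max_leaf_size) {
    auto node = std::make_unique<Node>();
    for (const BBox& p : prims) node->bb.expand(p);
    if (prims.size() <= max_leaf_size) {       // leaf: keep the primitives
        node->prims = std::move(prims);
        return node;
    }
    int axis = 0;                              // longest axis of this node's box
    for (int a = 1; a < 3; ++a)
        if (node->bb.hi[a] - node->bb.lo[a] > node->bb.hi[axis] - node->bb.lo[axis])
            axis = a;
    double split = node->bb.centroid(axis);
    std::vector<BBox> left, right;
    for (const BBox& p : prims)
        (p.centroid(axis) < split ? left : right).push_back(p);
    // Rebalance if all centroids fell on one side (avoids infinite recursion).
    if (left.empty()) {
        left.assign(right.begin() + right.size() / 2, right.end());
        right.resize(right.size() / 2);
    } else if (right.empty()) {
        right.assign(left.begin() + left.size() / 2, left.end());
        left.resize(left.size() / 2);
    }
    node->l = construct_BVH(std::move(left), max_leaf_size);
    node->r = construct_BVH(std::move(right), max_leaf_size);
    return node;
}
```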
BVH Intersect: These methods are given a ray, a node, and (depending on which version is called) an Intersection data structure that tracks the closest hit. There are several cases: if the ray misses the node's bounding box entirely, we return false immediately. If the ray hits the box but the hit interval does not overlap the ray's min_t, max_t values, the calculation can likewise terminate and we return false, since the times do not intersect. If the node is a leaf, I test the ray against each of its primitives: in the method without an Intersection data structure, I return true as soon as any primitive is hit (this method short-circuits); in the method with an Intersection, I test every primitive, letting the primitive's own intersect method update the Intersection to the closest hit, and return the result after the loop. For internal nodes, I recurse into both children.
Following images are all rendered with either:
./pathtracer -t 8 -s 1024 -l 4 -m 6 -r 960 720 ../dae/sky/*.dae
./pathtracer -t 8 -s 1024 -l 4 -m 6 -r 480 360 ../dae/sky/*.dae
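The bounding-box overlap test at the heart of the traversal above is the standard slab test; a minimal sketch (illustrative names; zero direction components rely on IEEE infinities behaving as expected):

```cpp
#include <cassert>
#include <algorithm>

// Slab test: does the ray o + t*d enter the axis-aligned box [lo, hi]
// somewhere within the interval [min_t, max_t]?
bool bbox_hit(const double lo[3], const double hi[3],
              const double o[3], const double d[3],
              double min_t, double max_t) {
    double t0 = min_t, t1 = max_t;
    for (int a = 0; a < 3; ++a) {
        double inv = 1.0 / d[a];            // +/-inf when d[a] == 0 (IEEE)
        double ta = (lo[a] - o[a]) * inv;
        double tb = (hi[a] - o[a]) * inv;
        if (ta > tb) std::swap(ta, tb);
        t0 = std::max(t0, ta);              // latest entry across all slabs
        t1 = std::min(t1, tb);              // earliest exit across all slabs
        if (t0 > t1) return false;          // intervals no longer overlap
    }
    return true;
}
```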
Estimating Direct Lighting: This method is given a ray and an intersection object. The goal is to compute the intersection point's outgoing radiance due to the scene's lights. I start by looping over all the lights in the scene. If the light is a delta light (meaning a point light source), only 1 sample is generated; otherwise, ns_area_light samples are taken. For each light, I loop sample-count times, and in each iteration I compute the incoming radiance, generate a shadow ray, and test the shadow ray against the BVH. If the sampled light point is behind the surface, I skip that sample, as it cannot contribute to the final render. If the shadow ray does not intersect the BVH, the sample's incoming radiance contributes to the point's outgoing radiance: I multiply together the incoming radiance, the cosine of the angle between the incoming direction and the surface normal, and the BSDF value (reflectance/pi, since the reflectance is spread over a hemisphere), and divide by the probability density of taking a sample in that direction. Dividing by the pdf is important because it normalizes the samples: directions with a high probability of being sampled are sampled more often, and dividing by the pdf removes that bias. After all the samples have been taken, I average the current light's contribution and add it to the running sum of the point's radiance from direct lighting, then iterate to the next light in the scene. This is computed for all scene lights.
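The inner estimator above can be sketched for a single light (a scalar toy, not the project's API; LightSample bundles the quantities each iteration would compute, and occlusion stands in for the shadow-ray BVH test):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// One sample toward a light: incoming radiance, cosine term, sample pdf,
// and flags for the two early-out cases described above.
struct LightSample {
    double radiance;
    double cos_theta;
    double pdf;
    bool behind;     // sampled light point is behind the surface
    bool occluded;   // shadow ray hit the scene before the light
};

// Monte Carlo estimate of direct lighting from one light, for a diffuse
// BSDF whose value is reflectance / pi.
double estimate_direct(const std::vector<LightSample>& samples,
                       double bsdf_reflectance) {
    double sum = 0.0;
    for (const LightSample& s : samples) {
        if (s.behind) continue;     // cannot contribute to the render
        if (s.occluded) continue;   // light is shadowed at this point
        // f * L * cos(theta) / pdf -- dividing by the pdf de-biases the estimate
        sum += (bsdf_reflectance / M_PI) * s.radiance * s.cos_theta / s.pdf;
    }
    return sum / samples.size();    // average over the light's samples
}
```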
Following images are all rendered with:
./pathtracer -t 8 -s 1024 -l 32 -m 6 -r 480 480 ../dae/sky/CBspheres_lambertian.dae
./pathtracer -t 8 -s 1024 -l 32 -m 6 -r 480 480 ../dae/sky/dragon.dae
./pathtracer -t 8 -s 1024 -l 32 -m 6 -r 480 480 ../dae/keenan/banana.dae
Rendering with a different number of light rays and one sample per pixel
Notable mistakes: I use a while loop to iterate through all the samples for each light. However, in the case where the sampled light point lies behind the surface, I was not decrementing the counter used in the while loop. This caused an infinite loop that led to the render process being killed around 22%. It took forever to debug, since I was not sure whether the bug was in a previous section or this one.
Overview: Indirect illumination (also known as global illumination) is the illumination caused by bouncing light rays. With direct illumination, the light ray terminates at the point of intersection. With indirect illumination, however, a light ray that hits a surface can reflect off it and contribute to the incoming radiance of other surfaces.
Estimating Indirect Lighting: The method is given an input ray and an Intersection object that encodes information about the ray-primitive intersection. First, I take a sample from the surface BSDF at the intersection point, which gives the reflectance along a sampled direction. The lower its illuminance, the more likely the ray is to terminate; in the real world, a light ray carries less energy after each bounce (i.e., each ray-primitive intersection). To model this, I derive a probability from the illuminance value and use it as the Russian roulette termination probability. If the Russian roulette coin flip says to terminate, I return an empty Spectrum (the data structure that encodes radiance). Otherwise, the ray must bounce to its next location, so I trace it recursively: I generate a new ray whose origin is the intersection point and whose direction is the sampled direction transformed from object space to world space, and I set its depth to one less than the current ray's depth. This prevents rays from bouncing forever; in the statistically unlikely case that Russian roulette never terminates a ray, it stops bouncing when the depth reaches 0. The recursive call yields the approximate incoming radiance at the intersection point. I then compute the outgoing radiance by multiplying this radiance, the surface BSDF value at the intersection point, and the cosine of the angle between the normal vector and the incoming light direction, normalized by the pdf (see above for an explanation) and by the ray's continuation probability. This normalized result is the outgoing radiance of the intersection point and is returned.
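The Russian roulette recursion above can be sketched as a scalar toy. Everything here is illustrative: reflectance plays the role of the sampled BSDF value, coin_flip of the roulette test, and the recursion mirrors the trace of the bounced ray; dividing by both the pdf and the continuation probability keeps the estimate unbiased.

```cpp
#include <cassert>

// Toy scalar sketch of one indirect bounce (hypothetical names throughout).
// A dimmer reflectance means a higher termination probability. The depth
// parameter caps the recursion just like the ray's depth field.
double indirect_radiance(double emitted, double reflectance, double cos_theta,
                         double pdf, int depth,
                         bool (*coin_flip)(double p_terminate)) {
    double p_terminate = 1.0 - reflectance;     // dimmer -> more likely to stop
    if (depth <= 0) return 0.0;                 // hard cap on bounces
    if (coin_flip(p_terminate)) return 0.0;     // Russian roulette terminated
    // Radiance arriving from the next bounce (direct part plus deeper bounces).
    double L_next = emitted + indirect_radiance(emitted, reflectance, cos_theta,
                                                pdf, depth - 1, coin_flip);
    // Weight by BSDF and cosine; divide by pdf and continuation probability.
    return L_next * reflectance * cos_theta / (pdf * (1.0 - p_terminate));
}
```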
Global illumination renders
max_ray_depth = 0: The light does not bounce upon ray-scene intersection.
max_ray_depth = 1: The light ray can bounce once upon ray-scene intersection.
max_ray_depth = 2: The light ray can bounce twice upon ray-scene intersection. Note that the render is brighter due to the indirect illumination.
max_ray_depth = 3: The light ray can bounce three times upon ray-scene intersection. The render is only slightly (not very noticeably) brighter.
max_ray_depth = 100: The light ray can bounce up to 100 times upon ray-scene intersection. There is no noticeable difference between max_ray_depth = 3 and max_ray_depth = 100: with each bounce, the ray becomes more likely to terminate, so it is unlikely any ray reached 100 bounces.
Comparing rendered views with various sample-per-pixel rates and 4 light rays
Overview: Adaptive sampling is an optimization technique that allows us to terminate sampling early when pixel values have approximately converged. This allows computational power to be spent on sampling the trickier portions of the image. For example, we do not need to repeatedly sample the direct light source but may need to sample the nooks and crannies of a model more.
Adaptive Sampling: This method was implemented in Part 1 of the project. To implement adaptive sampling and allow for the creation of the sample heat map, I write to sampleCountBuffer, setting each pixel's value to the number of samples generated before the pixel converged. For each ray cast, I compute the ray's radiance and add it to a running sum, s1; I also add the square of the radiance to another running sum, s2. Every samplesPerBatch samples, I test whether the samples' variance is low: a low variance indicates that enough samples have been generated for the pixel to converge. If convergence is detected, I stop taking samples by breaking out of the loop; otherwise, we continue to generate rays and take more samples. Finally, I average the running Spectrum sum by the number of samples it actually took to converge (as opposed to always ns_aa, as in Part 1).
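The convergence test above can be sketched as follows, using the 95% confidence-interval criterion from the CS184-style spec this writeup appears to follow (names like maxTolerance are assumptions; the pixel is considered converged when the interval half-width falls below a tolerance times the mean):

```cpp
#include <cassert>
#include <cmath>

// Given s1 = sum of the n samples so far and s2 = sum of their squares,
// declare convergence when 1.96 * sigma / sqrt(n) <= maxTolerance * mu.
bool has_converged(double s1, double s2, int n, double maxTolerance) {
    double mu  = s1 / n;                           // sample mean
    double var = (s2 - s1 * s1 / n) / (n - 1);     // sample variance
    double I   = 1.96 * std::sqrt(var / n);        // 95% confidence half-width
    return I <= maxTolerance * mu;
}
```

Keeping only the two running sums means the batch test is O(1) per check, with no need to store individual samples.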
All rendered with 1 sample/light and max_ray_depth = 5. In the images below, blue means that convergence occurred with few samples, while red marks areas that required the full ns_aa samples.