The goal of this project is to extend my raytracer with the ability to render objects lit by an environment light source, render objects with more complicated materials (like glass, mirror, and microfacet), and simulate a camera lens to show varying depths of field. This project is built upon my previously implemented raytracer (Project 3-1).
The goal of this part was to render objects with mirror and glass material in a physically accurate way.
Implementation: Mirror materials exhibit perfect specular reflection, meaning the angle at which a light ray enters equals the angle at which it exits. The material's appearance depends heavily on global illumination/indirect lighting, as the reflected radiance only converges after multiple light ray bounces. Because reflection only changes the ray direction, the returned spectrum is not diminished by any Lambertian falloff.
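As a sketch of how this looks in code (assuming the Project 3-1 skeleton's Vector3D/Spectrum types, its abs_cos_theta helper, and its convention that BSDF math happens in a local frame where the surface normal is (0, 0, 1)):

// Reflect wo about the surface normal. In the local shading frame the
// normal is (0, 0, 1), so reflecting simply negates the x and y components.
void MirrorBSDF::reflect(const Vector3D& wo, Vector3D* wi) {
  *wi = Vector3D(-wo.x, -wo.y, wo.z);
}

// The reflected direction is chosen with probability 1, and the returned
// spectrum is divided by the cosine term to cancel the cos(theta) factor
// applied later in the rendering equation -- a perfect mirror has no
// Lambertian falloff.
Spectrum MirrorBSDF::sample_f(const Vector3D& wo, Vector3D* wi, float* pdf) {
  reflect(wo, wi);
  *pdf = 1.0f;
  return reflectance / abs_cos_theta(*wi);
}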
Implementation: Glass materials both reflect and refract. Refraction occurs when light rays change direction as they move between different mediums (e.g., glass to air or air to glass). This happens because the two mediums have different indices of refraction, causing the velocity of the light ray to change. For glass, it is entirely possible for a light ray to experience total internal reflection, where the ray reflects off the boundary and stays inside the medium (yielding no transmitted light). The refracted direction can be calculated with Snell's law. In our case, light rays travel from air to glass or glass to air; air has an index of refraction of 1.0, and glass's index of refraction is defined by the material. If total internal reflection does not occur, the glass material both refracts and reflects. The ratio of reflection to refraction varies with the angle of incidence, so I use Schlick's approximation to calculate the probability of reflection and sample randomly to determine whether the material reflects or refracts at that point.
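A minimal sketch of the refraction and Schlick logic, under the same local-frame assumption (wo.z is the cosine of the angle with the normal); the helper names here are mine:

// Refract wo with Snell's law. Returns false on total internal reflection.
// `ior` is the glass's index of refraction; air is taken to be 1.0.
bool refract(const Vector3D& wo, Vector3D* wi, float ior) {
  bool entering = wo.z > 0;                 // air -> glass if true
  float eta = entering ? 1.0f / ior : ior;  // ratio of old IOR to new IOR
  float disc = 1.0f - eta * eta * (1.0f - wo.z * wo.z);
  if (disc < 0) return false;               // total internal reflection
  *wi = Vector3D(-eta * wo.x, -eta * wo.y,
                 (entering ? -1.0f : 1.0f) * sqrt(disc));
  return true;
}

// Schlick's approximation of the reflection probability R.
float schlick(float cos_theta, float ior) {
  float r0 = (1.0f - ior) / (1.0f + ior);
  r0 *= r0;
  return r0 + (1.0f - r0) * powf(1.0f - fabsf(cos_theta), 5.0f);
}

sample_f then flips a coin: with probability R it reflects (scaling the reflectance by R, with pdf R), and otherwise it refracts (scaling the transmittance by 1 − R and by eta² to account for radiance compression, with pdf 1 − R).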
All renders below were made with 1024 samples/pixel and 4 samples/light.
Figure 1.0: With a max ray depth of 0, light is not permitted to bounce. As such, neither reflection nor refraction can occur. Because both mirror and glass materials depend on reflection and refraction, the spheres appear black.
Figure 1.1: With a max ray depth of 1, light is permitted to bounce once. Notice that the two spheres look similar in this case: with only one bounce, the glass sphere shows only surface reflection, since refracted rays must both enter and exit the sphere (at least two bounces) before they can carry light back to the camera. The mirror ball likewise shows one bounce of reflected light. Note that the ceiling's reflection on the mirror ball is black, as it takes at least two light bounces for the ceiling's color to appear there.
Figure 1.2: With a max ray depth of 2, light rays can bounce at most twice. Note that the reflection of the ceiling in the mirror sphere is much lighter and better represents its true color. The bottom of the glass ball is also much brighter: with two bounces, rays can now enter and exit the sphere, so refracted light passes through the glass.
Figure 1.3: With a max ray depth of 3, light rays can bounce up to three times. Note that the mirror ball has become slightly brighter, and the reflection of the glass sphere within the mirror sphere has become more purple, picking up the purple wall. The ground around the glass sphere also collects irradiance from light rays that exit the glass sphere, producing the white ring at the edge of its shadow. Note that the reflection of the glass sphere (on the mirror ball) does not yet have this white halo in its shadow.
Figure 1.4: With a max ray depth of 4, light rays can bounce up to four times. Note that the mirror ball is slightly brighter. Light rays can now exit the glass ball and hit the right wall, causing the bright spot there. The glass ball's reflection in the mirror ball now shows the white halo in its shadow.
Figure 1.5: With a max ray depth of 5, light rays can bounce up to five times. This render looks very similar to the image with max ray depth of 4. The glass ball's reflection in the mirror ball is slightly brighter.
Figure 1.6: With a max ray depth of 100, light rays can bounce up to 100 times. Note that the top of the glass sphere now shows a faint white highlight from the reflected light source. The shadows are also smoother and less harsh in this render.
In this section, I rendered microfacet materials. Microfacet materials represent surfaces composed of many small micro-mirrors, each reflecting light in its own specular direction. For example, the sheen of the ocean seen from the air can be attributed to microfacets: each wave acts as its own micro-mirror, and together they create specular highlights despite the ocean's rough surface.
Implementation: I implemented the microfacet BRDF using the following model:
f(ωo, ωi) = F(ωi) * G(ωo, ωi) * D(h) / (4 * (n·ωo) * (n·ωi))

where h is the half vector between ωi and ωo, F(ωi) is the Fresnel term, G(ωo, ωi) is the shadowing-masking term, and D(h) is the normal distribution function (NDF). I used the Beckmann NDF,

D(h) = exp(−tan²(θh)/α²) / (π * α² * cos⁴(θh))

where θh is the angle between h and the macro surface normal n, and α is the surface roughness.
High α values indicate a rougher material and can lead to a matte appearance. Low α values indicate a smoother material and can lead to a shinier surface.
Figure 2.1.0 has the highest roughness (α = 0.5), and the dragon appears matte with some sheen. The image is also the least noisy of the four displayed. Figure 2.1.1 has α = 0.25. Notice that the dragon appears slightly darker and is affected more by global illumination when compared to Figure 2.1.0; the global illumination can be seen in the purple tint on screen-right and the red tint on screen-left (by the dragon's belly). Figure 2.1.2 has α = 0.05. The dragon is much shinier at this point. The Cornell Box is made up of 5 faces, with our camera sitting on the side of the missing face; due to the shininess of the microfacet material, we can see black reflected on the neck of the dragon, caused by the lack of light rays coming in from that direction. We can also see that this image is much noisier than Figure 2.1.1, which is to be expected, as low α values are subject to more noise. Figure 2.1.3 has α = 0.005. This dragon is significantly darker than Figure 2.1.0. It also takes on a significantly more purple color on its side when compared to Figures 2.1.0 and 2.1.1.
The images below are rendered at 128 samples/pixel (top 2) or 1024 samples/pixel (bottom 2) and 1 sample/light with a max ray depth of 6.
Two different sampling methods can be used when sampling the microfacet BRDF: cosine hemisphere sampling and importance sampling. Cosine hemisphere sampling draws a random direction from the hemisphere (weighted by the cosine term) and calculates that sample's contribution to the outgoing radiance. Because the microfacet BRDF concentrates most of its energy around the specular directions, cosine hemisphere sampling takes a long time to converge and produces noisy renders. Instead, we importance-sample the BSDF: by drawing samples from a probability distribution shaped like the BRDF itself, we place more samples in the directions that contribute the most.
To implement importance sampling, I used the Beckmann NDF to derive the probability density function of the incident light ray, pω(ωi):
First, I sample the half-vector angles θh and ϕh from the probability density functions

pθ(θh) = (2*sin(θh) / (α²*cos³(θh))) * exp(−tan²(θh)/α²)
pϕ(ϕh) = 1/(2π)

Using the inversion method with uniform random numbers r1, r2 ∈ [0, 1), this gives

θh = arctan(sqrt(−α²*ln(1 − r1)))
ϕh = 2π*r2

The sampled microfacet normal is then

h = (cos(ϕh)*sin(θh), sin(ϕh)*sin(θh), cos(θh))

and the incident direction ωi is ωo reflected about h: ωi = 2*(ωo·h)*h − ωo. The pdf of sampling h is ph(h) = pθ(θh)*pϕ(ϕh) / sin(θh), and a change of variables from h to ωi yields pω(ωi) = ph(h) / (4*(ωi·h)).
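A sketch of this sampling routine (same local frame as before; the function name and structure are mine; r1 and r2 are uniform random numbers in [0, 1)):

// Importance-sample the Beckmann NDF: invert the CDFs of p_theta and p_phi
// to draw a microfacet normal h, then mirror wo about h to get wi.
Vector3D sample_beckmann(const Vector3D& wo, float alpha,
                         float r1, float r2, float* pdf) {
  float theta_h = atan(sqrt(-alpha * alpha * log(1.0f - r1)));
  float phi_h   = 2.0f * PI * r2;

  Vector3D h(cos(phi_h) * sin(theta_h),
             sin(phi_h) * sin(theta_h),
             cos(theta_h));
  Vector3D wi = 2.0f * dot(wo, h) * h - wo;   // reflect wo about h

  // pdf of h, then the change of variables from h to wi.
  float sin_t = sin(theta_h), cos_t = cos(theta_h), tan_t = tan(theta_h);
  float p_theta = (2.0f * sin_t / (alpha * alpha * cos_t * cos_t * cos_t))
                  * exp(-tan_t * tan_t / (alpha * alpha));
  float p_phi = 1.0f / (2.0f * PI);
  float p_h   = p_theta * p_phi / sin_t;
  *pdf = p_h / (4.0f * dot(wi, h));
  return wi;
}

If the sampled wi ends up below the surface (wi.z ≤ 0), the sample is invalid and its contribution should be treated as zero.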
Cosine hemisphere sampling has much slower convergence than importance sampling. Comparing Figure 2.2.0 and Figure 2.2.1, notice that the bunny is much noisier when cosine hemisphere sampling is used than when importance sampling is used.
Different conductor materials have different η and k values. I looked up the η and k values (for the R, G, and B wavelengths) of two conductors, silver and nickel, for the renders below; a sketch of the per-channel Fresnel computation follows the constants.
Silver (Ag) with η = (0.059193, 0.059881, 0.047366), k = (4.1283, 3.5892, 2.8132)
Nickel (Ni) with η = (1.9874, 1.9200, 1.9200), k = (4.0011, 3.6100, 3.6100)
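With these constants, the Fresnel term can be evaluated per color channel using the approximation F = (Rs + Rp) / 2, applied channel-by-channel to the η and k triples above; a sketch (helper name is mine):

// Air-to-conductor Fresnel reflectance for one wavelength/channel.
// cos_i is the cosine of the angle between wi and the half vector h.
float fresnel_channel(float cos_i, float eta, float k) {
  float a  = eta * eta + k * k;   // eta^2 + k^2
  float c2 = cos_i * cos_i;
  float Rs = (a - 2.0f * eta * cos_i + c2) / (a + 2.0f * eta * cos_i + c2);
  float Rp = (a * c2 - 2.0f * eta * cos_i + 1.0f)
           / (a * c2 + 2.0f * eta * cos_i + 1.0f);
  return 0.5f * (Rs + Rp);
}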
For this section, the raytracer gains the ability to render with an environment light source. The incoming light from every direction is encoded in a texture map. The renders below use the environment map field.exr, shown below.
Implementation:
First, I implemented sample_dir(), which receives a ray, maps its direction to the corresponding point on the environment map, and bilinearly interpolates the surrounding texels, outputting the spectrum at that point.
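A sketch of the direction-to-texel lookup (the exact axis conventions follow the skeleton; here I assume y is up, θ ∈ [0, π] indexes rows, and ϕ ∈ [0, 2π) indexes columns; envmap_width, envmap_height, and texel() are stand-ins for the environment map's actual fields):

// Map a direction to continuous (x, y) coordinates on the
// latitude-longitude map, then bilinearly interpolate the four
// surrounding texels.
Spectrum sample_dir_sketch(const Vector3D& d) {
  double theta = acos(std::max(-1.0, std::min(1.0, d.y)));  // [0, pi]
  double phi   = atan2(-d.z, d.x) + PI;                     // [0, 2*pi)
  double x = phi / (2.0 * PI) * envmap_width;
  double y = theta / PI * envmap_height;

  int x0 = std::min((int)floor(x), envmap_width - 1);
  int y0 = std::min((int)floor(y), envmap_height - 1);
  int x1 = std::min(x0 + 1, envmap_width - 1);
  int y1 = std::min(y0 + 1, envmap_height - 1);
  double tx = x - x0, ty = y - y0;

  // Lerp horizontally along the two rows, then vertically between them.
  Spectrum top = (1 - tx) * texel(x0, y0) + tx * texel(x1, y0);
  Spectrum bot = (1 - tx) * texel(x0, y1) + tx * texel(x1, y1);
  return (1 - ty) * top + ty * bot;
}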
Next, I implemented uniform sampling, which generates a random direction on the sphere and returns the spectrum at that direction's corresponding point on the environment map.
Finally, I implemented importance sampling for the environment light. In the initialization step, I compute the probability of sampling each pixel proportional to its luminance, weighted by sin(θ) to account for the latitude-longitude parameterization:

p(x, y) = L(x, y)*sin(θy) / Σᵢ Σⱼ L(i, j)*sin(θⱼ)

From this I accumulate the marginal cumulative distribution over rows (from pY(y) = Σx p(x, y)) and the conditional cumulative distributions of pixels within each row (from p(x | y) = p(x, y) / pY(y)). To sample, I draw two uniform random numbers and use std::
upper_bound, which returns a pointer to the first element greater than a given value. Using this returned pointer, I calculate the index of the sampled row by subtracting the start of my marginal distribution array from the returned location. I then call upper_bound again to sample the pixel within that row, using the conditional distributions calculated in the initialization step; once again, I subtract the beginning of the conditional distribution array (at the correct row's offset) from the returned pointer. With both the row and pixel indices in hand, I treat them as the (x, y) of the texture map and return the spectrum at that location of the environment map.
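In code, the two binary searches look roughly like this (marginal_cdf has one entry per row and cond_cdf is row-major with one entry per pixel, both assumed precomputed during initialization; r1 and r2 are uniform random numbers):

#include <algorithm>
#include <vector>

// Inverse-CDF sampling of the environment map via std::upper_bound.
std::pair<int, int> sample_env_pixel(const std::vector<double>& marginal_cdf,
                                     const std::vector<double>& cond_cdf,
                                     int width, double r1, double r2) {
  // First element of the marginal CDF greater than r1; the pointer
  // difference from the array's start is the sampled row index.
  int y = std::upper_bound(marginal_cdf.begin(), marginal_cdf.end(), r1)
          - marginal_cdf.begin();

  // Same search within row y's slice of the conditional CDF.
  auto row = cond_cdf.begin() + (size_t)y * width;
  int x = std::upper_bound(row, row + width, r2) - row;
  return {x, y};
}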
My probability debug for field.exr yielded the following image. My image does not match the image in the project spec (it is missing the blue-ish tint on the left-hand side). I looked into the probability_debug() function, and it does not seem like the blue channel is modified, so I am unsure whether the image below is completely accurate.
Uniform sampling usually results in higher noise levels, as it takes longer to converge. Importance sampling generally has less noise because it takes samples more efficiently, concentrating them around the light instead of spreading them across the entire sphere of directions. In these renders, however, uniform sampling is not significantly noisier, though it did take far longer: importance sampling took approximately 10 minutes while uniform sampling took 30+ minutes. The similar noise levels could result from the bunny's diffuse material, which allows for better convergence.
Renders below are done with 4 samples/pixel and 64 samples/light.
Uniform sampling took a significantly longer time than importance sampling. Note the bottom-left corner by the bunny's right foot and chest: uniform sampling has not fully converged in that location, as you can see a small white splotch on the baseboard in that area. Also note that the importance-sampled bunny has more light reflected near its bottom (the specular highlight is brighter at that point).
Renders below are done with 4 samples/pixel and 64 samples/light.
In this section, I implemented a thin lens for the raytracer to simulate depth of field. In the previous images, the camera acts as a pinhole camera, so everything in the scene is in focus. Real-world cameras, however, can blur and focus on different parts of the scene depending on the focal distance and the aperture of the camera.
Implementation:
It's important to note that all rays originating from the same point on the image plane are focused onto the same point on the plane of focus, regardless of where they pass through the lens. Knowing this, we can use the pinhole camera model to find the point of focus.
First, I calculate the pinhole camera's ray direction. This is necessary to find the focus point, which is the intersection between the pinhole camera ray and the plane of focus at (0, 0, -focalDistance). (Note that the focal distance is what changes the focus in the pictures below.) Next, I uniformly sample the lens disk to determine the origin of my returned ray (rndR and rndTheta are the generated random values):
pLens = (lensRadius * sqrt(rndR) * cos(2.0*PI*rndTheta), lensRadius * sqrt(rndR) * sin(2.0*PI*rndTheta), 0.0);

Then, I calculate the direction of my returned ray, which is the difference vector between the point of focus and the sampled lens location. Finally, I normalize the ray's direction, convert it from camera space to world space, and offset the returned ray's origin by the position of the camera.
Lastly, I updated the raytrace_pixel method to use the thin-lens ray generator.
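A sketch of the whole generator (field names like hFov, vFov, lensRadius, focalDistance, pos, nClip, fClip, and the camera-to-world matrix c2w are assumed from the skeleton's Camera; (x, y) are the same normalized image coordinates the pinhole generator receives):

Ray Camera::generate_ray_for_thin_lens(double x, double y,
                                       double rndR, double rndTheta) const {
  // Pinhole direction through the sensor point, in camera space (z = -1).
  Vector3D pinhole(2.0 * tan(0.5 * radians(hFov)) * (x - 0.5),
                   2.0 * tan(0.5 * radians(vFov)) * (y - 0.5),
                   -1.0);

  // The pinhole ray from the origin hits the plane z = -focalDistance here.
  Vector3D pFocus = pinhole * focalDistance;

  // Uniformly sample the lens disk (radius lensRadius, at z = 0).
  double r = lensRadius * sqrt(rndR);
  double t = 2.0 * PI * rndTheta;
  Vector3D pLens(r * cos(t), r * sin(t), 0.0);

  // Aim from the lens sample at the focus point; convert to world space.
  Vector3D dir = (c2w * (pFocus - pLens)).unit();
  Ray ray(pos + c2w * pLens, dir);
  ray.min_t = nClip;
  ray.max_t = fClip;
  return ray;
}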