CS 184: Computer Graphics and Imaging, Spring 2017

Project 3-2: Ray Tracer, Part 2

Isabel Zhang, CS184-abe



Overview

The goal of this project is to extend the raytracer with the ability to render scenes lit by an environment light source, render objects with more complicated materials (glass, mirror, and microfacet), and simulate a camera lens to show varying depths of field. This project builds upon my previously implemented raytracer (Project 3-1).

Part 1: Mirror and Glass Materials

Multibounce Effects of Light in Reflective and Refractive Material

The goal of this part was to render objects with mirror and glass material in a physically accurate way.

Implementation: Mirror materials exhibit perfect specular reflection, meaning that the angle at which a light ray enters equals the angle at which it exits (reflected about the surface normal). The material's appearance depends heavily on global illumination/indirect lighting, since its radiance only converges after multiple light bounces. Because reflection only changes the ray direction, the returned spectrum is not diminished by any Lambertian falloff.
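As a rough sketch of this logic (not necessarily my exact code), the mirror BSDF's sampling function can be written as below; the reflect() helper, abs_cos_theta(), and the reflectance member are assumed names from the project framework.

void BSDF::reflect(const Vector3D& wo, Vector3D* wi) {
  // Mirror the outgoing direction about the surface normal. In local shading
  // coordinates the normal is (0, 0, 1), so x and y flip while z is preserved.
  *wi = Vector3D(-wo.x, -wo.y, wo.z);
}

Spectrum MirrorBSDF::sample_f(const Vector3D& wo, Vector3D* wi, float* pdf) {
  reflect(wo, wi);                 // only one possible outgoing direction...
  *pdf = 1.0f;                     // ...so the sample has probability 1
  // Dividing by cos(theta_i) cancels the cosine term applied later in the
  // rendering equation, so the mirror does not attenuate the reflection.
  return reflectance / abs_cos_theta(*wi);
}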

Implementation: Glass materials both reflect and refract. Refraction occurs when light rays change direction as they move between different media (e.g., glass to air or air to glass). This happens because the two media have different optical densities (indices of refraction), causing the speed of the light ray to change. For glass, it is entirely possible for a light ray to experience total internal reflection, where the ray reflects back inside the medium instead of refracting out. The refracted direction can be calculated with Snell's law. In our case, the light rays travel from air to glass or glass to air; air has an index of refraction of 1.0, and glass's index of refraction is defined by the material. If total internal reflection does not occur, our glass material both refracts and reflects. The ratio of reflection to refraction varies, so I use Schlick's approximation to estimate the probability of reflection and sample randomly to determine whether the material reflects or refracts at that point.
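A hedged sketch of the refraction and Schlick logic described above, again in local coordinates with the normal along +z; the helper names (reflect, coin_flip, abs_cos_theta) and members (ior, reflectance, transmittance) are assumptions, not necessarily the exact project code.

Spectrum GlassBSDF::sample_f(const Vector3D& wo, Vector3D* wi, float* pdf) {
  // Entering the glass (wo.z > 0): air -> glass, so eta = 1 / ior; otherwise glass -> air.
  double eta = (wo.z > 0) ? 1.0 / ior : ior;
  double cos_o = fabs(wo.z);
  double sin2_t = eta * eta * (1.0 - cos_o * cos_o);   // from Snell's law

  if (sin2_t > 1.0) {                 // total internal reflection: only reflect
    reflect(wo, wi);
    *pdf = 1.0f;
    return reflectance / abs_cos_theta(*wi);
  }

  // Schlick's approximation for the probability of reflection.
  double R0 = pow((1.0 - ior) / (1.0 + ior), 2);
  double R  = R0 + (1.0 - R0) * pow(1.0 - cos_o, 5);

  if (coin_flip(R)) {                 // reflect with probability R
    reflect(wo, wi);
    *pdf = R;
    return R * reflectance / abs_cos_theta(*wi);
  } else {                            // otherwise refract
    double cos_t = sqrt(1.0 - sin2_t);
    *wi = Vector3D(-eta * wo.x, -eta * wo.y, (wo.z > 0) ? -cos_t : cos_t);
    *pdf = 1.0 - R;
    // The eta^2 factor accounts for radiance compression across the boundary.
    return (1.0 - R) * transmittance * (eta * eta) / abs_cos_theta(*wi);
  }
}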

All renders below were made with 1024 samples/pixel and 4 samples/light.

Figure 1.0: m = 0
Figure 1.1: m = 1

Figure 1.0: With a max ray depth of 0, light is not permitted to bounce. As such, neither reflection nor refraction can occur. Because both mirror and glass materials depend on reflection and refraction, the spheres appear black.

Figure 1.1: With a max ray depth of 1, light is permitted to bounce once. Notice that the two spheres look similar in this case. The glass receives the reflected light bounces and is able to refract once. The mirror ball takes on reflected light bounces. Note that the ceiling's reflection is black as it requires at least two light bounces in order for the ceiling's color to appear on the mirrored ball.

Figure 1.2: m = 2
Figure 1.3: m = 3

Figure 1.2: With a max ray depth of 2, light rays can bounce twice at most. Note that the reflection of the ceiling in the mirror sphere is much lighter and better represents its true color. The bottom face of the glass ball is also much brighter. This is likely due to refracted light rays inside the glass sphere.

Figure 1.3: With a max ray depth of 3, light rays can bounce up to three times. Note that the mirror ball has become slightly brighter and the reflection of the glass sphere within the mirror sphere has also become more purple to account for the purple wall. The ground around the glass sphere also collects irradiance from the light rays that exited the glass sphere, leading to the white ring outside the glass sphere. Note that the reflection of the glass sphere (on the mirror ball), does not have the white halo in its shadow.

Figure 1.4: m = 4
Figure 1.5: m = 5

Figure 1.4: With a max ray depth of 4, light rays can bounce up to four times. Note that the mirror ball is slightly brighter. Light rays now exit the glass ball and hit the right wall causing the bright spot on the wall. The glass ball's reflection in the mirror ball now has the white shadow halo.

Figure 1.5: With a max ray depth of 5, light rays can bounce up to five times. This render looks very similar to the image with max ray depth of 4. The glass ball's reflection in the mirror ball is slightly brighter.

Figure 1.6: m = 100

Figure 1.6: With a max ray depth of 100, light rays can bounce up to 100 times. Note that the top of the glass sphere has a slight white blur to account for the reflection of the light. The shadows are also smoother and not as harsh in this render.


Hunting for Bugs:

Issue with sphere intersection: I was not checking that the candidate t value was valid before setting the ray's max_t.
I was not using R(θ) in Schlick's approximation; I was originally only using R_0.



Part 2: Microfacet Material

In this section, I rendered microfacet materials. Microfacet materials represent surfaces that are composed of many small micro-mirrors, each reflecting light in the micro-mirror's specular direction. For example, the sheen of the ocean from the air can be attributed to microfacets as the waves each provide their own micro-mirror which together can create specular highlights on the ocean despite its rough surface.

Implementation: I implemented the microfacet BRDF using the following model, where n is the macro surface normal and h is the half vector between wi and wo:

f(wo, wi) = F(wi) * G(wo, wi) * D(h) / (4 * (n · wo) * (n · wi))

F(wi): For air-conductor interfaces, the Fresnel term is responsible for the color of the microfacet BRDF because it is wavelength-dependent, so evaluating it per channel gives the spectrum its color. To implement the material, I used the following approximation, where η and k are the conductor's index of refraction and extinction coefficient at a given wavelength, and θ corresponds to the angle of the incident ray. The renders shown below use 614 nm, 546 nm, and 466 nm to represent the red, green, and blue channels respectively:

R_s = ((η^2 + k^2) - 2*η*cos(θ) + cos^2(θ)) / ((η^2 + k^2) + 2*η*cos(θ) + cos^2(θ))
R_p = ((η^2 + k^2)*cos^2(θ) - 2*η*cos(θ) + 1) / ((η^2 + k^2)*cos^2(θ) + 2*η*cos(θ) + 1)
F(wi) = (R_s + R_p) / 2
G(wo, wi): This is the shadow-masking term and it represents the shadows created by the microfacets. This term is heavily dependent on the roughness of the material (α) and the microfacet distribution (D(h) term).

D(h): This term is the normal distribution function of the microfacets. Microfacets reflect over the half vector h (the vector that bisects the incident and exiting light rays). In this project, I used the Beckmann distribution to represent the microfacet normal distribution, where θh is the angle between the half vector and the macro surface normal:

D(h) = exp(-tan^2(θh) / α^2) / (π * α^2 * cos^4(θh))
Using the equations above, the raytracer can adequately represent microfacet materials.
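As a rough sketch, these terms combine into the evaluation below. The member names (alpha, eta, k), the Spectrum r/g/b channels, and the helper calls are assumptions based on the project framework, and the simple Cook-Torrance min() form is used for the shadowing-masking term here, which may differ from the exact G used in the project.

// Sketch of evaluating f = F * G * D / (4 (n.wo)(n.wi)) in local coordinates (n = +z).
Spectrum MicrofacetBSDF::f(const Vector3D& wo, const Vector3D& wi) {
  if (wo.z <= 0 || wi.z <= 0) return Spectrum();        // below the surface
  Vector3D h = (wo + wi).unit();                        // half vector

  // Beckmann normal distribution D(h).
  double cos_h = h.z;
  double tan2_h = (1.0 - cos_h * cos_h) / (cos_h * cos_h);
  double D = exp(-tan2_h / (alpha * alpha)) / (PI * alpha * alpha * pow(cos_h, 4));

  // Air-conductor Fresnel approximation F(wi), evaluated one channel at a time.
  double c = wi.z;                                      // cos(theta_i)
  auto fresnel = [&](double eta_c, double k_c) {
    double a  = eta_c * eta_c + k_c * k_c;
    double Rs = (a - 2.0 * eta_c * c + c * c) / (a + 2.0 * eta_c * c + c * c);
    double Rp = (a * c * c - 2.0 * eta_c * c + 1.0) / (a * c * c + 2.0 * eta_c * c + 1.0);
    return 0.5 * (Rs + Rp);
  };
  Spectrum F(fresnel(eta.r, k.r), fresnel(eta.g, k.g), fresnel(eta.b, k.b));

  // Cook-Torrance shadowing-masking term (assumed approximation).
  double G = std::min(1.0, std::min(2.0 * h.z * wo.z / dot(wo, h),
                                    2.0 * h.z * wi.z / dot(wo, h)));

  return F * G * D / (4.0 * wo.z * wi.z);
}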


Tuning α Values to Change Microfacet BRDF Appearance

A high α value indicates a rougher material and leads to a more matte appearance. A low α value indicates a smoother material and leads to a shinier, more mirror-like surface.

The images below are rendered at 128 samples/pixel (top 2) or 1024 samples/pixel (bottom 2) and 1 sample/light with a max ray depth of 6.

Figure 2.1.0: α = 0.5
Figure 2.1.1: α = 0.25
Figure 2.1.2: α = 0.05
Figure 2.1.3: α = 0.005

Uniform Cosine Hemisphere Sampling vs. Importance Sampling

Two different sampling methods can be used when sampling the microfacet BRDF: cosine hemisphere sampling and importance sampling. Cosine hemisphere sampling draws a random direction over the hemisphere (weighted by the cosine of the angle to the normal) and evaluates that sample's contribution to the outgoing radiance. Because the microfacet BRDF concentrates most of its energy in a narrow specular lobe, cosine hemisphere sampling takes a long time to converge and produces noisier renders. Instead, importance sampling is used to sample the microfacet BSDF: directions where the BRDF contributes the most are sampled more often, according to a probability distribution function that approximates the BRDF's shape.
To implement importance sampling, I used the Beckmann NDF to derive the probability distribution function of the sampled incident direction, pω(wi).

θh and ϕh are calculated from r1 and r2, two random numbers uniformly distributed between 0 and 1:

ϕh = 2*π*r2
θh = arctan(sqrt(-α^2 * ln(1 - r1)))

p(θh) and p(ϕh) are the corresponding probability distribution functions of θh and ϕh. Next, I use these values to construct the half vector, h:

h = (cos(ϕh)*sin(θh), sin(ϕh)*sin(θh), cos(θh))

Reflecting wo about h gives the sampled incident direction wi. Finally, I convert p(θh) and p(ϕh) into the probability distribution function of h with respect to solid angle, and from that the probability distribution function of wi. The closer this pdf of the incident direction is to the shape of D(h), the less noise the render will have.
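A rough sketch of the sampling routine following the steps above; the sampler member, get_sample(), dot(), and the Vector types are assumed names from the project framework.

// Sketch of importance sampling the Beckmann NDF.
Spectrum MicrofacetBSDF::sample_f(const Vector3D& wo, Vector3D* wi, float* pdf) {
  Vector2D r = sampler.get_sample();                    // r.x, r.y uniform in [0, 1)
  double theta_h = atan(sqrt(-alpha * alpha * log(1.0 - r.x)));
  double phi_h   = 2.0 * PI * r.y;

  // Build the sampled half vector and reflect wo about it to get wi.
  Vector3D h(cos(phi_h) * sin(theta_h), sin(phi_h) * sin(theta_h), cos(theta_h));
  *wi = -wo + 2.0 * dot(wo, h) * h;
  if (wi->z <= 0) { *pdf = 1.0f; return Spectrum(); }   // invalid sample, no contribution

  // pdf of h with respect to solid angle, then change of variables to wi.
  double p_theta = 2.0 * sin(theta_h) / (alpha * alpha * pow(cos(theta_h), 3)) *
                   exp(-pow(tan(theta_h), 2) / (alpha * alpha));
  double p_phi   = 1.0 / (2.0 * PI);
  double p_h     = p_theta * p_phi / sin(theta_h);
  *pdf = p_h / (4.0 * dot(*wi, h));

  return f(wo, *wi);
}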


Figure 2.2.0: Uniform hemisphere sampling
Figure 2.2.1: Importance sampling

Uniform cosine hemisphere sampling has a much slower convergence than importance sampling. Comparing Figure 2.2.0 and Figure 2.2.1, notice that the bunny is much noisier when uniform cosine hemisphere sampling is used than when importance sampling is used.


Renders of Different Microfacet Conductors

Different materials have different η and k values. I looked up the η and k values (for all three wavelengths) of two conductors, silver and nickel, for the renders below.

Silver (Ag) with η = (0.059193, 0.059881, 0.047366), k = (4.1283, 3.5892, 2.8132)

Nickel (Ni) with η = (1.9874, 1.9200, 1.9200), k = (4.0011, 3.6100, 3.6100)

Silver (Ag) at α = 0.05
Nickel (Ni) at α = 0.05
Silver (Ag) at α = 0.5
Nickel (Ni) at α = 0.5

Bug Hunt:

Using the wrong cosine value and missing validity checks caused the dragon to glow very brightly.


Part 3: Environment Light

For this section, the raytracer gains the ability to render with an environment light source. The light information is encoded in a texture map. The renders below use the field.exr environment map.

Implementation: First, I implemented sample_dir(), which takes a ray, maps its direction to the corresponding point on the environment map, and bilinearly interpolates the surrounding texels to return the spectrum at that point.

Next, I implemented uniform sampling which generates a random direction within the hemisphere and returns the spectrum that corresponds to the direction's point on the environment map.



Because uniform sampling takes an incredibly long time for values to converge, I implemented importance sampling to concentrate samples towards areas of the environment map with highest incoming radiance.
First, I generated the probability distribution function over the environment map based on the flux at each pixel. This was done with a double for loop over the width and height of the environment map, setting the pdf of each pixel to the flux at that pixel divided by the total flux of the entire environment map:

p(x, y) = flux(x, y) / total flux of the environment map


Next, I computed the marginal distribution over the rows of the environment map. It is stored as a cumulative distribution function: for each row y, it sums the probabilities of every pixel in rows 0 through y, using the per-pixel pdfs calculated in the step right above.

F(y) = sum over rows y' ≤ y of (sum over x of p(x, y'))   (cumulative distribution over rows of the environment map)


Then, I computed the conditional distribution of the pixels within each row. The rows are treated independently: within each row, I calculate the cumulative distribution of that row's pixels (the per-pixel pdfs within a row sum up to the probability of that row, so dividing by the row's probability yields the conditional distribution).

F(x | y) = (sum over x' ≤ x of p(x', y)) / p(y)   (cumulative conditional distribution within each row)
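A compact sketch of these three initialization steps, assuming the map is stored as a row-major array of Spectrum values with an illum() luminance helper; the array names (pdf_envmap, marginal_y, conds_y) are assumptions, and the conditional CDF of each row is normalized by that row's probability.

// Sketch: build the per-pixel pdf, the marginal CDF over rows, and per-row conditional CDFs.
void EnvironmentLight::init() {
  double total_flux = 0.0;
  for (int j = 0; j < h; ++j)
    for (int i = 0; i < w; ++i)
      total_flux += envMap[j * w + i].illum();

  // p(x, y): flux at this pixel over the total flux of the map.
  for (int j = 0; j < h; ++j)
    for (int i = 0; i < w; ++i)
      pdf_envmap[j * w + i] = envMap[j * w + i].illum() / total_flux;

  double accum = 0.0;
  for (int j = 0; j < h; ++j) {
    // Marginal CDF over rows: F(y) = sum of every pixel pdf up through row j.
    double row_prob = 0.0;
    for (int i = 0; i < w; ++i) row_prob += pdf_envmap[j * w + i];
    accum += row_prob;
    marginal_y[j] = accum;

    // Conditional CDF within row j, normalized by the row's probability.
    // (A zero-probability row would need a guard; omitted in this sketch.)
    double cond = 0.0;
    for (int i = 0; i < w; ++i) {
      cond += pdf_envmap[j * w + i] / row_prob;
      conds_y[j * w + i] = cond;
    }
  }
}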


Finally, to sample, I first pick a random row of the environment map using std::upper_bound on the marginal CDF, which returns a pointer to the first entry greater than the random sample; subtracting the start of the marginal distribution array from that pointer gives the index of the sampled row. I then call upper_bound again on the conditional distributions computed in the initialization step to sample a pixel within that row, again subtracting the start of that row's conditional distribution array from the returned pointer. With both the row and pixel indices in hand, I treat them as the (x, y) of the texture map and return the spectrum at that location of the environment map.
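A sketch of the lookup itself with std::upper_bound, consistent with the arrays built above; r1 and r2 are assumed to be two uniform random numbers in [0, 1).

#include <algorithm>   // std::upper_bound, std::min

// Pick the row: first marginal CDF entry greater than r1.
const double* row_ptr = std::upper_bound(marginal_y, marginal_y + h, r1);
int y = std::min((int)(row_ptr - marginal_y), h - 1);      // pointer difference -> row index

// Pick the column within that row using its conditional CDF.
const double* col_ptr = std::upper_bound(conds_y + y * w, conds_y + (y + 1) * w, r2);
int x = std::min((int)(col_ptr - (conds_y + y * w)), w - 1);

// (x, y) is the sampled texel: return the environment map's spectrum there,
// along with the pdf of having chosen it (converted to a solid-angle pdf).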

Probability Debug file for field.exr

Probability Debug Image

My probability debug for field.exr yielded the following image. My image here does not match the image in the project spec (missing the blue-ish tint on the left-hand side). I looked into the probability_debug() function and it does not seem like the blue channel is modified. As such, I am unsure of whether or not the image below is completely accurate.

Probability Debug file for field.exr

Uniform Cosine Hemisphere Sampling vs. Importance Sampling on Diffuse Material

Uniform cosine hemisphere sampling usually results in higher noise levels because it takes longer to converge. Importance sampling generally produces less noise since it samples more efficiently, concentrating samples around the bright regions of the environment map instead of across the entire hemisphere. However, in these renders the uniform hemisphere sampled image is not significantly noisier. Uniform sampling did take a lot longer to render: 30+ minutes versus approximately 10 minutes for importance sampling. The small noise difference could result from the bunny's diffuse material, which allows for better convergence.

Renders below are done with 4 samples/pixel and 64 samples/light.

Uniform Cosine Hemisphere Sampling
Importance Sampling

Uniform Cosine Hemisphere Sampling vs. Importance Sampling on Microfacet Material

Uniform cosine sampling took significantly longer than importance sampling. Note the bottom-left corner by the bunny's right foot and chest: uniform sampling has not fully converged there, as a small white splotch is visible on the baseboard in that area. Also note that the importance-sampled bunny has more light reflected near its bottom (the specular highlight is brighter at that point).

Renders below are done with 4 samples/pixel and 64 samples/light.

Uniform Cosine Hemisphere Sampling
Importance Sampling

Bug Hunt:

Error in microfacet implementation only showed up in this section
Probability Debug: Incorrect calculation of the marginal distribution



Part 4: Depth of Field

In this section, I implemented a thin lens for the raytracer to simulate depth of field. In the previous images, the camera was modeled as a pinhole camera, so everything in the scene is in focus. Real-world cameras, however, blur and focus different parts of the scene depending on the focal distance and the aperture of the lens.

Implementation: It's important to note that all rays from a given point on the image plane focus to the same point on the plane of focus, regardless of where they pass through the lens. Knowing this, we can use the pinhole camera model to help calculate the point of focus.

First, I calculate the pinhole camera's ray direction. This is necessary because I need to find the focus point which is the plane-ray intersection between the plane of focus (at (0, 0, -focalDistance)) and the pinhole camera ray. (Note that the focal distance here is what changes the focus in the pictures). Next, I uniformly sample the lens to determine the origin of my returned ray (rndR and rndTheta are the generated random values).

pLens = (lensRadius * sqrt(rndR) * cos(2.0*PI*rndTheta), lensRadius * sqrt(rndR) * sin(2.0*PI*rndTheta), 0.0);
Then, I calculate the direction of my returned ray, which is the vector from the sampled lens location to the point of focus. Finally, I normalize the ray's direction, convert it from camera space to world space, and offset the returned ray's origin by the position of the camera.
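Putting the steps together, a sketch of the thin-lens ray generation; the camera members (lensRadius, focalDistance, pos, c2w, nClip, fClip) follow the project's camera, but the pinhole_direction() helper and the exact names are assumptions for illustration.

// Sketch of Camera::generate_ray_for_thin_lens. (x, y) are normalized image
// coordinates; rndR and rndTheta are uniform random numbers in [0, 1).
Ray Camera::generate_ray_for_thin_lens(double x, double y,
                                        double rndR, double rndTheta) const {
  // 1. Direction the pinhole camera would use for this sensor position.
  Vector3D dir = pinhole_direction(x, y);              // assumed helper

  // 2. Intersect that ray with the plane of focus at z = -focalDistance.
  double t = -focalDistance / dir.z;
  Vector3D pFocus = t * dir;

  // 3. Uniformly sample a point on the lens disk (radius = lensRadius).
  Vector3D pLens(lensRadius * sqrt(rndR) * cos(2.0 * PI * rndTheta),
                 lensRadius * sqrt(rndR) * sin(2.0 * PI * rndTheta), 0.0);

  // 4. The new ray goes from the lens sample toward the focus point.
  Vector3D d = (pFocus - pLens).unit();

  // 5. Convert to world space and offset the origin by the camera position.
  Ray r(pos + c2w * pLens, (c2w * d).unit());
  r.min_t = nClip; r.max_t = fClip;
  return r;
}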

To ensure that my pathtracer uses my camera, I also modified my raytrace_pixel method to use the thin-lens ray generator.

Varying Camera Focus

Lens radius/Aperture of 0.3; Focal distance at 2.5
Lens radius/Aperture of 0.3; Focal distance at 2.9
Lens radius/Aperture of 0.3; Focal distance at 3.3
Lens radius/Aperture of 0.3; Focal distance at 3.6

Varying Aperture Sizes

Lens radius/Aperture of 0.0; Focal Distance of 4.7
Lens radius/Aperture of 0.044194; Focal Distance of 4.7
Lens radius/Aperture of 0.1250; Focal Distance of 4.7
Lens radius/Aperture of 0.5; Focal Distance of 4.7

Bug Hunt

Issue with calculating the origin of the ray
Bug with calculating point of focus and camera & world conversion