University of Pennsylvania, CIS 565: GPU Programming and Architecture, Project 3
---
Tested on a personal computer - Microsoft Windows 10 Pro, Processor: Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz, 2601 MHz, 6 cores, 12 logical processors
GPU: NVIDIA GeForce RTX 2060
Path tracing is a form of ray tracing that takes a random sampling of all possible light paths to create the final image. This captures a variety of lighting effects, most notably global illumination. We see things because light emitted by light sources such as the sun bounces off the surfaces of objects. When light rays bounce only once from the surface of an object before reaching the eye, we speak of direct illumination. But light rays can also bounce off the surfaces of objects multiple times before reaching the eye. This is what we call indirect illumination, because those light rays follow complex paths before entering the eye.
Below are the features I implemented as part of my path tracer. All features can be toggled in the pathtracer.cu file.
Implemented diffuse, specular (reflective), and refractive materials. For a diffuse surface, the incoming ray can bounce anywhere in the hemisphere above the intersection point with equal probability, so we randomly sample a direction in this hemisphere and use it as the new ray direction. For reflective surfaces, the ray bounces off the surface with the angle of reflection equal to the angle of incidence about the normal. For refractive surfaces, the ray passes through the point of intersection and bends according to the index of refraction. However, due to total internal reflection, if the angle of incidence exceeds the critical angle, the surface behaves as a mirror and the ray reflects instead. In the images below, you can see spheres and cubes of different materials.
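Below is a minimal sketch of this per-bounce scattering decision, assuming GLM and Thrust's RNG. The Material fields and function names here are illustrative stand-ins, not necessarily the ones used in pathtracer.cu.

```cuda
#include <thrust/random.h>
#include <glm/glm.hpp>

// Illustrative material struct; the field names are assumptions.
struct Material {
    glm::vec3 color;
    float hasReflective;        // > 0 for mirror surfaces
    float hasRefractive;        // > 0 for glass-like surfaces
    float indexOfRefraction;
};

// Uniformly sample a direction in the hemisphere around `normal`.
__host__ __device__ glm::vec3 randomDirectionInHemisphere(
        glm::vec3 normal, thrust::default_random_engine& rng) {
    thrust::uniform_real_distribution<float> u01(0, 1);
    float up = u01(rng);                    // cos(theta), uniform in [0, 1]
    float over = sqrtf(1.f - up * up);      // sin(theta)
    float around = u01(rng) * 6.2831853f;   // phi in [0, 2*pi)
    // Build an arbitrary tangent frame around the normal.
    glm::vec3 other = fabsf(normal.x) < 0.577f ? glm::vec3(1, 0, 0)
                                               : glm::vec3(0, 1, 0);
    glm::vec3 t1 = glm::normalize(glm::cross(normal, other));
    glm::vec3 t2 = glm::cross(normal, t1);
    return up * normal + cosf(around) * over * t1 + sinf(around) * over * t2;
}

// Pick the next ray direction based on the material that was hit.
__host__ __device__ glm::vec3 scatterDirection(
        glm::vec3 incident, glm::vec3 normal, const Material& m,
        thrust::default_random_engine& rng) {
    if (m.hasReflective > 0.f) {
        // Mirror: angle of reflection equals angle of incidence.
        return glm::reflect(incident, normal);
    }
    if (m.hasRefractive > 0.f) {
        bool entering = glm::dot(incident, normal) < 0.f;
        glm::vec3 n = entering ? normal : -normal;
        float eta = entering ? 1.f / m.indexOfRefraction
                             : m.indexOfRefraction;
        glm::vec3 refracted = glm::refract(incident, n, eta);
        // glm::refract returns the zero vector on total internal
        // reflection, in which case the surface acts as a mirror.
        return glm::dot(refracted, refracted) < 1e-6f
               ? glm::reflect(incident, n) : refracted;
    }
    // Diffuse: bounce anywhere in the hemisphere above the surface.
    return randomDirectionInHemisphere(normal, rng);
}
```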
Depth of field is a camera effect in which the lens keeps objects sharp only at a certain distance, called the focal distance, and blurs everything else. This is implemented in the path tracer by sampling a point on a thin lens (much like we sample points within a square pixel), using it as the ray origin, and aiming the ray at the point of focus.
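A hedged sketch of that lens sampling, again assuming GLM and Thrust; the Ray struct, the camera basis vectors, and the parameter names are illustrative:

```cuda
#include <thrust/random.h>
#include <glm/glm.hpp>

struct Ray { glm::vec3 origin, direction; };

// Jitter a camera ray for depth of field. camRight/camUp span the lens
// plane; lensRadius and focalDistance are assumed scene parameters.
__device__ void applyDepthOfField(Ray& ray, glm::vec3 camRight, glm::vec3 camUp,
        float lensRadius, float focalDistance,
        thrust::default_random_engine& rng) {
    thrust::uniform_real_distribution<float> u01(0, 1);
    // Sample a point uniformly on the circular lens.
    float r = lensRadius * sqrtf(u01(rng));
    float theta = u01(rng) * 6.2831853f;
    glm::vec3 lensPoint = ray.origin
        + r * (cosf(theta) * camRight + sinf(theta) * camUp);
    // The point at the focal distance along the original ray stays in focus.
    glm::vec3 focusPoint = ray.origin + focalDistance * ray.direction;
    ray.origin = lensPoint;
    ray.direction = glm::normalize(focusPoint - lensPoint);
}
```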
A bokeh effect can be achieved on a real camera by placing a shaped mask over the lens. In our virtual camera, we instead map the sampled lens points to a particular shape in order to achieve it.
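One simple way to realize a shaped aperture is rejection sampling: keep drawing candidate lens points until one lands inside the shape. A sketch using a ring-shaped aperture (radius expressed as a fraction of the lens radius); any other acceptance test (hexagon, star, ...) works the same way:

```cuda
#include <thrust/random.h>
#include <glm/glm.hpp>

// Rejection-sample a lens point inside a ring; swap the acceptance test
// for any other shape to change the bokeh.
__device__ glm::vec2 sampleRingAperture(float innerFrac,
        thrust::default_random_engine& rng) {
    thrust::uniform_real_distribution<float> u(-1.f, 1.f);
    while (true) {
        glm::vec2 p(u(rng), u(rng));         // candidate in [-1, 1]^2
        float r = glm::length(p);
        if (r >= innerFrac && r <= 1.f) {    // keep only points in the ring
            return p;                        // scale by lensRadius outside
        }
    }
}
```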
Used tiny_obj_loader to import OBJ mesh models into the scene. Once we have the triangle data, we test each triangle in the mesh for intersection with our ray. Since testing every ray against every triangle is wasteful, we cull the rays to be tested against the triangles by first checking the ray against the mesh's bounding box (the box defined by its min and max corners), which is just one cheap additional check, to see if the ray can intersect the mesh at all.
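A minimal slab-test sketch for that bounding-box check, assuming the min and max corners are stored when the mesh is loaded:

```cuda
#include <cfloat>
#include <glm/glm.hpp>

// Classic slab test: intersect the ray with the three pairs of axis-aligned
// planes and check that the intervals overlap.
__host__ __device__ bool rayIntersectsAABB(glm::vec3 o, glm::vec3 d,
        glm::vec3 boxMin, glm::vec3 boxMax) {
    float tmin = -FLT_MAX, tmax = FLT_MAX;
    for (int axis = 0; axis < 3; ++axis) {
        float invD = 1.f / d[axis];   // +/- infinity is fine for parallel rays
        float t0 = (boxMin[axis] - o[axis]) * invD;
        float t1 = (boxMax[axis] - o[axis]) * invD;
        if (invD < 0.f) { float tmp = t0; t0 = t1; t1 = tmp; }
        tmin = t0 > tmin ? t0 : tmin;
        tmax = t1 < tmax ? t1 : tmax;
        if (tmax < tmin) return false;   // slabs do not overlap
    }
    return tmax >= 0.f;                  // box is in front of the ray
}
```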
I have imported the following models into my path tracer. All of these rendered within 20 minutes.
- Outside the box vs Inside the box
In our path tracer, at each bounce the direction of the next ray is determined by the material it hits; we never explicitly check whether a point in space is directly visible to a light. With direct lighting, we force the last bounce to hit a light: we randomly select a light, randomly sample a point on it, and set the direction of the final ray toward that point. This way, we know whether the object is directly visible to the light or not.
We accomplish this by checking the remaining bounces: if the current bounce is the last one, instead of sampling the BSDF for a direction, we sample a light. This brightens the scene as shown below.
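A hedged sketch of sampling that final direction; the Geom struct, its transform field, and the assumption of a unit-square area light pushed through its transform are all illustrative:

```cuda
#include <thrust/random.h>
#include <glm/glm.hpp>

// Illustrative stand-in: only the transform matters for sampling a point.
struct Geom { glm::mat4 transform; };

// Pick a random light, pick a random point on it, and return the direction
// from the shading point toward that sample.
__device__ glm::vec3 sampleLightDirection(glm::vec3 hitPoint,
        const Geom* lights, int numLights,
        thrust::default_random_engine& rng) {
    thrust::uniform_real_distribution<float> u01(0, 1);
    int idx = (int)(u01(rng) * numLights);
    if (idx >= numLights) idx = numLights - 1;   // guard the u01 == 1 edge
    // Sample the light's local unit-square footprint and push it through
    // the light's transform (assumes a planar/cube area light).
    glm::vec4 local(u01(rng) - 0.5f, 0.f, u01(rng) - 0.5f, 1.f);
    glm::vec3 onLight = glm::vec3(lights[idx].transform * local);
    return glm::normalize(onLight - hitPoint);
}
```

In the shading kernel, the control flow is then roughly: if this is the last remaining bounce, use this light direction; otherwise sample the BSDF as before.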
With vs. Without
Anti-aliasing is a pixel-level operation: on each iteration we add a small random offset to the ray's sample position within the pixel, so the accumulated color averages over the pixel's area instead of repeatedly sampling one point. This smooths jagged edges. The images below show the effect.
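A sketch of that jitter in the ray-generation kernel; the Camera fields below are modeled on a typical pinhole setup and are assumptions, not necessarily the names used in pathtracer.cu:

```cuda
#include <thrust/random.h>
#include <glm/glm.hpp>

// Illustrative camera struct modeled on a typical pinhole setup.
struct Camera {
    glm::vec3 view, up, right;
    glm::vec2 resolution, pixelLength;
};

// Generate a camera ray jittered within pixel (x, y). Averaging these
// jittered samples over many iterations smooths jagged edges.
__device__ glm::vec3 jitteredRayDirection(const Camera& cam, int x, int y,
        thrust::default_random_engine& rng) {
    thrust::uniform_real_distribution<float> u01(0, 1);
    float jx = u01(rng) - 0.5f;   // offset within the pixel, in [-0.5, 0.5]
    float jy = u01(rng) - 0.5f;
    return glm::normalize(cam.view
        - cam.right * cam.pixelLength.x * ((float)x + jx - cam.resolution.x * 0.5f)
        - cam.up    * cam.pixelLength.y * ((float)y + jy - cam.resolution.y * 0.5f));
}
```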
We make the rays (path segments) contiguous in memory by sorting them based on the type of material the ray currently hits, so that threads in a warp shade the same material and take the same branch. The graph below depicts the time taken to trace to different depths with and without sorting. I used 3 different materials in the scene while measuring the performance.
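A sketch of the sorting step between the intersection and shading kernels, using thrust::sort_by_key; the ShadeableIntersection and PathSegment stand-ins are assumed to mirror the starter-code layout:

```cuda
#include <thrust/device_ptr.h>
#include <thrust/sort.h>
#include <glm/glm.hpp>

// Minimal stand-ins for the starter-code types (assumed field names).
struct ShadeableIntersection { float t; glm::vec3 surfaceNormal; int materialId; };
struct PathSegment { glm::vec3 color; int pixelIndex; int remainingBounces; };

// Order intersections by material id so warps shade uniform material batches.
struct MaterialIdLess {
    __host__ __device__ bool operator()(const ShadeableIntersection& a,
                                        const ShadeableIntersection& b) const {
        return a.materialId < b.materialId;
    }
};

void sortByMaterial(ShadeableIntersection* dev_intersections,
                    PathSegment* dev_paths, int num_paths) {
    thrust::device_ptr<ShadeableIntersection> isects(dev_intersections);
    thrust::device_ptr<PathSegment> paths(dev_paths);
    // Sort intersections as keys and drag the matching path segments along.
    thrust::sort_by_key(isects, isects + num_paths, paths, MaterialIdLess());
}
```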
We use stream compaction to eliminate dead (terminated) rays from the pool, so each subsequent bounce launches threads only for the rays that are still alive. The graph below shows how stream compacting rays improves the render time for different numbers of iterations.
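A sketch of the compaction step using thrust::partition; the PathSegment stand-in and its remainingBounces field are assumptions:

```cuda
#include <thrust/device_ptr.h>
#include <thrust/partition.h>

// Minimal stand-in for the starter-code type (assumed field name).
struct PathSegment { int remainingBounces; /* ... other fields ... */ };

struct IsAlive {
    __host__ __device__ bool operator()(const PathSegment& p) const {
        return p.remainingBounces > 0;
    }
};

// Move live paths to the front of the pool; the return value is the new
// number of active paths for the next bounce's kernel launches.
int compactPaths(PathSegment* dev_paths, int num_paths) {
    thrust::device_ptr<PathSegment> paths(dev_paths);
    thrust::device_ptr<PathSegment> mid =
        thrust::partition(paths, paths + num_paths, IsAlive());
    return (int)(mid - paths);
}
```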
We store the first-bounce intersections in a dedicated cache buffer and reuse them in every subsequent iteration; this is valid because the camera rays are identical across iterations (as long as anti-aliasing jitter is off). The graph below depicts the effect of caching at different iteration counts.
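A control-flow sketch of the cache, where traceFirstBounce is a hypothetical wrapper around the intersection kernel and dev_cache is an extra device buffer the same size as dev_intersections:

```cuda
#include <cuda_runtime.h>
#include <glm/glm.hpp>

// Minimal stand-in for the starter-code type.
struct ShadeableIntersection { float t; glm::vec3 surfaceNormal; int materialId; };

// Assumed wrapper that launches the intersection kernel for the camera rays.
void traceFirstBounce(ShadeableIntersection* dev_intersections, int num_paths);

// On the first iteration, trace the camera rays and save the result; on
// every later iteration, copy the cached hits back instead of re-tracing.
void firstBounce(ShadeableIntersection* dev_intersections,
                 ShadeableIntersection* dev_cache, int num_paths, int iter) {
    size_t bytes = num_paths * sizeof(ShadeableIntersection);
    if (iter == 1) {
        traceFirstBounce(dev_intersections, num_paths);
        cudaMemcpy(dev_cache, dev_intersections, bytes, cudaMemcpyDeviceToDevice);
    } else {
        cudaMemcpy(dev_intersections, dev_cache, bytes, cudaMemcpyDeviceToDevice);
    }
}
```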
The graph below shows the performance with and without bounding volume intersection culling on our meshes. We see an improvement when we first test the ray against the bounding box and only test it against all the mesh triangles if that check succeeds. The improvement is less significant when the number of triangles in the mesh is low, since there is little per-triangle work to save.
These are all the silly renders I came across while working on my path tracer.