
Update README.md
millersy authored Oct 10, 2020
1 parent 7f9acbf commit 384014f
Showing 1 changed file with 8 additions and 9 deletions.
Table of contents
* [Stochastic Sampled Antialiasing](#stochastic-sampled-antialiasing)
* [Obj Mesh Loading](#obj-mesh-loading)
* [Procedural Shapes](#procedural-shapes)
* [Procedural Textures](#procedural-textures)
* [Better Hemisphere Sampling](#better-hemisphere-sampling)
* [Optimizations](#optimizations)
* [Stream Compaction](#stream-compaction)
* [Materials Contiguous in Memory](#materials-contiguous-in-memory)
* [Cache First Intersection](#cache-first-intersection)
* [Mesh Bounding Box](#mesh-bounding-box)
* [Performance Analysis](#performance-analysis)
* [Stream Compaction with Open and Closed Scenes](#steam-compaction-with-open-an-closed-scenes)
* [Caching First Intersection with Varying Depths](#caching-first-intersection-with-varying-depths)
* [Bloopers](#bloopers)

# Overview
I implemented diffuse, reflective, and refracting materials for my pathtracer.

| Specular | Depth of Field |
| ------------- | ----------- |
| ![](img/renders/part1Specular.png) | ![](img/renders/DOF.png) |

I implemented physically based Depth-of-Field to create a focus effect on an object in the scene with a blurred background. This effect is also known as a thin lens approximation, which simulates a lens with a thickness much smaller than the radius of curvature to create the effect. The depth of field render shown took about 77 ms per iteration.
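
The thin-lens step can be sketched as follows. This is a minimal host-side illustration, not the project's CUDA code; the `Vec3` helpers and `applyDepthOfField` are hypothetical names, and the lens offset is applied in the camera's local xy plane for simplicity.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 scale(Vec3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }
Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return scale(v, 1.0f / len);
}

struct Ray { Vec3 origin, dir; };

// Thin-lens sampling: jitter the ray origin on a lens disk of radius
// lensRadius, then re-aim the ray at the point the unperturbed ray
// hits on the focal plane focalDistance away. Points on that plane
// stay sharp; everything in front of or behind it blurs.
// u1 and u2 are uniform random numbers in [0, 1).
Ray applyDepthOfField(Ray camRay, float lensRadius, float focalDistance,
                      float u1, float u2) {
    // Uniform sample on a disk via the polar mapping.
    float r = lensRadius * std::sqrt(u1);
    float theta = 2.0f * 3.14159265f * u2;
    Vec3 lensOffset = {r * std::cos(theta), r * std::sin(theta), 0.0f};

    // Point on the plane of focus along the original ray.
    Vec3 focalPoint = add(camRay.origin, scale(camRay.dir, focalDistance));

    Ray out;
    out.origin = add(camRay.origin, lensOffset);
    out.dir = normalize(sub(focalPoint, out.origin));
    return out;
}
```

With a lens radius of zero (or u1 = 0) the sampled ray collapses back to the original one, which recovers the pinhole camera.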

## Stochastic Sampled Antialiasing
| No Antialiasing | Antialiasing |
| ------------- | ----------- |
| ![](img/renders/part1Diffuse.png) | ![](img/renders/antialiasing.png) |

I implemented stochastic sampled antialiasing by randomly offsetting the values used for a ray's origin from the camera when calculating the ray's direction. You can tell in the image above that the edges of the sphere in the antialiased image are smoother than the edges of the sphere in the non-antialiased image.
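
The jittering can be sketched like this (hypothetical names; in the project this would happen inside the CUDA ray-generation kernel):

```cpp
#include <cassert>
#include <random>

struct Sample2D { float x, y; };

// For each iteration, pick a random point inside pixel (px, py)
// instead of the pixel center. Averaging many iterations blends the
// samples that straddle an edge and smooths the jaggies.
Sample2D jitteredPixelSample(int px, int py, std::mt19937& rng) {
    std::uniform_real_distribution<float> u(0.0f, 1.0f);
    return { px + u(rng), py + u(rng) };
}
```

The sampled point then feeds into the usual pixel-to-ray-direction math in place of the pixel center.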

## Obj Mesh Loading
![](img/renders/star2500Samples.png)
## Stream Compaction

After each bounce of the rays, the array holding the rays is partitioned so that terminated rays are moved to the back and the kernels for later bounces only run on the rays that are still active.
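
The partitioning step can be sketched on the host with `std::partition`; the GPU version would use an equivalent such as `thrust::partition`, and `PathSegment` here is a simplified stand-in for the real struct.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct PathSegment {
    int remainingBounces;
    // the real struct also carries the ray, accumulated color, pixel index...
};

// After a bounce, move still-active paths to the front of the array so
// the kernels for the next bounce only launch over the active range.
// Returns the number of active paths.
int compactPaths(std::vector<PathSegment>& paths) {
    auto firstTerminated = std::partition(paths.begin(), paths.end(),
        [](const PathSegment& p) { return p.remainingBounces > 0; });
    return static_cast<int>(firstTerminated - paths.begin());
}
```

Each bounce then launches threads only for the first `compactPaths(...)` entries instead of the whole array.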

## Materials Contiguous in Memory

There is an option to sort the intersections and path segments by material so that threads following similar code paths sit next to each other in memory. This should increase performance because diverging threads in the same block cannot run in parallel, so they must wait for each other; if threads taking the same branch are grouped together, fewer threads wait and more threads do work. A basic scene without material sorting took an average of 76 ms per iteration, whereas the same scene with sorting took 212 ms per iteration. I believe this optimization did not benefit my scenes because they do not have many materials. If the scenes were larger with more materials, it would help performance.
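
A host-side sketch of the idea, with simplified stand-in structs (the GPU version would use something like `thrust::sort_by_key` on the device arrays):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct Intersection { int materialId; float t; };
struct PathSeg { int pixelIndex; };

// Sort intersections and their matching path segments by material id
// so threads shading the same material are adjacent, reducing warp
// divergence in the shading kernel.
void sortByMaterial(std::vector<Intersection>& isects,
                    std::vector<PathSeg>& paths) {
    std::vector<size_t> order(isects.size());
    for (size_t i = 0; i < order.size(); ++i) order[i] = i;
    std::sort(order.begin(), order.end(), [&](size_t a, size_t b) {
        return isects[a].materialId < isects[b].materialId;
    });
    // Apply the same permutation to both arrays so they stay paired.
    std::vector<Intersection> newIsects(isects.size());
    std::vector<PathSeg> newPaths(paths.size());
    for (size_t i = 0; i < order.size(); ++i) {
        newIsects[i] = isects[order[i]];
        newPaths[i]  = paths[order[i]];
    }
    isects.swap(newIsects);
    paths.swap(newPaths);
}
```

The sort itself has a cost, which is one plausible reason the measurement above got slower on a scene with only a handful of materials.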

## Cache First Intersection

There is an option to save the intersections from the first bounce of the first iteration and reuse them for every subsequent iteration's first bounce. Since the rays always leave the camera in the same way, these first-bounce intersections are always the same, so we do not need to recalculate them. This saves one bounce's worth of intersection work per iteration, which should increase performance. The graph in the performance analysis section shows that caching the first intersection did slightly speed up performance.
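
A host-side sketch of the branching logic, with made-up names (`Isect`, `intersectStage`); `freshlyComputed` stands in for what the intersection kernel would otherwise produce:

```cpp
#include <cassert>
#include <vector>

struct Isect { float t; int materialId; };

// First-bounce cache: camera rays are identical every iteration, so
// the depth-0 intersections computed in iteration 1 can be reused in
// every later iteration instead of being recomputed.
std::vector<Isect> cachedFirstBounce;

void intersectStage(int iter, int depth,
                    const std::vector<Isect>& freshlyComputed,
                    std::vector<Isect>& out) {
    if (depth == 0 && iter > 1 && !cachedFirstBounce.empty()) {
        out = cachedFirstBounce;   // reuse the cached first-bounce hits
        return;
    }
    out = freshlyComputed;         // normally an intersection kernel launch
    if (depth == 0 && iter == 1) {
        cachedFirstBounce = out;   // fill the cache once
    }
}
```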

## Mesh Bounding Box

When a mesh is loaded, the minimum and maximum x, y, and z values are saved to form an axis-aligned bounding box. When this option is turned on, a ray is first tested against the bounding box of the mesh to see if there is an intersection. Only if there is are all of the triangles in the mesh tested. This optimization matters most when a mesh has many triangles. For the mesh I tested, the average time per iteration without a bounding box was 664 ms and the average time per iteration with a bounding box was 662 ms. I referenced [Scratchapixel](https://www.scratchapixel.com/lessons/3d-basic-rendering/minimal-ray-tracer-rendering-simple-shapes/ray-box-intersection) for an equation to solve a ray-box intersection.
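
The slab method from that reference can be sketched as follows (variable names are mine; the real device code would differ):

```cpp
#include <algorithm>
#include <cassert>
#include <utility>

struct V3 { float x, y, z; };

// Slab-method ray/AABB test: intersect the ray with the pair of
// planes bounding each axis and keep the running overlap [tmin, tmax].
// Returns true if the ray origin + t * dir hits the box for some t >= 0.
bool rayHitsAABB(V3 orig, V3 dir, V3 boxMin, V3 boxMax) {
    float tmin = 0.0f, tmax = 1e30f;
    const float o[3]  = {orig.x, orig.y, orig.z};
    const float d[3]  = {dir.x, dir.y, dir.z};
    const float lo[3] = {boxMin.x, boxMin.y, boxMin.z};
    const float hi[3] = {boxMax.x, boxMax.y, boxMax.z};
    for (int axis = 0; axis < 3; ++axis) {
        float inv = 1.0f / d[axis];   // IEEE infinity handles d == 0
        float t0 = (lo[axis] - o[axis]) * inv;
        float t1 = (hi[axis] - o[axis]) * inv;
        if (t0 > t1) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
        if (tmin > tmax) return false;  // the slabs do not overlap
    }
    return true;
}
```

Only rays for which this test returns true go on to the per-triangle intersection loop.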

# Performance Analysis


This chart shows the difference between the render times of an open scene and a closed scene using stream compaction. As I predicted, the closed scene took much longer to render per iteration because rays do not terminate as quickly as in an open scene. In an open scene, rays can more easily shoot toward an open area where there will be no intersection, so they terminate. In a closed scene, however, there is nowhere for the rays to escape, so they continue to bounce until they reach the maximum depth.

## Caching First Intersection with Varying Depths

![](img/cacheGraph.png)
