Basic Monte Carlo Path Tracer

In this assignment I implemented a very basic path tracer (I did not have an existing ray tracer to extend). Since I started from nothing, most of the work went into getting the core parts working correctly. On the other hand, having no legacy code gave me more freedom in shaping the overall framework. However, I also spent too much time on the basics, so I did not have enough time left to implement more important features such as environment lighting and an acceleration structure.

Difficulties encountered

1. Defining the coordinate system: I had trouble keeping track of which vector is "up" in local versus global coordinates. This is tricky to debug, since we often assume z-up, while the eye-film intersection gives -y-up!

2. Upper-hemisphere sampling, which sounds straightforward, but I had problems defining local coordinates from global vectors. Since the local frame is not directly related to the object_to_world transform, I had to define it explicitly. In the end I use the normal as the Y-axis and the wi/n plane as the Y-X plane.
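As an illustration of this frame construction, here is a minimal sketch of a cosine-weighted hemisphere sample with the normal as the local Y-axis. The types and helper names are simplified stand-ins, not my actual classes:

```cpp
#include <cassert>
#include <cmath>

// Illustrative minimal vector type (not my actual Vector class).
struct Vec3 { double x, y, z; };

static double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
static Vec3 normalize(const Vec3& v) {
    double len = std::sqrt(dot(v, v));
    return { v.x/len, v.y/len, v.z/len };
}

// Build a local frame with the (unit) surface normal as the Y-axis.
// Choosing the helper axis least aligned with n avoids a degenerate cross product.
static void buildFrame(const Vec3& n, Vec3& t, Vec3& b) {
    Vec3 helper = (std::fabs(n.x) > 0.9) ? Vec3{0, 1, 0} : Vec3{1, 0, 0};
    t = normalize(cross(helper, n));
    b = cross(n, t);
}

// Map uniform (u1, u2) in [0,1)^2 to a cosine-weighted direction about n.
Vec3 cosineSampleHemisphere(const Vec3& n, double u1, double u2) {
    const double PI = 3.14159265358979323846;
    Vec3 t, b;
    buildFrame(n, t, b);
    double r = std::sqrt(u1), phi = 2.0 * PI * u2;
    double lx = r * std::cos(phi), lz = r * std::sin(phi);
    double ly = std::sqrt(1.0 - u1);   // component along the normal (local Y)
    return { t.x*lx + n.x*ly + b.x*lz,
             t.y*lx + n.y*ly + b.y*lz,
             t.z*lx + n.z*ly + b.z*lz };
}
```

The returned direction is always in the upper hemisphere of n and has unit length, regardless of which global axis the normal happens to point along.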

3. Getting the weighting correct! The caveat given near the deadline on how to weight sampled rays correctly was really helpful. I spent several days figuring out what was wrong with my indirect illumination (there was color bleeding, but the overall scene looked too dark) and my area light sampling. Overall, I think correct weighting is the most important factor in getting a Monte Carlo path tracer to work right!

Overall Framework

My framework is influenced by PBRT, since it has a nice class hierarchy and clearly defined interfaces. The core integrator, however, can differ dramatically (since I am not doing any fancier integration), and I suspect PBRT manages memory much better!

Here are the core abstract classes I implemented, with their derived classes listed after each:

- Intersectable: Shape, Light

- Shape: Plane, Sphere, Box, Triangle, Disk

- Light: AreaLight, SquareLight, DiskLight

- BRDF: Matte (diffuse), Plastic (Phong), Anisotropic

- Camera: PinholeCamera, RealisticCamera

Other important core classes include Film (no HDR/tone mapping), Scene, Aggregate (currently no acceleration data structure implemented), and Spectrum (RGB only). In addition, I used and modified some basic classes from PBRT, including Vector, Normal, Ray, Transform, and BBox.

The flow for a single ray starts from Scene::Render:

1. Get a ray from the Camera, which obtains the sample position from the Film at a given pixel.

2. Call the Aggregate to trace that ray. The Aggregate class contains every intersectable Shape and Light.

3. Recursively call "Spectrum returnedColor = TraceRay(ray, weight)":

a. Determine whether to stop the ray, based on MAX_PATH and Russian roulette (applied only when weight < 0.001; if the ray survives the roulette, its weight is rescaled).

b. Check whether the ray intersects anything. If not, return black; otherwise trace a shadow ray and a reflected ray. (Refraction is not implemented at surfaces, but is used in the realistic camera lens simulation.)

i. Shadow ray: first sample a point on the AreaLight, then trace a shadow ray to check visibility. If visible, weight the returned Le appropriately.

ii. Reflected ray: sample (using the BRDF class) a random direction in the hemisphere and recursively trace that ray. Weight the returned radiance appropriately.

4. Once TraceRay() returns, set the corresponding pixel color. (Since I did not implement a shutter-time mechanism, the spectral power of the lights needs to be adjusted from scene to scene and for different cameras.)
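The termination test in step 3a can be sketched as follows. The 0.001 threshold and MAX_PATH follow the description above; the survival probability q is an illustrative choice, not necessarily what my code uses:

```cpp
#include <cassert>

const int MAX_PATH = 5;   // illustrative depth cutoff

// Returns true if the path should be killed. When the weight is small,
// Russian roulette kills the path with probability (1 - q); survivors
// are rescaled by 1/q so the estimator stays unbiased. u is a uniform
// random number in [0, 1).
bool shouldTerminate(int depth, double& weight, double u, double q = 0.5) {
    if (depth >= MAX_PATH) return true;
    if (weight < 0.001) {
        if (u >= q) return true;   // killed by the roulette
        weight /= q;               // survivor compensates for killed paths
    }
    return false;
}
```

The key point is the rescaling on survival: without it, terminating low-weight paths would systematically darken the image, which is exactly the kind of bias I fought with in the weighting.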

Soft shadows are achieved by implementing AreaLight and its sampling technique. As noted in class, sampling an AreaLight requires proper weighting, namely the geometric term cos(theta) * cos(theta') / d^2 (the cosines at the shaded point and at the light sample, over the squared distance), divided by the sampling PDF, which is 1/A for uniform sampling over the light's area A.
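A sketch of that per-sample weight, assuming uniform sampling over the light's area (the function name and argument names are illustrative):

```cpp
#include <cassert>
#include <cmath>

// Weight for one shadow-ray sample toward an area light, sampled uniformly
// over its surface (pdf = 1/A). cosS is the cosine at the shaded point,
// cosL the cosine at the light sample, d the distance between them.
double areaLightWeight(double cosS, double cosL, double d, double area) {
    if (cosS <= 0.0 || cosL <= 0.0) return 0.0;   // surface or light faces away
    double G = (cosS * cosL) / (d * d);           // geometric coupling term
    double pdf = 1.0 / area;                      // uniform area sampling
    return G / pdf;                               // i.e. G * area
}
```

The visible radiance contribution is then Le times the BRDF times this weight; larger lights both soften the shadow and, via the 1/pdf factor, keep the estimator consistent.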

In the following example I vary the size of the light source (increasing the power of the light accordingly); as can be seen, the shadow becomes softer.

Materials

I implemented three kinds of material: Matte (diffuse), Plastic (diffuse + specular), and Anisotropic. The first two are straightforward. For the Anisotropic material, I implemented the reflectance function described in the paper "Measuring and Modeling Anisotropic Reflection" by Greg Ward. The method models the specular lobe of the BRDF with an anisotropic elliptical Gaussian; in his Eq. 5a, the specular reflectance is (roughly speaking) stretched along the brush direction. There is still a bug in my implementation, but we can already see an interesting metallic groove look on the sphere.
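As a sketch, the specular term of Ward's Eq. 5a can be evaluated like this, in a local frame with the normal along the Y-axis (matching my frame convention). The type and parameter names are illustrative, not my actual classes:

```cpp
#include <cassert>
#include <cmath>

struct V3 { double x, y, z; };   // illustrative local-frame direction

// Ward's anisotropic specular term: rho_s * exp(-tan^2(delta) *
// (cos^2(phi)/ax^2 + sin^2(phi)/ay^2)) / (4*pi*ax*ay*sqrt(cos_i*cos_o)),
// where delta/phi are the half vector's polar/azimuth angles. ax and ay
// are the roughness along the two tangent (brush) directions.
double wardSpecular(const V3& wi, const V3& wo,
                    double rho_s, double ax, double ay) {
    const double PI = 3.14159265358979323846;
    if (wi.y <= 0.0 || wo.y <= 0.0) return 0.0;
    // Un-normalized half vector is enough: the exponent only uses ratios.
    V3 h { wi.x + wo.x, wi.y + wo.y, wi.z + wo.z };
    // tan^2(delta)*(cos^2(phi)/ax^2 + sin^2(phi)/ay^2) in terms of h:
    double expo = -((h.x*h.x)/(ax*ax) + (h.z*h.z)/(ay*ay)) / (h.y*h.y);
    return rho_s * std::exp(expo)
         / (4.0 * PI * ax * ay * std::sqrt(wi.y * wo.y));
}
```

Making ax and ay unequal is what produces the elongated, groove-like highlight; swapping them rotates the groove direction by 90 degrees, which is the change shown in the next example.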

In the example below, the two images demonstrate the three kinds of material. On the left I compare a Plastic and a diffuse sphere; on the right I compare Plastic and the Anisotropic Gaussian material. Notice how the reflected light spreads differently.

[Figures: Plastic and Diffuse | Plastic and Anisotropic, groove in one direction]

In the next example I change the anisotropy to the other direction, which causes the light to reflect in a ring. Here the light moves from front-left to front-right; observe how the ring-shaped highlight moves.

Global Illumination

Here I show the Cornell Box example. The first image is traced from a hand-tuned scene that mimics the Cornell Box setup (the light powers and reflectance coefficients were tuned empirically). Although the spectrum does not look identical, color bleeding, soft shadows, and multiple-bounce inter-reflections are modeled properly.

To elucidate each iteration of path tracing, I reproduce the illustration by Dutré (found in Jason Lawrence's slides). Note that their setup differs slightly from the Cornell Box in that they add an additional front wall. (This effect can be seen on the front face of the nearer, smaller box, which is brighter with the front wall in Dutré's result and darker in the Cornell Box reference images.)

[Figures: Cornell Box setting | Dutré's setting | Direct illumination TLe | 1st-bounce indirect illumination T^2Le | 2nd-bounce indirect illumination T^3Le]

Top row: my results (in all images the light source is included in the display). Bottom row: reference results.

Here is another, clearer example demonstrating direct versus indirect illumination. A blocker is placed just above two spheres, with the left sphere completely occluded from the light and the right one partially occluded.

In the direct illumination, the left sphere receives no rays from the light and is thus completely black. The soft shadow around the edges of the blocker is clearly visible, which is much less obvious under full global illumination.

In the 1st-bounce indirect illumination, the right side of the left sphere and the red wall become visible, lit by light reflected off the blue wall. The blocker also reflects most of its energy back toward the light, and the right sphere reflects light onto the bottom of the blocker.

In the 2nd-bounce indirect illumination, there is even more interesting reflectance on the left side of the left sphere: a red tint, coming from the 1st bounce off the red wall!

[Figures: Blocker occluding the light | Direct illumination TLe (brightness enhanced) | 1st-bounce indirect illumination T^2Le (brightness enhanced) | 2nd-bounce indirect illumination T^3Le (brightness enhanced)]

Realistic Camera

In addition to the original pinhole camera, I also implemented a realistic camera with a lens set, as described in the paper "A Realistic Camera Model for Computer Graphics" by Craig Kolb, Don Mitchell, and Pat Hanrahan. The basic idea is to simulate a set of lens elements in front of the film, refracting rays through each element and passing them through the aperture. To generate a ray, we sample a point on a circular disk on the rear lens element (closest to the film), weight the ray properly (using their Eq. 8), and trace it through the lens set. The lens specifications come from the paper and are included in the project package as text files. Although the paper describes a thick-lens approximation, I did not implement it. The results include three lens sets in addition to the pinhole.
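The per-interface refraction step can be sketched with Snell's law. Intersection with the spherical elements and the Eq. 8 weighting are omitted here, and the names are illustrative:

```cpp
#include <cassert>
#include <cmath>

struct Dir { double x, y, z; };   // illustrative unit direction

// Snell refraction of a unit direction d at an interface with unit normal n
// (pointing against d), going from refractive index n1 into n2. Returns
// false on total internal reflection. This is the step applied at each
// lens element surface while tracing a ray from the film to the scene.
bool refract(const Dir& d, const Dir& n, double n1, double n2, Dir& out) {
    double eta = n1 / n2;
    double cosI = -(d.x*n.x + d.y*n.y + d.z*n.z);
    double sinT2 = eta * eta * (1.0 - cosI * cosI);
    if (sinT2 > 1.0) return false;                 // total internal reflection
    double cosT = std::sqrt(1.0 - sinT2);
    double k = eta * cosI - cosT;
    out = { eta*d.x + k*n.x, eta*d.y + k*n.y, eta*d.z + k*n.z };
    return true;
}
```

A ray that fails this test at any element (or misses the aperture stop) simply contributes black, which is part of how the lens simulation produces vignetting.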

The pinhole camera is boring; compared with it, the Double Gauss lens gives some zoom-in and a little depth of field. With the Fish-Eye lens, the scene is severely warped into a circular disk: the square light is warped into a non-rectangular shape, and the whole scene is squeezed toward the center of the final image. With the Telescope lens, things are (as expected) zoomed in greatly (note that I changed the viewing direction slightly to improve the view). Depth of field is much stronger with this lens: the first box has the sharpest edges, and objects become gradually blurrier as distance increases.

[Figures: Pinhole | Double Gauss | Fish-Eye | Telescope]