Oren-Nayar BRDF

I’ve had this code kicking around for ages, but it never worked correctly due to a bracket in the wrong place! I’ve finally got it working, so Imagine can now do realistic rough diffuse surfaces like clay.
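
For reference, the diffuse term is the standard qualitative Oren-Nayar approximation from the original paper; a minimal sketch of it (not a copy of Imagine’s actual code) looks something like this, with sigma being the roughness in radians:

    #include <algorithm>
    #include <cmath>

    // Qualitative Oren-Nayar diffuse BRDF (Oren & Nayar 1994 approximation).
    // thetaI / thetaO are the angles of the incoming / outgoing directions
    // from the surface normal, phiI / phiO their azimuthal angles, sigma the
    // surface roughness in radians, and albedo the diffuse reflectance.
    float orenNayar(float thetaI, float thetaO, float phiI, float phiO,
                    float sigma, float albedo)
    {
        const float kPi = 3.14159265358979f;

        float sigma2 = sigma * sigma;
        float A = 1.0f - 0.5f * sigma2 / (sigma2 + 0.33f);
        float B = 0.45f * sigma2 / (sigma2 + 0.09f);

        float alpha = std::max(thetaI, thetaO);
        float beta  = std::min(thetaI, thetaO);

        // The cosine of the azimuthal difference is clamped to zero.
        float cosDeltaPhi = std::max(0.0f, std::cos(phiI - phiO));

        return (albedo / kPi) *
               (A + B * cosDeltaPhi * std::sin(alpha) * std::tan(beta));
    }

With sigma set to zero this collapses back to plain Lambertian diffuse, which is a handy sanity check.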

The image below shows spheres with increasing roughness values, and a terracotta dragon.

Oren-Nayar BRDF example rendering




Displacement

Displacement example rendering

I’ve now added initial support for displacement of geometry after subdivision using displacement maps. It’s far from perfect yet: it’s quite slow when subdividing to the number of levels required for decent results (the two cubes above consist of 6.2 million faces, subdivided 10 times), the Half-Edge data structures I’m using for subdivision use a fair amount of memory, and there’s a very slight amount of faceting that doesn’t look correct.

The first two issues (speed and memory consumption) I can probably fix quite easily by making my Half-Edge data structures more compact and efficient - currently, each Half-Edge Vertex, Half-Edge Edge and Half-Edge Face stores pointers to the others, which, while easy, consumes more memory than required. Converting them to use offset indices (like my KDTree does) should bring memory usage down by half (instead of storing an 8-byte pointer, it’s possible to store a 4-byte uint32_t offset into a table).
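
As a sketch of the sort of change I mean (hypothetical names, not Imagine’s actual classes), replacing the pointers with 32-bit offsets into flat arrays might look something like this:

    #include <cstdint>
    #include <vector>

    // Hypothetical compact half-edge mesh: each element stores 4-byte offsets
    // into the mesh's flat arrays instead of 8-byte pointers to other
    // elements, roughly halving the per-link memory cost.
    const uint32_t kInvalidIndex = 0xFFFFFFFFu; // e.g. no pair edge on a boundary

    struct HalfEdge
    {
        uint32_t vertexIndex; // vertex this half-edge points to
        uint32_t pairIndex;   // opposite half-edge, or kInvalidIndex on a boundary
        uint32_t nextIndex;   // next half-edge around the face
        uint32_t faceIndex;   // face this half-edge belongs to
    };

    struct Vertex
    {
        float    position[3];
        uint32_t halfEdgeIndex; // one outgoing half-edge
    };

    struct Face
    {
        uint32_t halfEdgeIndex; // one half-edge on the face's loop
    };

    struct HalfEdgeMesh
    {
        std::vector<Vertex>   vertices;
        std::vector<HalfEdge> halfEdges;
        std::vector<Face>     faces;
    };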

It’s also not on-the-fly micro-polygon displacement - that’s more difficult (and slower while rendering) and can only be done on triangles (Loop subdivision is the normal variant), but it uses much less memory and allows an easy adaptive stopping criterion: checking each edge length in camera space to see whether it’s smaller than a pixel yet.
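
That stopping criterion would be roughly along these lines - a sketch assuming a hypothetical projectToScreen() helper that takes a world-space point into pixel coordinates:

    #include <cmath>

    struct Vec3 { float x, y, z; };
    struct Vec2 { float x, y; };
    struct Camera; // whatever camera representation the renderer uses

    // Hypothetical helper: transforms a world-space point into pixel
    // coordinates (view transform, perspective divide, viewport scale).
    Vec2 projectToScreen(const Vec3& worldPoint, const Camera& camera);

    // Keep dicing an edge while its projected length on the image plane is
    // still larger than the threshold (roughly one pixel).
    bool edgeNeedsSubdivision(const Vec3& worldStart, const Vec3& worldEnd,
                              const Camera& camera, float pixelThreshold)
    {
        Vec2 s0 = projectToScreen(worldStart, camera);
        Vec2 s1 = projectToScreen(worldEnd, camera);

        float dx = s1.x - s0.x;
        float dy = s1.y - s0.y;

        return std::sqrt(dx * dx + dy * dy) > pixelThreshold;
    }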

There’s also vector displacement (displacement in three dimensions, as opposed to simply along the vertex’s normal), which should be pretty trivial to implement.
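
In sketch form, the difference between the two is tiny (assuming, for vector displacement, that the map’s offsets are already in the same space as the positions - tangent-space maps would need transforming first):

    struct Vec3
    {
        float x, y, z;
    };

    Vec3 operator+(const Vec3& a, const Vec3& b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
    Vec3 operator*(const Vec3& v, float s)       { return { v.x * s, v.y * s, v.z * s }; }

    // Scalar displacement: the map stores a single height, and the vertex is
    // pushed along its (unit) normal by that amount.
    Vec3 displaceAlongNormal(const Vec3& position, const Vec3& unitNormal, float height)
    {
        return position + unitNormal * height;
    }

    // Vector displacement: the map stores a full 3D offset, so the vertex
    // can move in any direction.
    Vec3 displaceVector(const Vec3& position, const Vec3& offset)
    {
        return position + offset;
    }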



Subdivision

Imagine has had support for Catmull-Clark subdivision of quads within its interface for a while now, but the previous implementation had several limitations: it was slow and memory-inefficient (it just brute-forced the subdivision from the original faces), it didn’t support open shapes with holes and it didn’t keep UVs.

I’ve now re-written this to use a half-edge mesh structure, which makes it faster (less duplicate work is done), supports boundary edges (holes in geometry), supports triangles, and subdivides UV positions (linearly, at present). I’ve also added support for linear subdivision with no smoothing.
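
For reference, the smoothing rules for interior (non-boundary) elements are the standard textbook Catmull-Clark ones; sketched out (not a copy of Imagine’s code), they look like this:

    #include <vector>

    struct Vec3
    {
        float x = 0.0f, y = 0.0f, z = 0.0f;
        Vec3 operator+(const Vec3& o) const { return { x + o.x, y + o.y, z + o.z }; }
        Vec3 operator*(float s) const       { return { x * s, y * s, z * s }; }
    };

    // Average of a set of points.
    Vec3 average(const std::vector<Vec3>& points)
    {
        Vec3 sum;
        for (const Vec3& p : points)
            sum = sum + p;
        return sum * (1.0f / float(points.size()));
    }

    // Face point: average of the face's vertices.
    Vec3 facePoint(const std::vector<Vec3>& faceVertices)
    {
        return average(faceVertices);
    }

    // Edge point (interior edge): average of the edge's two endpoints and
    // the two adjacent face points.
    Vec3 edgePoint(const Vec3& v0, const Vec3& v1, const Vec3& f0, const Vec3& f1)
    {
        return (v0 + v1 + f0 + f1) * 0.25f;
    }

    // New position of an original vertex of valence n:
    //   (F + 2 * E + (n - 3) * V) / n
    // where F is the average of the adjacent face points, E the average of
    // the adjacent edge midpoints, and V the original vertex position.
    Vec3 vertexPoint(const Vec3& V, const Vec3& F, const Vec3& E, unsigned int n)
    {
        return (F + E * 2.0f + V * float(n - 3)) * (1.0f / float(n));
    }

Boundary edges and vertices use simpler curve rules (the edge midpoint, and a weighted average of the vertex and its two boundary neighbours), which is what allows open shapes with holes to subdivide sensibly.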

Once I sort out the Geometry Instance infrastructure for each object in order to allow Geometry Modifiers at render time, it should be fairly simple to add Displacement support.

Subdivision example



Wireframe Shader

Discussion at work turned to implementing a wireframe shader in Nuke, so I decided to see how difficult it would be in Imagine. As long as the mesh consists of polygons of a single type - i.e. triangles or quads - it turns out not to be difficult at all.

For triangles, it’s easy enough to emulate a wireframe surface by working out how close a hit position on a triangle is to each edge: transform the triangle’s points into world space, then use the standard point-to-line method of a perpendicular vector to each edge. This gives you the distance in world space from the hit position to each edge, and based on whichever distance is the closest, you can then apply a step function to the colour of the surface, based on the distance and a line thickness amount.
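
As a rough sketch (hypothetical vector types, not Imagine’s actual shader code), that boils down to something like:

    #include <algorithm>
    #include <cmath>

    struct Vec3 { float x, y, z; };

    static Vec3  sub(const Vec3& a, const Vec3& b)  { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    static float dot(const Vec3& a, const Vec3& b)  { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static float length(const Vec3& v)              { return std::sqrt(dot(v, v)); }

    // Distance from point p to the line through a and b - the
    // perpendicular-vector method mentioned above.
    float pointToLineDistance(const Vec3& p, const Vec3& a, const Vec3& b)
    {
        Vec3 ab = sub(b, a);
        Vec3 ap = sub(p, a);
        float t = dot(ap, ab) / dot(ab, ab); // projection of ap onto ab
        Vec3 closest = { a.x + ab.x * t, a.y + ab.y * t, a.z + ab.z * t };
        return length(sub(p, closest));
    }

    // 1.0 on the wire, 0.0 elsewhere: take the smallest distance to the
    // three (world-space) edges and step it against the line thickness.
    float wireframeTriangle(const Vec3& hitPoint, const Vec3& p0, const Vec3& p1,
                            const Vec3& p2, float lineThickness)
    {
        float d = std::min({ pointToLineDistance(hitPoint, p0, p1),
                             pointToLineDistance(hitPoint, p1, p2),
                             pointToLineDistance(hitPoint, p2, p0) });
        return (d < lineThickness) ? 1.0f : 0.0f;
    }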

Wireframe Triangle pattern

For quads, I first tried using the same algorithm as for triangles, but ignoring the edge of the triangle that was shared within the quad. This worked to some degree, but each quad had two opposing wedges in the corners where the point-to-line formula meant that parts of some edges weren’t shaded correctly.

So instead, I decided to use the Barycentric coordinates of the hit position within the triangle. This allowed me to correctly isolate all four edges of the quad based on a fixed threshold, but I then had to work out the line width and keep it uniform regardless of the length of the edges. In the end I multiplied the Barycentric coordinates of both the hit position and the inverse hit position (for the opposing edge of the quad) by the length of each of the non-shared edges of the triangle, giving a distance. The smallest of these distances I then used to step the colour, as I did for triangles. While this might not be perfectly accurate or work in all situations, it seems to work very well in practice and also allowed me to (almost) match the line thickness to the triangle method. It also looks very nice:

Wireframe Quads pattern
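
Reading that description back into code, my rough interpretation (hypothetical names, and only an approximation, as noted above) is something like this, where u and v are two of the barycentric coordinates of the hit position within the hit triangle, chosen so that u = 0 and v = 0 lie on the quad’s non-diagonal edges:

    #include <algorithm>

    // Rough sketch of the quad approach: the quad is split into two
    // triangles along a shared diagonal. edgeLenU / edgeLenV are the
    // world-space lengths of the hit triangle's two non-shared edges, and
    // (1 - u) / (1 - v) act as the "inverse" coordinates for the opposing
    // edges of the quad.
    float wireframeQuad(float u, float v, float edgeLenU, float edgeLenV,
                        float lineThickness)
    {
        float distances[4] = {
            u * edgeLenU,          // approximate distance to one quad edge
            (1.0f - u) * edgeLenU, // and to the opposing edge
            v * edgeLenV,
            (1.0f - v) * edgeLenV
        };

        float d = *std::min_element(distances, distances + 4);
        return (d < lineThickness) ? 1.0f : 0.0f;
    }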

There’s a very slight (~1%) overhead in shading, as triangles have to be fetched and transformed to world space, but both of the renderings above at full HD finished in under six minutes with 676 samples per pixel.

When I sort out the texturing infrastructure to make it more flexible, it should be very easy to apply this pattern as an alpha texture, giving a fully three-dimensional wireframe mesh that can be seen through and that casts shadows.



Reconstruction Filtering

After recently trying to render an animation with Imagine and ending up with heavy aliasing in the form of walking and flickering edges on small objects, I realised that I couldn’t put off adding proper Reconstruction Filtering to Imagine any longer.

With no filtering, each sample that contributes to a pixel’s final colour is weighted equally in the final result, which leads to heavy aliasing as objects move across the frame. With Reconstruction Filtering, each sample’s weighting in the final result is related to its distance from the centre of the pixel.
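
In sketch form (hypothetical names, not Imagine’s actual code), the accumulation ends up looking something like this, where the weight passed in is whichever reconstruction filter is in use, evaluated at the sample’s offset from the pixel centre (see the kernel sketches a little further down):

    struct Colour
    {
        float r = 0.0f, g = 0.0f, b = 0.0f;
    };

    // Accumulate samples into a pixel: instead of a straight average, each
    // sample's contribution is weighted by the filter, and the weights are
    // summed so the pixel can be normalised at the end.
    struct PixelAccumulator
    {
        Colour weightedSum;
        float  weightSum = 0.0f;

        // filterWeight is the reconstruction filter evaluated at the
        // sample's (dx, dy) offset from the pixel centre.
        void addSample(const Colour& sample, float filterWeight)
        {
            weightedSum.r += sample.r * filterWeight;
            weightedSum.g += sample.g * filterWeight;
            weightedSum.b += sample.b * filterWeight;
            weightSum += filterWeight;
        }

        Colour finalColour() const
        {
            if (weightSum == 0.0f)
                return Colour();
            return { weightedSum.r / weightSum,
                     weightedSum.g / weightSum,
                     weightedSum.b / weightSum };
        }
    };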

There are many different kinds of filters that can be used for Reconstruction Filtering, ranging from Box filtering (which is the same as no filtering, at least with a default filter width of 1), through Gaussian (smooth, slightly blurred), to Lanczos Sinc (very sharp, with negative lobes). Unfortunately, no filter is perfect for every scenario, although for stills you can generally get away with sharper filters.
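
For concreteness, the 1D forms of a couple of these kernels are below (these are the standard published formulas, not Imagine’s exact code; the 2D weight is just the product of the 1D weights in x and y, and the Mitchell-Netravali parameters use the usual B = C = 1/3):

    #include <cmath>

    // Triangle (tent) filter with the given radius: weight falls off
    // linearly from 1 at the centre to 0 at the edge of the support.
    float triangleFilter(float x, float radius)
    {
        float ax = std::fabs(x);
        return (ax < radius) ? (1.0f - ax / radius) : 0.0f;
    }

    // Mitchell-Netravali filter over a support of [-2, 2], with the commonly
    // used parameters B = C = 1/3.
    float mitchellFilter(float x, float B = 1.0f / 3.0f, float C = 1.0f / 3.0f)
    {
        float ax = std::fabs(x);
        if (ax < 1.0f)
        {
            return ((12.0f - 9.0f * B - 6.0f * C) * ax * ax * ax +
                    (-18.0f + 12.0f * B + 6.0f * C) * ax * ax +
                    (6.0f - 2.0f * B)) / 6.0f;
        }
        else if (ax < 2.0f)
        {
            return ((-B - 6.0f * C) * ax * ax * ax +
                    (6.0f * B + 30.0f * C) * ax * ax +
                    (-12.0f * B - 48.0f * C) * ax +
                    (8.0f * B + 24.0f * C)) / 6.0f;
        }
        return 0.0f;
    }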

I’ve currently implemented Triangle, Gaussian and Mitchell-Netravali filters, which, along with Box (no filtering), you can see enlarged examples of below:

Box:

Box filter

Triangle:

Triangle filter

Gaussian:

Gaussian filter

Mitchell-Netravali:

Mitchell-Netravali filter



Hidden Progress

A rendering of the Lucy Model

I’ve been doing a lot of back-end work to Imagine recently, with no new obvious features to show in terms of output images, but a thorough refactoring of some of the core code. I’ve also added remote rendering support, Deep Image (OpenEXR 2) output, adaptive rendering and refinement-progressive rendering.



Dragons (Instanced Geometry)

Instanced Dragons positioned in the shape of the model

Imagine has had support for instanced geometry since very early on, so this is nothing new, but I’ve added to the interface the ability to create instanced copies arranged in the shape of other meshes, which shows off the feature more impressively.

At some point, it would be nice to look at adding support for recursive procedurals like Katana supports, for effectively infinite scenes (dragons made up of dragons, made up of dragons, etc.).



Motion Blur

Motion Blur example render

I finally got the transformation infrastructure sorted out in order to facilitate adding transformation motion blur support to Imagine. Each ray sample per pixel has a stratified time sample over the shutter’s open duration. There’s a bit of overhead, as each ray (if it hits the expanded bounding box of the object’s bounds over the shutter time) has to interpolate the object’s transform matrix for its time delta.
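
A rough sketch of that per-ray work (hypothetical types, not Imagine’s actual code; a proper implementation would decompose the transforms and interpolate translation / rotation / scale separately rather than lerping matrix elements directly):

    #include <cstddef>
    #include <random>

    // Stratified time samples over the shutter interval: sample i gets a
    // jittered position within its own stratum of [shutterOpen, shutterClose].
    float stratifiedTimeSample(size_t sampleIndex, size_t numSamples,
                               float shutterOpen, float shutterClose,
                               std::mt19937& rng)
    {
        std::uniform_real_distribution<float> jitter(0.0f, 1.0f);
        float stratum = (float(sampleIndex) + jitter(rng)) / float(numSamples);
        return shutterOpen + stratum * (shutterClose - shutterOpen);
    }

    struct Matrix44
    {
        float m[4][4];
    };

    // Very rough transform interpolation for the ray's time delta t in
    // [0, 1]: a straight lerp of matrix elements. Decomposing and slerping
    // the rotation would behave better under large rotations.
    Matrix44 interpolateTransform(const Matrix44& start, const Matrix44& end, float t)
    {
        Matrix44 result;
        for (int row = 0; row < 4; row++)
            for (int col = 0; col < 4; col++)
                result.m[row][col] = start.m[row][col] * (1.0f - t) +
                                     end.m[row][col] * t;
        return result;
    }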

In addition, as with depth-of-field circle-of-confusion sampling, care has to be taken so that the time samples don’t correspond too closely with the camera’s pixel sample positions, otherwise aliasing occurs. Randomly shuffling the positions seems to work quite nicely.



Environment Light Importance Sampling

The majority of the time spent when raytracing is generally in intersecting rays with the objects in the scene. The two main ways to reduce the time taken to raytrace a scene are either to reduce the cost of each ray intersection (by using acceleration structures), or to send fewer rays.

Sending fewer rays obviously has a direct impact on the speed of rendering the scene, but care has to be taken to make sure image quality doesn’t suffer as a result: normally more samples per pixel are used in order to reduce the noise (or variance) in an image. There are a variety of techniques that can be used to maximise the effectiveness of the rays sent out into the scene, from adaptive super-sampling (sending out more rays per pixel when the samples within a pixel are dissimilar), to efficient sampling (ensuring a good distribution of samples across all sampling dimensions), through to importance sampling.

Importance Sampling is a technique whereby samples are distributed according to their importance: more samples are placed where the variable being integrated has the most effect on the variance of the overall end result.

An excellent example of this is with environment lighting - if we use the following environment map for lighting a scene:

Environment Map Stress Test Example

there are only two very small areas of illumination coming from the map. Without using importance sampling - by randomly sampling locations from the map - there is a low probability that any of the samples will actually correspond to the two coloured dots on the texture map. This means that either the image will be too dark (and won’t evaluate the two coloured dots correctly), or there will be very high variance where a few samples that sample the light pick up one of the coloured spots, but most don’t. This will result in extreme noise, as can be seen in this render without importance sampling:

A rendering lit with the test environment map without importance sampling

With importance sampling, it is possible to build up a two-dimensional map of importance based on the corresponding luminance of the environment map, which allows any light sample for the environment map to map to an actual position on the texture map where there is colour - i.e. one of the two coloured spots. The weighting of each sample has to be modified as well, in order not to bias the equation (by effectively assuming that the entire image is yellow and blue with no black), which would give unrealistic results.
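
The importance map essentially boils down to a discrete 2D distribution built from texel luminances: pick a row in proportion to its summed luminance, then a column within that row, and divide the sample’s contribution by the resulting probability. A sketch of the 1D building block (my own illustration of the standard approach, not Imagine’s code) might look like this:

    #include <algorithm>
    #include <vector>

    // Discrete 1D distribution built from non-negative weights (e.g. the
    // luminance of each texel in a row of the environment map, or the summed
    // luminance of each row; assumed non-empty). sample() maps a uniform
    // random number in [0, 1) to an index with probability proportional to
    // its weight, and returns that probability so the light sample can be
    // weighted correctly (avoiding the bias mentioned above).
    class Distribution1D
    {
    public:
        explicit Distribution1D(const std::vector<float>& weights)
            : m_cdf(weights.size() + 1, 0.0f)
        {
            for (size_t i = 0; i < weights.size(); i++)
                m_cdf[i + 1] = m_cdf[i] + weights[i];

            m_total = m_cdf.back();
            if (m_total > 0.0f)
            {
                for (float& value : m_cdf)
                    value /= m_total;
            }
        }

        // Returns the chosen index; probability receives the discrete
        // probability of having picked that index.
        size_t sample(float u, float& probability) const
        {
            // The first CDF entry greater than u identifies the bucket.
            size_t index = std::upper_bound(m_cdf.begin(), m_cdf.end(), u) -
                           m_cdf.begin();
            index = std::min(index - 1, m_cdf.size() - 2);
            probability = m_cdf[index + 1] - m_cdf[index];
            return index;
        }

    private:
        std::vector<float> m_cdf;
        float              m_total = 0.0f;
    };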

The final result with importance sampling can be seen below:

A rendering lit with the test environment map with importance sampling

Both images were rendered with 32 samples per pixel, and the noise reduction that importance sampling gives in this case is striking.



