I’ve spent a bit more time both getting some decent volumetric source data (via Mantaflow for the fluid simulations, and better self-created procedural clouds for the images below) and improving Imagine’s volumetric rendering capabilities.
Below are two rendered animations of smoke fluid simulations:
I’ve added importance sampling of mediums (not MIS yet - currently there’s a separate integrator for anything with volumetrics in it), which reduces noise a bit. I’ve also optimised several things: calculating the transmittance integration through the medium for lighting is now done with double the step distance that camera and other GI rays use (I could probably do the same for diffuse GI rays in the future), and I’ve added data window extents to my voxel buffer format. Together these give significant speedups, the latter especially with trilinear filtering and sparse volume extents.
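To illustrate the step-distance idea, here’s a minimal sketch of a ray-marched transmittance estimate with a step multiplier for lighting rays - the names (sigmaT, baseStepSize, stepMultiplier) are illustrative, not Imagine’s actual code:

```cpp
#include <cmath>

// Minimal sketch of ray-marched transmittance through a heterogeneous
// medium. ExtinctionFn is any callable returning sigma_t at a point;
// here it stands in for a voxel buffer lookup.
template <typename ExtinctionFn>
float transmittance(ExtinctionFn sigmaT,
                    const float origin[3], const float dir[3],
                    float maxDist, float baseStepSize,
                    float stepMultiplier)
{
    // Lighting (shadow) rays can pass a larger multiplier than camera
    // and GI rays, taking coarser steps at the cost of some accuracy.
    const float stepSize = baseStepSize * stepMultiplier;
    float opticalDepth = 0.0f;

    // Midpoint ray march, accumulating optical depth.
    for (float t = 0.5f * stepSize; t < maxDist; t += stepSize)
    {
        const float px = origin[0] + dir[0] * t;
        const float py = origin[1] + dir[1] * t;
        const float pz = origin[2] + dir[2] * t;
        opticalDepth += sigmaT(px, py, pz) * stepSize;
    }

    return std::exp(-opticalDepth); // Beer-Lambert attenuation
}
```

Doubling the multiplier for lighting rays halves the number of density lookups per shadow sample, which is where the speedup comes from.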
I’ve also added initial emission support, but the results are currently pretty noisy.
I’ve now got scene-wide homogeneous single scattering, as well as multiple scattering in heterogeneous volumes, rendering in Imagine.
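For the homogeneous case, transmittance has a closed form, so scattering distances can be sampled analytically rather than ray-marched. A minimal sketch, assuming a single extinction coefficient sigmaT (the names are illustrative):

```cpp
#include <cmath>
#include <random>

// Sketch of analytic distance sampling in a homogeneous medium: the
// transmittance is exp(-sigmaT * d), so a scattering distance can be
// drawn by inverting its CDF rather than by marching along the ray.
struct DistanceSample
{
    float distance; // sampled scattering distance along the ray
    float pdf;      // probability density of that distance
};

DistanceSample sampleScatterDistance(float sigmaT, float u)
{
    DistanceSample sample;
    sample.distance = -std::log(1.0f - u) / sigmaT;
    sample.pdf = sigmaT * std::exp(-sigmaT * sample.distance);
    return sample;
}

int main()
{
    std::mt19937 rng(42);
    std::uniform_real_distribution<float> uniform(0.0f, 1.0f);

    const float sigmaT = 0.5f; // extinction coefficient of the medium
    const DistanceSample s = sampleScatterDistance(sigmaT, uniform(rng));
    // If s.distance lies beyond the next surface hit, the ray passes
    // through without scattering; otherwise a scattering event is
    // evaluated at that distance.
    (void)s;
    return 0;
}
```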
I’m not doing importance sampling yet, so there’s room for improvement there, and I’m not taking possible emission into account yet either - when I do, I might try to get a black body shader working for the emission values.
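The core of such a black body shader would presumably just be Planck’s law, mapping a temperature to spectral radiance - a rough sketch (a real shader would still need to integrate the spectrum against colour-matching functions to get RGB):

```cpp
#include <cmath>

// Rough sketch of the core of a black body emission shader: Planck's
// law, giving spectral radiance for wavelength lambda (in metres) at
// temperature T (in Kelvin).
double planckRadiance(double lambda, double temperature)
{
    const double h = 6.62607015e-34; // Planck constant (J.s)
    const double c = 2.99792458e8;   // speed of light (m/s)
    const double kB = 1.380649e-23;  // Boltzmann constant (J/K)

    const double numerator = 2.0 * h * c * c / std::pow(lambda, 5.0);
    const double exponent = (h * c) / (lambda * kB * temperature);
    return numerator / (std::exp(exponent) - 1.0);
}
```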
The above image is a procedural pyroclastic cloud. For the moment I’m not going to spend too much time modelling volumes, as that’s quite a task in itself, but I’ve implemented a dense grid, and at some point I’ll try to implement a sparse grid for better memory efficiency and storage. I could have used Field3D’s versions, but the dense version is trivially simple anyway, and ignoring memory limits and paging, I don’t foresee the sparse version being that difficult either; plus the dependency on HDF5 is more trouble than it’s worth.
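As an idea of how trivially simple the dense version is, here’s a minimal sketch of a dense grid with trilinear filtering - the class and member names are illustrative, not Imagine’s actual API:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Minimal sketch of a dense voxel grid with trilinearly-filtered lookups.
class DenseGrid
{
public:
    DenseGrid(int resX, int resY, int resZ)
        : m_resX(resX), m_resY(resY), m_resZ(resZ),
          m_voxels(static_cast<size_t>(resX) * resY * resZ, 0.0f)
    {
    }

    // Direct (writable) voxel access.
    float& voxel(int x, int y, int z)
    {
        return m_voxels[(static_cast<size_t>(z) * m_resY + y) * m_resX + x];
    }

    // Trilinearly-filtered lookup at a continuous voxel-space position.
    float sampleTrilinear(float x, float y, float z) const
    {
        const int x0 = clampIndex(static_cast<int>(std::floor(x)), m_resX - 1);
        const int y0 = clampIndex(static_cast<int>(std::floor(y)), m_resY - 1);
        const int z0 = clampIndex(static_cast<int>(std::floor(z)), m_resZ - 1);
        const int x1 = std::min(x0 + 1, m_resX - 1);
        const int y1 = std::min(y0 + 1, m_resY - 1);
        const int z1 = std::min(z0 + 1, m_resZ - 1);

        const float fx = x - static_cast<float>(x0);
        const float fy = y - static_cast<float>(y0);
        const float fz = z - static_cast<float>(z0);

        // Lerp along X, then Y, then Z between the eight corner voxels.
        const float c00 = lerp(value(x0, y0, z0), value(x1, y0, z0), fx);
        const float c10 = lerp(value(x0, y1, z0), value(x1, y1, z0), fx);
        const float c01 = lerp(value(x0, y0, z1), value(x1, y0, z1), fx);
        const float c11 = lerp(value(x0, y1, z1), value(x1, y1, z1), fx);

        const float c0 = lerp(c00, c10, fy);
        const float c1 = lerp(c01, c11, fy);

        return lerp(c0, c1, fz);
    }

private:
    float value(int x, int y, int z) const
    {
        return m_voxels[(static_cast<size_t>(z) * m_resY + y) * m_resX + x];
    }

    static int clampIndex(int v, int maxIndex)
    {
        return std::max(0, std::min(v, maxIndex));
    }

    static float lerp(float a, float b, float t) { return a + (b - a) * t; }

    int m_resX, m_resY, m_resZ;
    std::vector<float> m_voxels;
};
```

The data window extents mentioned above let lookups like this skip straight to zero outside the populated region, instead of filtering empty voxels.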
I was tempted to try to get OpenVDB integrated, but the Boost and Intel TBB dependencies put me off for the moment, although it does seem a nice solution.
Above is a bound single-scattering medium in a Cornell Box, with the cliché spotlight.
For scene-wide mediums, there’s an interesting limitation which in hindsight is obvious, but which I hadn’t thought of before: image-based (or environment) lighting doesn’t really work. Because transmittance falls off exponentially with distance (Beer’s law), light from an environment at infinity or at large distances is almost completely extinguished by the medium before it reaches anything, and while it’s usable to a limited extent at much closer values, it’s not really practical. So I’m not really sure how useful scene-wide mediums are in practice - in production, they seem to be mainly used for god-rays anyway, so…
I’ve created a translucent material type for Imagine which allows a more artist-friendly way of specifying the colour and transparency of translucent objects, without having to work out the absorption and scattering coefficients that control the medium interaction for scattering events and the resulting transmission - coefficients which are very unintuitive until you understand what’s going on internally.
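One plausible version of this kind of conversion inverts Beer-Lambert to get an extinction coefficient from a transparency value, then splits it by colour - a hypothetical sketch of the idea, not necessarily the mapping Imagine uses:

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical mapping from artist-friendly parameters to medium
// coefficients - a guess at the kind of conversion such a material
// might do, not Imagine's actual mapping.
struct Coefficients
{
    float sigmaA[3]; // absorption coefficient per channel
    float sigmaS[3]; // scattering coefficient per channel
};

// colour:       per-channel scattering albedo, in [0, 1]
// transparency: fraction of light surviving a path of length 'distance'
Coefficients fromArtistParams(const float colour[3], float transparency,
                              float distance)
{
    // Invert Beer-Lambert: transparency = exp(-sigmaT * distance).
    const float sigmaT = -std::log(std::max(transparency, 1e-6f)) / distance;

    Coefficients out;
    for (int i = 0; i < 3; ++i)
    {
        out.sigmaS[i] = colour[i] * sigmaT;     // scattering part
        out.sigmaA[i] = sigmaT - out.sigmaS[i]; // the remainder absorbs
    }
    return out;
}
```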
I’m fairly certain it’s not physically accurate, but it seems to give pleasing results, although I’m still not convinced of the correctness of my implementation, as it’s very easy to produce extremely noisy results.
I got a copy of Production Volume Rendering for Christmas, so I’m going to be looking into heterogeneous volumes in the future.
I’ve just got subsurface scattering working via volumetric rendering with the Henyey-Greenstein phase function. This method is completely cache-less (unlike the dipole method), and I’m currently using single scattering and attenuated direct illumination through the medium to work out the lighting.
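For reference, the Henyey-Greenstein phase function and its standard sampling inversion are compact enough to sketch in a few lines (using the convention that cosTheta is the cosine of the scattering angle; the function names are illustrative):

```cpp
#include <cmath>

const float kInvFourPi = 0.25f / 3.14159265358979f;

// Henyey-Greenstein phase function. g in (-1, 1) controls anisotropy:
// positive g is forward-scattering, negative g back-scattering, g = 0
// is isotropic.
float henyeyGreenstein(float cosTheta, float g)
{
    const float denom = 1.0f + g * g - 2.0f * g * cosTheta;
    return kInvFourPi * (1.0f - g * g) / (denom * std::sqrt(denom));
}

// Sample cosTheta proportionally to the phase function, given a
// uniform random number u in [0, 1).
float sampleHenyeyGreensteinCosTheta(float g, float u)
{
    if (std::fabs(g) < 1e-3f)
        return 1.0f - 2.0f * u; // isotropic limit

    const float term = (1.0f - g * g) / (1.0f - g + 2.0f * g * u);
    return (1.0f + g * g - term * term) / (2.0f * g);
}
```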
The surface interactions aren’t currently physically accurate: I’m just using a diffuse (cosine-weighted hemisphere direction) transmissive BSDF with no refraction to allow rays to enter the mesh and the medium, instead of using the BSDF and any IOR of the medium to work out the entry direction through the surface correctly. But the results look pretty good, and most of the infrastructure’s in place now.
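That diffuse transmissive entry amounts to sampling a cosine-weighted direction around the flipped surface normal - a minimal sketch (Vec3 and the function name are illustrative):

```cpp
#include <cmath>

struct Vec3
{
    float x, y, z;
};

// Cosine-weighted direction in the hemisphere around 'normal', from
// two uniform random numbers u1, u2 in [0, 1).
Vec3 cosineSampleHemisphere(const Vec3& normal, float u1, float u2)
{
    const float pi = 3.14159265358979f;
    const float r = std::sqrt(u1);
    const float phi = 2.0f * pi * u2;

    // Local sample with Z along the hemisphere axis.
    const float lx = r * std::cos(phi);
    const float ly = r * std::sin(phi);
    const float lz = std::sqrt(1.0f - u1);

    // Build an orthonormal basis (t, b, n) around the normal.
    const Vec3 n = normal;
    Vec3 t = (std::fabs(n.x) > 0.1f) ? Vec3{0.0f, 1.0f, 0.0f}
                                     : Vec3{1.0f, 0.0f, 0.0f};
    Vec3 b = { n.y * t.z - n.z * t.y,   // b = normalize(cross(n, t))
               n.z * t.x - n.x * t.z,
               n.x * t.y - n.y * t.x };
    const float bLen = std::sqrt(b.x * b.x + b.y * b.y + b.z * b.z);
    b = { b.x / bLen, b.y / bLen, b.z / bLen };
    t = { b.y * n.z - b.z * n.y,        // t = cross(b, n)
          b.z * n.x - b.x * n.z,
          b.x * n.y - b.y * n.x };

    // Transform the local sample into world space.
    return { lx * t.x + ly * b.x + lz * n.x,
             lx * t.y + ly * b.y + lz * n.y,
             lx * t.z + ly * b.z + lz * n.z };
}

// For transmission into the medium, sample around the negated normal:
//   Vec3 entryDir = cosineSampleHemisphere(Vec3{-n.x, -n.y, -n.z}, u1, u2);
```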