Heatmap / Continuous Colour Scale Gradient Palettes

Having previously implemented support for image value remapping in my image-processing infrastructure, I’ve been looking into generating better gradient colour mapping schemes / palettes than the initial hard-coded HSV ramp I used.

The thermal camera I have can bake one or two pretty nice presets into its images, but as previously discussed, I wanted the flexibility to “tone-map” them myself afterwards, for more creative freedom.

There is a wide range of colour schemes in various plotting and visualisation libraries and tools, with different schemes being more or less suitable depending on the use-case and the values being represented. In particular, having many strong colours in a gradient can make the resulting mappings look pretty or striking, but it often makes the magnitude of values harder to perceive than with simpler gradients; on the other hand, it does have the advantage that any difference in values is mapped to a more obvious change in colour.

More recent colour mapping schemes have aimed to improve aspects such as being perceptually uniform, and being friendlier to those affected by colour blindness, whilst still producing visually appealing results.

The docs page for Plots in Julia has a nice set of example colour schemes / palettes, so I decided to try copying some of the more interesting colour mappings from those schemes to see how they looked, essentially by just interpolating splines through linear colour values to evaluate the gradients (a minimal sketch of the idea is below), which for my purposes should be sufficient.
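
As a minimal sketch of the idea (using piecewise-linear rather than spline interpolation for brevity; the function names and control-point values here are illustrative assumptions of my own, not the actual implementation):

```typescript
type RGB = [number, number, number]; // linear colour values, 0..1

// Evaluate a gradient at t in [0, 1] by interpolating between evenly-spaced
// control points. A spline would give smoother transitions; plain linear
// interpolation is shown here for brevity.
function evaluateGradient(stops: RGB[], t: number): RGB {
    const clamped = Math.min(Math.max(t, 0.0), 1.0);
    const scaled = clamped * (stops.length - 1);
    const i = Math.min(Math.floor(scaled), stops.length - 2);
    const f = scaled - i; // fractional position between stops i and i+1
    const a = stops[i];
    const b = stops[i + 1];
    return [a[0] + (b[0] - a[0]) * f,
            a[1] + (b[1] - a[1]) * f,
            a[2] + (b[2] - a[2]) * f];
}

// Example: a rough 'Heat'-style ramp (black -> red -> yellow -> white);
// the control points are approximate, not exact values from any scheme.
const heatStops: RGB[] = [[0, 0, 0], [1, 0, 0], [1, 1, 0], [1, 1, 1]];
const colour = evaluateGradient(heatStops, 0.75); // remapped pixel value -> colour
```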

Example gradient strips:

Hue (reversed):

Hue gradient ramp

At first glance, this gradient palette that varies the hue looks nice, but the similarity in redness at either end (red and magenta), due to the hue of HSV being a circular mapping (to a cone or cylinder), means that the colours at the extremities are somewhat difficult to tell apart, especially for those affected by colour blindness. It’s also more difficult to interpret a “rainbow” of colours to gauge the magnitude of values, so in practice this gradient doesn’t seem that well suited to many visualisations.

Heat:

Heat gradient ramp

This is very effective and simple for both thermal images and density plots / maps, and is essentially a limited, rough approximation of “black-body” temperature emission colour (it stops at white, rather than continuing on to blues).

Inferno:

Inferno gradient ramp

This is also very effective, and seems to closely match one of the presets my thermal camera can bake into its images.

Jet:

Jet gradient ramp

A colourful gradient that has better colour separation at either end than Hue, although it still likely suffers from the “too many colours” issue when gauging value magnitude, compared to simpler schemes.

Plasma:

Plasma gradient ramp

A more limited gradient, covering a narrower range of hues than the more colourful schemes above.

Turbo:

Turbo gradient ramp

Visually quite pleasing, but again, as this was largely designed for visualising disparity and depth maps, it isn’t primarily aimed at making value magnitude easy to gauge, although it is an improvement over Jet.

Viridis:

Viridis gradient ramp

A very pleasing colour gradient that seems to work well for density plots.

Thermal Image examples:

Collage of example thermal image colour schemes. Clockwise from top-left: Hue, Heat, Inferno, Turbo, Plasma, Jet.

Shipping density map plot examples:

Collage of example shipping density map plot colour schemes. Clockwise from top-left: Heat, Inferno, Jet, Viridis, Turbo, Plasma.

So far, of the colour gradient schemes I’ve played around with, I think Heat and Inferno are my favourites for thermal images, and they seem to work very well for density plots and maps too. Viridis may also become one of my go-to colour gradient schemes for density plots, as it appears to produce nice results as well.



Great-Circle Route Visualisation with three.js

Great-Circle Route Image

Over the past few years, as well as learning the Rust programming language, I’ve also been attempting to learn more about the JavaScript ecosystem for front-end web development, although the seemingly large number of different libraries/frameworks available has made knowing where to start a somewhat daunting task. Because of this, I decided not to just learn whichever generic web frameworks happened to be the most popular at the time, but to explicitly use ones that seemed to match well with particular use-cases I had in mind.

In this case, I’ve been writing a basic web app for visualising Great-Circle Routes (link to app here) between common airports. Various tools for this already exist on the web in some form or another, but many produce only flat 2D visualisations, and while there are one or two showing the routes on a 3D globe, they weren’t exactly to my liking. Regardless, it provided me with an interesting use-case with which to gain some more experience with JavaScript and web-based real-time rendering.

I did at first think about sticking to core vanilla JavaScript and using WebGL directly, with no additional frameworks/libraries, but given I wanted something working fairly quickly, I decided that using a library would be the more efficient solution, and a quick look at https://threejs.org/ and its capabilities reinforced that decision.

In the end, it was pretty simple to come up with a basic working example: a sphere mesh with a diffuse Earth texture, an orbitable camera, and “route” line meshes representing the Great-Circle Route and the “flat” 2D route in question, drawn from spherical coordinates in 3D space derived from the source latitude and longitude coordinates. The great-circle distance between two points was calculated using the Haversine formula, with the route itself traced by interpolating points along the arc (see the sketch below).
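
As an illustrative sketch of that tracing step (not the app’s actual code: the function names, airport coordinates and Y-up lat/long convention are assumptions of my own), the arc can be generated by spherically interpolating between the two endpoint directions. The central angle computed from the dot product below is equivalent to the haversine result; haversine is just more numerically robust for nearly-coincident points.

```typescript
import * as THREE from 'three';

// Convert latitude/longitude (in degrees) to a position on a sphere of the
// given radius, using a Y-up convention (an assumption for this sketch).
function latLongToVector3(latDeg: number, lonDeg: number, radius: number): THREE.Vector3 {
    const lat = THREE.MathUtils.degToRad(latDeg);
    const lon = THREE.MathUtils.degToRad(lonDeg);
    return new THREE.Vector3(radius * Math.cos(lat) * Math.cos(lon),
                             radius * Math.sin(lat),
                             radius * Math.cos(lat) * Math.sin(lon));
}

// Trace the great-circle arc between two points on the sphere by slerping
// between their unit direction vectors. Degenerates if the points coincide
// or are antipodal (sin(angle) ~ 0); a real implementation should guard that.
function greatCirclePoints(start: THREE.Vector3, end: THREE.Vector3,
                           radius: number, segments: number = 64): THREE.Vector3[] {
    const a = start.clone().normalize();
    const b = end.clone().normalize();
    const angle = Math.acos(THREE.MathUtils.clamp(a.dot(b), -1.0, 1.0));
    const points: THREE.Vector3[] = [];
    for (let i = 0; i <= segments; i++) {
        const t = i / segments;
        const p = a.clone().multiplyScalar(Math.sin((1.0 - t) * angle))
                   .add(b.clone().multiplyScalar(Math.sin(t * angle)))
                   .divideScalar(Math.sin(angle))
                   .multiplyScalar(radius);
        points.push(p);
    }
    return points;
}

// Usage: build a line mesh for the route (coordinates here are approximate).
const lhr = latLongToVector3(51.47, -0.4543, 1.0);
const jfk = latLongToVector3(40.6413, -73.7781, 1.0);
const routeGeom = new THREE.BufferGeometry().setFromPoints(greatCirclePoints(lhr, jfk, 1.0));
const routeLine = new THREE.Line(routeGeom, new THREE.LineBasicMaterial({ color: 0xff4040 }));
```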

The most “complicated” part of the implementation ended up being the airport text labels. It was simple enough to use THREE.CSS2DObject to construct high-quality DOM text items that appear at the correct 2D screen position based on the camera/Earth position of the respective airports on the sphere, but occlusion/visibility handling isn’t done automatically with this setup, as the DOM elements are not actually part of the 3D scene: they’re layered on top. So in order to hide the label text of airports when they’re occluded by the Earth sphere geometry (i.e. when they’re around the back of the Earth relative to the camera), I ended up having to send Raycaster queries to detect the visibility of the airports’ positions from the camera, and adjust the visibility of the airport label CSS2DObject objects based on the result of these queries within the main render loop (see the sketch below). This was still quite easy to do, and seems to work well enough, although performing the raycaster queries within the render loop is probably not the most optimal solution in terms of efficiency, so I might need to look into better ways of handling that.
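
A minimal sketch of that per-frame check, assuming illustrative names (updateLabelVisibility, the epsilon value) that aren’t from the actual app:

```typescript
import * as THREE from 'three';
import { CSS2DObject } from 'three/examples/jsm/renderers/CSS2DRenderer.js';

const raycaster = new THREE.Raycaster();

// Per-frame visibility check: hide an airport label when the Earth sphere
// occludes the airport's position from the camera.
function updateLabelVisibility(camera: THREE.Camera, earthMesh: THREE.Mesh,
                               airportPos: THREE.Vector3, label: CSS2DObject): void {
    const toAirport = airportPos.clone().sub(camera.position);
    const distToAirport = toAirport.length();
    raycaster.set(camera.position, toAirport.normalize());
    const hits = raycaster.intersectObject(earthMesh, false);
    // If the sphere is hit closer than the airport (with a small epsilon, as
    // the airport position lies on the sphere surface), the label is occluded.
    label.visible = !(hits.length > 0 && hits[0].distance < distToAirport - 1e-4);
}
```

Calling this once per label inside the render loop is what makes it potentially inefficient; only re-testing when the camera actually moves would be one obvious way to reduce the cost.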

There’s still room for improvement though: the thickness of the route lines should ideally scale with the zoom level in screen-space, and the fixed colour of the lines means they’re not always easy to see, depending on the Earth texture underneath them, so I might have to look into some kind of contrast/blending setup in the future to make the route lines stand out better.



Light Transport / Light Sampling debug visualisation

I’ve just finished implementing the first pass of the ability to record events that happen during light transport, and to visualise them in Imagine’s OpenGL viewer. I’d been having light sampling issues with complex light setups containing many lights, and just looking at the code and debugging what was going on from within gdb wasn’t really getting me anywhere.

So I decided that, as Imagine has a GUI (in non-embedded mode, anyway), it would be fairly easy to record events like path vertices and light sample positions during light transport, and then later display them in the viewer. I ended up creating a separate DebugRenderer integrator for this, which I’m not completely happy with, as it means code duplication in order to replicate light transport and light sampling. However, integrating this event recording into the existing code (at a fine-grained level) brought time overheads with it: definite overhead when recording events, as expected, but even with recording turned off it was slightly noticeable in the light sampling code, presumably due to the extra branching (a rough sketch of the recording hook is below). More importantly, it complicated the existing code quite a bit, so for the moment I’m going to keep it like this.
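
Imagine’s actual code isn’t shown here; purely as a hypothetical illustration of why a cost remains even when recording is off (written in TypeScript for consistency with the other sketches, though a renderer like this would typically be C++; all names are my own):

```typescript
// Hypothetical sketch of fine-grained event recording during light transport.
enum EventType { CameraRay, NEESuccess, NEEOccluded, BounceDiffuse,
                 BounceGlossy, BounceSpecular, LightSampleBackFacing, ExitRay }

interface TransportEvent {
    type: EventType;
    start: [number, number, number]; // ray origin
    end: [number, number, number];   // hit point / light sample position
}

class DebugEventRecorder {
    enabled = false;
    events: TransportEvent[] = [];

    // Even when disabled, this check is a branch at every call site in the
    // light sampling code, which can be (just) measurable in hot loops.
    record(ev: TransportEvent): void {
        if (!this.enabled) return;
        this.events.push(ev);
    }
}

// In an integrator's light sampling loop, something like:
//   recorder.record({ type: EventType.NEEOccluded, start: hitP, end: lightP });
```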

It’s allowed me to very easily identify issues with spherical analytical light sampling in Imagine, as well as some other light sampling issues, and with some future improvements to support volume scattering events and better view-time filtering (to allow constraining the preview of these events to useful subsections), it should become more useful still.

The below images show the rendered scene and the visualisation of a sub-set of rays hitting the top of a cube, for the first hit event from the camera: blue lines are camera rays, green lines are successful next event estimation rays which found a light, and red lines show next event estimation rays which were occluded. It can also show un-sample-able light samples (back-facing), secondary bounce rays (tagged/coloured as diffuse, glossy or specular) and exit rays which didn’t hit anything in the scene.

Example Beauty Render showing lighting

Example Light Transport / Light Sampling visualisation



