Heatmap / Continuous Colour Scale Gradient Palettes

After previously implementing support for image value remapping in my image-processing infrastructure, I’ve been looking into generating better gradient colour mapping schemes / palettes than the initial hard-coded HSV ramp I used.

The thermal camera I have can bake in one or two pretty nice presets, but as previously discussed, I wanted the freedom and flexibility to “tone-map” the images myself afterwards.

There is a wide range of colour schemes in various plotting and visualisation libraries and tools, with different schemes being more or less suitable depending on the use-case and the values being represented. In particular, having a lot of different strong colours in a gradient can make the resulting mappings look pretty or striking, but it often makes the magnitude of values harder to perceive than with simpler gradients; on the other hand, it does have the advantage that any value differences are mapped to a more obvious change in colour.

More recent/modern colour-mapping schemes have aimed to improve aspects such as perceptual uniformity and friendliness to those affected by colour blindness, whilst still producing visually appealing results.

The docs page for Plots in Julia has a nice set of example colour schemes / palettes, so I decided to try copying some of the more interesting colour mappings from those schemes to see how they looked, essentially by just interpolating splines of linear colour values to evaluate the gradients, which should be sufficient for my purposes.
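
Below is a minimal sketch of the core evaluation step, using piecewise-linear interpolation between colour stops (the spline case just smooths between the stops; the structure is the same). All names here are illustrative, not from my actual implementation:

```rust
/// A linear-space RGB colour. Illustrative type, not the real one.
#[derive(Clone, Copy)]
struct Colour {
    r: f32,
    g: f32,
    b: f32,
}

fn lerp(a: f32, b: f32, t: f32) -> f32 {
    a + (b - a) * t
}

/// Evaluate a gradient of (position, colour) stops at parameter t in [0, 1].
/// Stops are assumed to be sorted by position, spanning 0.0 to 1.0.
fn evaluate_gradient(stops: &[(f32, Colour)], t: f32) -> Colour {
    let t = t.clamp(0.0, 1.0);
    for pair in stops.windows(2) {
        let ((p0, c0), (p1, c1)) = (pair[0], pair[1]);
        if t <= p1 {
            // Remap t into the local [p0, p1] span and blend the two stops.
            let local = if p1 > p0 { (t - p0) / (p1 - p0) } else { 0.0 };
            return Colour {
                r: lerp(c0.r, c1.r, local),
                g: lerp(c0.g, c1.g, local),
                b: lerp(c0.b, c1.b, local),
            };
        }
    }
    stops.last().unwrap().1
}

fn main() {
    // Simple black-to-white ramp as a demonstration.
    let stops = [
        (0.0, Colour { r: 0.0, g: 0.0, b: 0.0 }),
        (1.0, Colour { r: 1.0, g: 1.0, b: 1.0 }),
    ];
    let c = evaluate_gradient(&stops, 0.25);
    println!("{} {} {}", c.r, c.g, c.b);
}
```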

Example gradient strips:

Hue (reversed):

Hue (HSV) gradient ramp

At first glance, this gradient palette, which varies only the Hue component, looks nice, but the similar redness at either end (red and magenta), due to Hue in HSV being a circular mapping (to a cone or cylinder), means the colours at the extremities are somewhat difficult to tell apart, especially for those affected by colour blindness. It’s also more difficult to interpret a “rainbow” of colours to gauge magnitude, so in practice this gradient doesn’t seem well suited to many visualisations.

Heat:

Heat gradient ramp

This is very effective and simple for both thermal images and density plots / maps, and is essentially a limited, rough approximation of “black-body” temperature emission colour (it stops at white, rather than continuing on to blues).

Inferno:

Inferno gradient ramp

This is also very effective, and seems to closely match one of the presets my thermal camera can bake into its images.

Jet:

Jet gradient ramp

A colourful gradient with better colour separation at either end than Hue, although it still likely suffers from the “too many colours” issue when gauging value magnitude, compared to simpler schemes.

Plasma:

Plasma gradient ramp

A more limited gradient, similar in character to Inferno, ranging from dark purple through to yellow.

Turbo:

Turbo gradient ramp

Visually quite pleasing, but again, as this was largely designed for visualising disparity and depth maps, it isn’t primarily aimed at making value magnitude easy to gauge, although it is an improvement over Jet.

Viridis:

Viridis gradient ramp

A very pleasing colour gradient that seems to work well for density plots.

Thermal Image examples:

Collage of example thermal image colour schemes Clockwise from top-left: Hue, Heat, Inferno, Turbo, Plasma, Jet.

Shipping density map plot examples:

Collage of example shipping density map plot colour schemes Clockwise from top-left: Heat, Inferno, Jet, Viridis, Turbo, Plasma.

So far, of the colour gradient schemes I’ve played around with, I think Heat and Inferno are my favourites for thermal images, and they seem to work very well for density plots and maps too. Viridis may also become one of my go-to colour gradient schemes for density plots, as it also appears to produce nice results.



Trip to Perth

Last week I returned from a trip to Perth, Western Australia, where I explored the city and the outer suburbs, as well as driving up and down the coast a bit. Perth was the last major Australian city I hadn’t spent more than an afternoon in until now (I’d previously spent a night there twice, once due to a flight diversion), and I also managed to spend a day on Rottnest Island and see some of the quokkas there.

The weather was reasonably good, and I managed to take quite a few photos, which I’ve just finished the initial run of processing. Unfortunately, it turns out my EF 24-70mm f/2.8L II lens has started having minor (but quite noticeable when looking at the images on a large monitor) focusing issues, which is a bit annoying: a fair number of the photos aren’t as sharp as they should be, so I might need to have the lens looked at if the issue persists.

Perth Town Hall:

Perth Town Hall building

Longreach Bay, Rottnest Island:

Longreach Bay, Rottnest Island

Quokka:

Quokka

Elizabeth Quay & Elizabeth Quay Bridge:

Elizabeth Quay



Image Value Remapping

I’ve finally got around to finishing pretty comprehensive support for “remapping” pixel values within my image-processing infrastructure, allowing the pixel values in an image to be scaled or remapped to another representation.

It supports “linear scalar” scaling, scalar-to-colour mapping (i.e. to an HSV gradient ramp), and scalar mapping to custom configurable colour gradients, in addition to being able to apply pre-remapping scale-and-fit value modifications - i.e. re-fitting values from a data input range of [-5000.0, 20000.0] to [0.0, 1.0] first, and then applying the scalar-to-colour gradient remapping afterwards.
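
As a rough illustration of that fit-then-remap step, the sketch below normalises values from an arbitrary input range into [0.0, 1.0] before any gradient lookup; the function names are illustrative, not from the actual implementation:

```rust
/// Re-fit a value from [in_min, in_max] into [0.0, 1.0], clamping outliers.
fn fit_to_unit(value: f64, in_min: f64, in_max: f64) -> f64 {
    ((value - in_min) / (in_max - in_min)).clamp(0.0, 1.0)
}

fn main() {
    // e.g. data values in an input range of [-5000.0, 20000.0]...
    for &value in &[-5000.0f64, 0.0, 7500.0, 20000.0] {
        let t = fit_to_unit(value, -5000.0, 20000.0);
        // `t` would then be fed into a gradient evaluation (such as the
        // evaluate_gradient() sketch earlier) to produce the output colour.
        println!("{} -> {}", value, t);
    }
}
```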

This was mainly for two particular use-cases I had. The first was some more map visualisations I’m playing with - in particular some data on Global Shipping Traffic Density from the World Bank - where I got a bit frustrated with QGIS’s gradient-remapping functionality, so I decided to roll my own in order to make it more recipe-based with configurable parameters, allowing multiple images to be remapped with exactly the same params more easily.

Below is an example of remapped Global Shipping Traffic Density around Europe, with the reprojected (in terms of map projection) absolute values from the source file data remapped to a custom colour gradient output image:

Remapped image data (from World Bank Group) of shipping traffic

The second use-case was remapping 8-bit greyscale values in JPEG files taken from a thermal camera I’ve had for a while, to be able to remap them to more interesting and configurable colour gradients.

Original thermal camera image:

Thermal camera image

Remapped image:

Remapped thermal image

Original thermal camera image:

Thermal camera image

Remapped image:

Remapped thermal image

The thermal camera I have does support some automatic colour gradient remapping itself; however, there are only eight presets, and the result is baked into the captured JPEG image, so having the flexibility to capture in greyscale in the camera and then remap the images later is much more useful.

I do have some older interesting images I captured just after getting the camera years ago, mostly in pre-baked gradient format, whose gradients I would like to change. A possible feature improvement to my image-processing infrastructure might therefore be support for “reverse-mapping” the colours in an image from a known gradient back to a “scalar” or linear value, so that I could then apply custom colour gradients of my choosing.
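
One plausible approach (assuming the original baked-in gradient is known) would be to sample that gradient densely and, for each pixel, find the sample whose colour is nearest, taking its parametric position as the recovered scalar value. A minimal sketch, with illustrative names:

```rust
#[derive(Clone, Copy)]
struct Colour {
    r: f32,
    g: f32,
    b: f32,
}

fn distance_squared(a: Colour, b: Colour) -> f32 {
    let (dr, dg, db) = (a.r - b.r, a.g - b.g, a.b - b.b);
    dr * dr + dg * dg + db * db
}

/// Given a table of N gradient samples (sample i representing the colour at
/// position i / (N - 1)), return the scalar position in [0, 1] whose sample
/// colour is closest to `pixel`.
fn reverse_map(gradient_samples: &[Colour], pixel: Colour) -> f32 {
    let mut best = (0, f32::MAX);
    for (i, &sample) in gradient_samples.iter().enumerate() {
        let d = distance_squared(sample, pixel);
        if d < best.1 {
            best = (i, d);
        }
    }
    best.0 as f32 / (gradient_samples.len() - 1) as f32
}

fn main() {
    // Two-sample black-to-white "gradient" table for demonstration.
    let samples = [
        Colour { r: 0.0, g: 0.0, b: 0.0 },
        Colour { r: 1.0, g: 1.0, b: 1.0 },
    ];
    let t = reverse_map(&samples, Colour { r: 0.9, g: 0.9, b: 0.9 });
    println!("recovered scalar: {}", t);
}
```

This would only work well for gradients that don’t revisit similar colours (a Hue-style rainbow wrapping back towards red would be ambiguous), and JPEG compression artefacts would add some noise to the recovered values.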



Central New South Wales Trip

A week ago I returned from a week’s trip to central New South Wales in Australia, driving from Sydney down to (almost!) Batemans Bay, then up again to Newcastle and then back to Sydney.

The aim was to spend a bit more time exploring places I’d skipped or missed when I motored through quickly on my trips through NSW in 2018 and 2019 - which I did end up doing, although it turned out I’d forgotten I had actually visited some of the places before! Jervis Bay Territory in particular - though this time around it wasn’t overcast, so I was able to see it (and the white sand beaches) with the sun out.

The weather was excellent, and once again I got some photos I’m pleased with.

Lake Illawarra before sunset:

Sunset looking out over Lake Illawarra, NSW

Green Patch Beach, Jervis Bay:

Green Patch Beach, Jervis Bay, NSW

Mollymook Beach sunrise:

Sunrise at Mollymook Beach, NSW

Fitzroy Falls:

Fitzroy Falls, NSW



Pointcloud Processing Tooling

For some of the more recent LIDAR maps I’ve been producing over the past six months - see this post for the initial experimentation; the resulting full-sized maps I’ve rendered so far can be found here - the source data has been in Pointcloud form, rather than in a ‘flat’ raster image format (GeoTIFF) of just the DSM height values. It therefore requires different processing in order to clean it up and then convert it into a ‘displacement’ image map that I can render as a 3D geometry mesh.

The below rendering is of Sydney, and uses a conversion from Pointcloud source data (available from https://elevation.fsdf.org.au/ as .laz files):

Sydney DSM map LIDAR render

The data being in Pointcloud format has both advantages and disadvantages over simpler raster height images. One of the main advantages is that there’s often (depending on the density and distribution of the points) more data per unit of ground area, and each point has its own position and attributes, rather than just being an average height value for a pixel as in the raster image format. This means that it’s theoretically easier to “clean up” the data and remove “bad” points without destroying surrounding data. Another advantage is that, being fully 3D, the points can be viewed from any direction, which is very useful when visualising the data and when looking for outliers and other “bad” points.

The disadvantages are that the data is generally a lot heavier and takes up more memory during processing - three 8-byte double/f64 values need to be stored for the XYZ co-ordinates (at least when read from LAS files), plus additional metadata per point, although there are ways to losslessly compress the memory usage a bit - and that it needs different tools to process than raster images do. Newer QGIS versions do now support opening and viewing .LAS/.LAZ Pointclouds (in both 2D and 3D views), although on Linux I’ve found the 3D view quite unstable, and other than selecting points to view their classification, there’s not much else you can do to process the points, beyond some generic processing which uses the PDAL tooling under-the-hood. It also appears QGIS has to convert the .LAS/.LAZ formats to an intermediate format first, which slows down iteration time when using other processing tooling side-by-side.

PDAL is a library for translating and manipulating Pointcloud data (similar to GDAL for raster data / GeoTIFFs, which QGIS uses for a lot of raster and vector operations under-the-hood), and it has quite a few useful features including merging Pointclouds (a lot of the source DSM Pointcloud data is only available as tiles of areas, and so the data needs to be merged to render entire cities, either before converting to a raster displacement map or after), filtering points, rejecting outliers and converting to raster image heightfields.

I have, however, found its memory usage somewhat ‘excessive’ for some operations, in addition to it being slow (despite being written in C++). Because of this - and also to learn more about Pointclouds and the file formats - I’ve started writing my own basic Pointcloud processing utility application (in Rust - the las-rs crate allowed out-of-the-box reading and writing of the .LAS/.LAZ Pointcloud formats, which was very useful for getting started quickly). It doesn’t do anything especially ‘fancy’ for the simpler operations like merging .LAS/.LAZ files - just a naive loop over all input files, filtering the points within based on configured filters, then accumulating and saving them to the output file - yet it uses a lot less memory than PDAL and is quite a bit faster, so I’ve been able to process larger chunks of data with my own tooling and with faster iteration times.
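
A minimal sketch of that naive merge loop, assuming the las crate’s 0.8-era Reader/Writer API (the input paths and the classification filter are illustrative placeholders, and reading/writing .laz additionally requires the crate’s laz feature):

```rust
use las::point::Classification;
use las::{Read, Reader, Write, Writer};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Placeholder tile paths; the real tool takes a configured file list.
    let input_paths = ["tile_a.laz", "tile_b.laz"];

    // Reuse the header of the first input file for the merged output.
    let header = Reader::from_path(input_paths[0])?.header().clone();
    let mut writer = Writer::from_path("merged.laz", header)?;

    for path in &input_paths {
        let mut reader = Reader::from_path(path)?;
        for point in reader.points() {
            let point = point?;
            // Example filter: reject points classified as low-point noise.
            if point.classification == Classification::LowPoint {
                continue;
            }
            writer.write(point)?;
        }
    }
    Ok(())
}
```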

The one area I haven’t tackled yet, and am still using PDAL for, is the conversion to an output raster image (GeoTIFF) - which I then use as the displacement map in my renders - however I hope to implement this rasterisation functionality myself at some point.

I am on the lookout for better Pointcloud visualisation software (in particular on Linux - a lot of the commercial software seems to be Windows- or Mac-only). QGIS’ functionality is adequate but not great, and fairly lacking in terms of selection, and other open-source software I’ve found, like CloudCompare, seems a bit unstable (at least when compiled from source on Linux); it’s also not clear how well it would scale to displaying hundreds of millions of points at once.

I have found displaz, which is pretty good at displaying very large Pointclouds (it draws them progressively, and seems to store them efficiently in memory), however it has no support for selecting or manipulating points (by design), so I’m still looking for something that caters to that additional need: in particular, interactively selecting outlier points and culling them.



Simple Photo Collage Generation

Last week I implemented support for generating very simple (grid only) collages from photos/images in my image processing infrastructure.

Example Photo Collage of Pedestrian crossing lights in Wellington

I had wanted to create some simple grid-based collages of some photos, and I was somewhat surprised to discover that neither Krita nor GIMP (free/open-source image manipulation software) seems to provide any built-in functionality to generate this type of output without manually setting up grids/guides and resizing and positioning each image separately - which, while not difficult in theory, is somewhat onerous, especially when you want to generate multiple similar collages from different sets of input images in a procedural/repeatable way.

I did briefly look into (free) web-based solutions, however I wasn’t really happy with that avenue either: partly because most web solutions have the same lack of procedural/recipe-based generation (i.e. being able to just change the input images and get the same type of result without re-doing things from scratch), but also because many seemed targeted at “artistic” collages with photos at arbitrary positions and rotations rather than offering grid presets, and because many (although not all) of the web apps in question required some form of registration or sign-in.

So I ended up just quickly implementing basic support for this collage generation myself in my image-processing infrastructure, which took less than two hours. I can now generate arbitrary grid collages from a ‘recipe’ source parameters file, which configures the target output resolution, the row and column counts, the input images list, the inner and outer border widths and border colour, as well as the image resize sampling algorithm/filter to use (i.e. bilinear, bicubic, Lanczos, etc.) for resizing the input images to fit into the collage grid.
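
For illustration, here’s a minimal sketch of the grid layout maths such a recipe implies: given an output resolution, row/column counts and inner/outer border widths, compute the pixel rectangle each input image is resized to fit into (all names are illustrative, not my actual parameter names):

```rust
/// Illustrative subset of a collage recipe's parameters.
struct CollageRecipe {
    output_width: u32,
    output_height: u32,
    rows: u32,
    columns: u32,
    outer_border: u32, // border around the whole collage, in pixels
    inner_border: u32, // border between adjacent grid cells, in pixels
}

struct CellRect {
    x: u32,
    y: u32,
    width: u32,
    height: u32,
}

/// Compute the destination rectangle for each grid cell, row-major.
fn cell_rects(recipe: &CollageRecipe) -> Vec<CellRect> {
    let usable_w = recipe.output_width
        - 2 * recipe.outer_border
        - (recipe.columns - 1) * recipe.inner_border;
    let usable_h = recipe.output_height
        - 2 * recipe.outer_border
        - (recipe.rows - 1) * recipe.inner_border;
    let (cell_w, cell_h) = (usable_w / recipe.columns, usable_h / recipe.rows);

    let mut rects = Vec::with_capacity((recipe.rows * recipe.columns) as usize);
    for row in 0..recipe.rows {
        for col in 0..recipe.columns {
            rects.push(CellRect {
                x: recipe.outer_border + col * (cell_w + recipe.inner_border),
                y: recipe.outer_border + row * (cell_h + recipe.inner_border),
                width: cell_w,
                height: cell_h,
            });
        }
    }
    rects
}

fn main() {
    let recipe = CollageRecipe {
        output_width: 3000,
        output_height: 2000,
        rows: 2,
        columns: 3,
        outer_border: 40,
        inner_border: 20,
    };
    for rect in cell_rects(&recipe) {
        println!("{},{} {}x{}", rect.x, rect.y, rect.width, rect.height);
    }
}
```

Each input image would then be resized (with the configured sampling filter) to fit its cell rectangle, and composited over the border-colour background.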



Great-Circle Route Visualisation with three.js

Great-Circle Route Image

Over the past few years, as well as learning the Rust programming language, I’ve also been attempting to learn more about the JavaScript ecosystem for front-end web development, although the seemingly large number of different libraries/frameworks available has made knowing where to start somewhat daunting. Because of this, I decided not to just learn whichever generic web frameworks were the most popular at the time, but to explicitly use ones that seemed to match well with particular use-cases I had in mind.

In this case, I’ve been writing a basic web app for visualising Great-Circle Routes (link to app here) between common airports. There are various existing ones on the web in some form or another, but many produce only flat 2D visualisations, and while there are one or two 3D ones showing the routes on a 3D globe, they weren’t exactly to my liking; regardless, it provided me with an interesting use-case with which to gain some more experience with JavaScript and web-based real-time rendering.

I did at first think about sticking to core vanilla JavaScript and using WebGL directly, with no additional frameworks/libraries, but given I wanted something working fairly quickly, I decided that using a library would be the more efficient solution, and a quick look at https://threejs.org/ and its capabilities reinforced that decision.

In the end, it was pretty simple to come up with a basic working example: a sphere mesh with a diffuse Earth texture, an orbit-able camera, and “route” line meshes representing the Great-Circle Route and the “flat” 2D route in question, drawn from spherical coordinates in 3D space derived from the source latitude and longitude coordinates. The central angle of the Great-Circle Route between two points was calculated using the Haversine Formula, shown below.
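
Concretely, the haversine formula gives the central angle θ between two points with latitudes φ₁ and φ₂ and longitude difference Δλ as:

```latex
\[
\theta = 2 \arcsin\!\left( \sqrt{ \sin^{2}\!\left(\frac{\varphi_{2} - \varphi_{1}}{2}\right)
  + \cos\varphi_{1} \, \cos\varphi_{2} \, \sin^{2}\!\left(\frac{\Delta\lambda}{2}\right) } \right)
\]
```

The great-circle distance is then d = Rθ for sphere radius R, and intermediate points for drawing the route line can be found by interpolating along that arc.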

The most “complicated” part of the implementation ended up being the airport text labels. It was simple enough to use THREE.CSS2DObject to construct high-quality DOM text items that appear at the correct 2D screen position, based on the camera/earth position of each airport on the sphere, but occlusion/visibility handling isn’t done automatically with this setup, as the DOM elements are not actually part of the 3D scene - they’re layered on top. So, in order to hide the label text of airports when they’re occluded by the Earth sphere geo (i.e. when they’re around the back of the Earth relative to the camera), I ended up having to send Raycaster queries to detect the visibility of each airport’s position from the camera, and adjust the visibility of the corresponding CSS2DObject labels based on the results of those queries within the main render loop. This was still quite easy to do, and seems to work well enough, although doing the raycaster queries within the render loop is probably not the most efficient solution, so I might need to look into better ways of handling that.

There’s still room for improvement, though: the thickness of the route lines should ideally scale with the zoom level in screen-space, and the fixed colour of the lines means they’re not always easy to see against the Earth texture underneath, so I might have to look into some kind of contrast/blending setup in the future to make the route lines easier to see.



