Basic Apple M1 Pro CPU Benchmarks

Last week I received a work-provided (on loan) new Apple MacBook Pro (14-inch, 2021) with the Apple M1 Pro 10-core processor, so I was very curious to see how it performed with some of my code, given the generally excellent reviews and reports I’d seen online from people who have one.

I’ll be comparing it against my own MacBook Pro 15-inch 2015 (Intel i7) model as a basic benchmark performance test. It won’t be a completely fair test, as the two machines are running different versions of macOS and the clang compiler, and I didn’t want to upgrade my own machine’s macOS version to a newer one (for various reasons). So the code being executed on the two machines will be generated by different versions of clang, but given the micro-architectures are totally different anyway, I think it’s still a meaningful test, and it will at least show the performance difference between the two laptops.

My old MacBook Pro 15-inch 2015 has a 2.5 GHz Intel Core i7 with four cores and eight threads, and 1600 MHz DDR3 memory, running macOS 10.14.5 (Mojave), with Xcode 10.2.1 installed, which has clang version 10.0.1.

The new MacBook Pro 14-inch 2021 is the M1 Pro 10-core version, with 8 performance cores and 2 efficiency cores, running macOS 12.3 (Monterey), with Xcode 13.4.1 installed which has clang version 13.1.6.

Tests

I’ll be using two of my apps as benchmarks. The first is my Mint interpreter language VM (originally based off Robert Nystrom’s excellent Crafting Interpreters Lox language tutorial, but with additional functionality and performance improvements), which I’ll use to benchmark two Mint scripts as single-threaded tests. The second is my Imagine pathtracing renderer, which has native SSE intrinsics support for Intel and native Neon intrinsics support for ARM, and which I’ll run in both single-threaded and multi-threaded scenarios.

Both apps will be compiled with -march=native on the Intel side and -mcpu=native on the Apple M1 Pro ARM side, using the clang version on the machine in question, at optimisation level -O3.

The two Mint script tests will be a loop value calculation as Test 1:

var a = 1;
for (var i = 0; i < 100000000; i += 1)
{
    a = (i + i + 3 * 2 + i + 1 - 0.42) / a;
}
print a;

and a variation of Project Euler 21 to calculate the sum of all amicable numbers under 15,000 as Test 2.
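
For reference, the underlying computation for Test 2 looks something like the following minimal C++ sketch (my own rough translation for illustration - not the actual Mint benchmark script):

#include <cstdio>

// Sum of the proper divisors of n (i.e. excluding n itself), for n > 1.
static unsigned sumProperDivisors(unsigned n)
{
    unsigned sum = 1; // 1 divides everything
    for (unsigned i = 2; i * i <= n; i++)
    {
        if (n % i == 0)
        {
            sum += i;
            if (n / i != i)
                sum += n / i;
        }
    }
    return sum;
}

int main()
{
    // a and b are an amicable pair if sumProperDivisors(a) == b,
    // sumProperDivisors(b) == a, and a != b.
    unsigned total = 0;
    for (unsigned a = 2; a < 15000; a++)
    {
        unsigned b = sumProperDivisors(a);
        if (b != a && sumProperDivisors(b) == a)
            total += a;
    }
    printf("%u\n", total);
    return 0;
}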

The Imagine rendering tests will render the example scene below: a basic maze with 12 spherical area lights and one hemisphere light. Four Next Event Estimation samples are taken at each bounce to evaluate the lighting, using cone sampling to perfectly sample each spherical area light from each hitpoint, with 256 samples-per-pixel, stratified sampling, and a ray depth of 6 diffuse bounces. Depending on the setup of the tests (single-threaded, multi-threaded, etc.), I’ll render the scene at resolutions of 150 x 150, 300 x 300 and 600 x 600.

Example Maze Lighting scene
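
As a rough illustration of the cone sampling mentioned above, here’s a minimal standalone C++ sketch of sampling a direction towards a spherical light from a hitpoint (my own generic version - the names and structure here are hypothetical, and Imagine’s actual implementation will differ):

#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3 operator-(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 operator+(const Vec3& a, const Vec3& b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 operator*(const Vec3& a, float s) { return { a.x * s, a.y * s, a.z * s }; }
static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 cross(const Vec3& a, const Vec3& b)
{
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static Vec3 normalise(const Vec3& v) { return v * (1.0f / std::sqrt(dot(v, v))); }

// Uniformly sample a direction within the cone of directions subtended by a
// spherical light as seen from the shading point. u1, u2 are uniform random
// numbers in [0, 1); pdf receives the (constant) solid-angle PDF of the sample.
static Vec3 sampleSphericalLightCone(const Vec3& shadePoint, const Vec3& lightCentre,
                                     float lightRadius, float u1, float u2, float& pdf)
{
    const float kPI = 3.14159265f;

    Vec3 toLight = lightCentre - shadePoint;
    float distSq = dot(toLight, toLight);
    Vec3 w = normalise(toLight);

    // Half-angle of the cone the sphere subtends from the shading point
    // (assumes the point is outside the sphere, so distSq > radius^2).
    float cosThetaMax = std::sqrt(std::fmax(0.0f, 1.0f - (lightRadius * lightRadius) / distSq));

    // Uniform direction within that cone, around the w axis.
    float cosTheta = 1.0f - u1 * (1.0f - cosThetaMax);
    float sinTheta = std::sqrt(std::fmax(0.0f, 1.0f - cosTheta * cosTheta));
    float phi = 2.0f * kPI * u2;

    pdf = 1.0f / (2.0f * kPI * (1.0f - cosThetaMax));

    // Build an orthonormal basis around w, and transform the sample into it.
    Vec3 t = normalise(cross(std::fabs(w.x) > 0.9f ? Vec3{ 0.0f, 1.0f, 0.0f }
                                                   : Vec3{ 1.0f, 0.0f, 0.0f }, w));
    Vec3 b = cross(w, t);
    return t * (sinTheta * std::cos(phi)) + b * (sinTheta * std::sin(phi)) + w * cosTheta;
}

int main()
{
    float pdf = 0.0f;
    Vec3 dir = sampleSphericalLightCone({ 0.0f, 0.0f, 0.0f }, { 0.0f, 5.0f, 0.0f },
                                        1.0f, 0.5f, 0.5f, pdf);
    printf("dir: (%f, %f, %f), pdf: %f\n", dir.x, dir.y, dir.z, pdf);
    return 0;
}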

For all tests, I’ll run them on battery power and then on mains power, in case that affects the clocking of the processors, and will wait between test runs for the processor temperatures to drop below 50 degC before running the next test, to try to reduce the impact of thermal throttling. However, for the longer-running multi-threaded Imagine rendering tests, I’m fairly sure thermal throttling will start to take place on the Intel 2015 machine, given how fast its fans start to spin in longer-running scenarios.

All tests will be run three times, with the results below showing the mean of those three runs.

Results

Single-threaded Mint VM interpreter:

Single-threaded Mint interpreter VM benchmarks, smaller values are better:

For these single-threaded tests, the M1 Pro has fairly modest improvements over the seven year old Intel i7: ~11% faster in Test 1, and ~16% faster in Test 2. However, as the Mint VM’s execution is often branch-prediction constrained within the main VM bytecode interpreter loop, I suspect these benchmarks don’t really allow the M1 Pro’s eight-wide design to stretch its legs, as there’s a limit to the amount of Instruction-Level Parallelism that’s achievable.

It’s interesting to see that there’s no apparent difference in performance between running on battery and running connected to the mains. I was expecting to see a difference - even if only on the Intel i7 MBP - as I know my Thinkpad T480s running Linux shows significant performance differences between the two states, but Apple don’t appear to be implementing that strategy.

Single-threaded Imagine rendering:

Single-threaded Imagine rendering benchmarks, smaller values are better:

Things start to look more impressive for the M1 Pro in the single-threaded rendering tests: here there are a lot more floating point calculations being done with a lot less branching, and there’s a fair amount of SIMD use (i.e. the ray intersection / traversal code, which uses native SSE or Neon instructions for the respective platform architectures), allowing the M1 Pro to show what it can do.

For both the 150 x 150 and 300 x 300 resolution renders, the M1 Pro machine completed the task in ~46% less time, so it’s close to twice as fast as the older Intel machine. Again, there was no apparent performance difference between running on battery power and running plugged into the mains.

Multi-threaded Imagine rendering:

Multi-threaded Imagine rendering benchmarks, smaller values are better:

The multi-threaded rendering tests show that for the shorter-duration 300 x 300 resolution render, the M1 Pro was more than three times faster (its render time was less than a third of the Intel MBP’s). For the longer-duration 600 x 600 resolution render, the ratio was slightly better still - I suspect because the Apple M1 Pro has significantly better thermal performance than the Intel i7, and so is not throttling as much, if at all - although watching the temperatures on the M1 Pro machine towards the end of the test showed it did reach 84 degC, so it’s possible a bit of throttling was happening.

Conclusion

So for code that’s not branch/dependency-bound, it would seem the almost one year-old M1 Pro being tested here is close to twice as fast as the seven year-old Intel i7 model in single-threaded scenarios, and for multi-threaded code that parallelises well it can be more than three times faster (although the base M1 Pro model, with two fewer performance cores, will not be as fast). Add to that much better thermal performance (which means less fan noise) and battery life - exactly what you’d want from a laptop - and I think it shows that the Apple M1 processors are something to be excited about.



Minimalist Faux 3D Relief Map Renders

Over the past year I’ve become somewhat interested in visualisations of maps and terrain representations, in particular “artistic” renders of relief maps. I’ve tried making a few different types of map renderings over the past few years with varying degrees of success, and over the past few months I’ve been progressively creating what I’m terming “Minimalist” relief maps: a surface is displaced by a Digital Elevation Model terrain height map (often to an exaggerated extent compared to real-world scale) with fairly simple shading, but using light and shadows as a key element to provide a sense of the terrain topology and height in a stylistic way.

Minimalist relief map of Africa

These maps are much less labour-intensive to set up than the historical topographic ones I tried previously, which required a lot of manual warping of the DEM images to match up with the historical map, so there’s a lot less effort (on my part, anyway) to generate these more minimalist-style ones, and I still find them very nice to look at.

Minimalist relief map of Antarctica

I’ve experimented with various types of shading, with the two main types I settled on being the diffuse colour mapped to a colour gradient, driven by either:

  • the terrain height value (the Y coordinate height in worldspace)
  • the occlusion ratio in a hemisphere around the shading point

I think I like the latter best (it’s what both renders above use), as it allows nice highlighting of the edges and gradients of steeper terrain, although it means the stronger colours are generally in darker, more occluded areas, which hides them a bit, and the renders are slower as well, as additional raytracing needs to be done during shading to calculate the surface point occlusion.
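
As a rough sketch of how the occlusion ratio idea works (a generic standalone C++ version with hypothetical names - not Imagine’s actual shading code), the ratio can be estimated by casting sample rays over the hemisphere around the shading normal and counting how many hit geometry within some distance:

#include <cmath>
#include <cstdio>
#include <functional>
#include <random>

struct Vec3 { float x, y, z; };

// Uniform direction on the hemisphere around unit normal n:
// sample the full sphere uniformly, then flip to n's side.
static Vec3 sampleHemisphere(const Vec3& n, float u1, float u2)
{
    float z = 1.0f - 2.0f * u1;
    float r = std::sqrt(std::fmax(0.0f, 1.0f - z * z));
    float phi = 2.0f * 3.14159265f * u2;
    Vec3 d = { r * std::cos(phi), r * std::sin(phi), z };
    if (d.x * n.x + d.y * n.y + d.z * n.z < 0.0f)
        d = { -d.x, -d.y, -d.z };
    return d;
}

// Estimate the fraction of the hemisphere above the shading point which is
// blocked by geometry within maxDist: 0.0 = fully open, 1.0 = fully occluded.
// The occluded() callback stands in for the renderer's ray intersection test.
static float occlusionRatio(const Vec3& point, const Vec3& normal,
                            unsigned numSamples, float maxDist,
                            const std::function<bool(const Vec3&, const Vec3&, float)>& occluded)
{
    std::mt19937 rng(42);
    std::uniform_real_distribution<float> uniform(0.0f, 1.0f);

    unsigned numHits = 0;
    for (unsigned i = 0; i < numSamples; i++)
    {
        Vec3 dir = sampleHemisphere(normal, uniform(rng), uniform(rng));
        if (occluded(point, dir, maxDist))
            numHits++;
    }
    return (float)numHits / (float)numSamples;
}

int main()
{
    // Toy occluder: a wall at x = 1.0 blocks any ray heading towards +x.
    auto wallOccluder = [](const Vec3& origin, const Vec3& dir, float maxDist)
    {
        return dir.x > 0.0f && (1.0f - origin.x) / dir.x < maxDist;
    };

    Vec3 point = { 0.0f, 0.0f, 0.0f };
    Vec3 normal = { 0.0f, 0.0f, 1.0f };
    float ratio = occlusionRatio(point, normal, 1024, 10.0f, wallOccluder);

    // The ratio would then drive the colour gradient lookup for the diffuse colour.
    printf("occlusion ratio: %f\n", ratio);
    return 0;
}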



Aurora Australis (Southern Lights)

I’ve just got back from a fantastic week and a bit in the South Island, and was very lucky to have perfect weather for most of the trip, as well as catching a view of the Aurora Australis (Southern Lights) one night.

Aurora Australis (Southern Lights)

I was in Queenstown at the time, and a phone app I have which sends notifications when an aurora might be visible at your location alerted me that it was likely visible, so I grabbed my camera and tripod to try to photograph it, starting during the later stages of dusk. There were two different “types”, both fairly faint but just visible to the naked eye: a green glow that really looked like light pollution - although there were no large towns in exactly that direction, and it morphed and changed shape over time - and blue streaks, gently pulsing over time.

With a long exposure, they are much more clearly visible.

A very magical experience.

I also managed to get some excellent views of Aoraki / Mt Cook.

Aoraki / Mt Cook



Displaced Height Historical Map Renders

Last year I was inspired to attempt my own version of some nice-looking “artistic” population density maps I noticed online, which I managed to copy fairly successfully. In a repeat of that, I recently noticed some historical maps that had been rendered with exaggerated displaced height on the surface, giving the impression of terrain when combined with lighting and shadowing from a renderer.

So again, I was keen to try and create my own copies of these, and to see how difficult it was.

I discovered the existence of the David Rumsey Map Collection which has a very large collection of digital scans of historical maps of locations around the world, which is very interesting to browse through, and provided a good starting point for the visual “historical” texture for the maps.

Many of them are in very good condition, but some of the older and more interesting ones are (understandably) somewhat aged in appearance, and have tape marks / rips in them to varying degrees. While it would have been possible to do some artistic fix-ups / colour-correction first, it wasn’t something I particularly wanted to get into for my initial attempts (I always like to get to the rendering part as soon as possible!), so I just picked a few interesting-looking historical maps which only had minor ageing marks.

Obtaining terrain heightfield DEM (Digital Elevation Model) data is relatively easy from several sources, and I decided on using GMTED2010 data in the end.

The tricky part was fitting the DEM terrain data to the historical map image, which - depending on the map and its source - likely has an unknown projection and a border around the edge, and, as I found out, can also have out-of-date or incorrect terrain for maps more than 100 years old. I ended up having to grid-warp the DEM terrain image to fit the historical map which was going to be the visible texture, which is do-able, but a pretty time-consuming manual process to do a reasonable job of.

With my initial renders using a perspective camera projection from single angles (normally the bottom), I was able to take a few short-cuts with alignment, but for full top-down orthographic camera projections in the future (which will look better), I’ll need to do a better job.

I then rendered them in Imagine, with the historical map image used as the diffuse surface texture, and the warped DEM image used as the displacement map.
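
The displacement part of the setup is conceptually quite simple. As a generic standalone C++ sketch (with hypothetical names - not Imagine’s actual implementation), each vertex of a tessellated plane can be pushed up by the bilinearly-sampled DEM height at its UV position:

#include <cstddef>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

// Bilinearly sample a greyscale heightmap (row-major, values 0..1) at
// normalised UV coordinates in [0, 1].
static float sampleHeight(const std::vector<float>& heightMap,
                          size_t width, size_t height, float u, float v)
{
    float fx = u * (float)(width - 1);
    float fy = v * (float)(height - 1);
    size_t x0 = (size_t)fx;
    size_t y0 = (size_t)fy;
    size_t x1 = (x0 + 1 < width) ? x0 + 1 : x0;
    size_t y1 = (y0 + 1 < height) ? y0 + 1 : y0;
    float tx = fx - (float)x0;
    float ty = fy - (float)y0;

    float top    = heightMap[y0 * width + x0] + (heightMap[y0 * width + x1] - heightMap[y0 * width + x0]) * tx;
    float bottom = heightMap[y1 * width + x0] + (heightMap[y1 * width + x1] - heightMap[y1 * width + x0]) * tx;
    return top + (bottom - top) * ty;
}

// Build a tessellated plane of (res x res) vertices in the XZ plane,
// displacing each vertex's Y by the heightmap value scaled by heightScale
// (an exaggerated scale gives the stylised relief look).
static std::vector<Vec3> buildDisplacedGrid(const std::vector<float>& heightMap,
                                            size_t mapWidth, size_t mapHeight,
                                            size_t res, float heightScale)
{
    std::vector<Vec3> vertices;
    vertices.reserve(res * res);
    for (size_t j = 0; j < res; j++)
    {
        for (size_t i = 0; i < res; i++)
        {
            float u = (float)i / (float)(res - 1);
            float v = (float)j / (float)(res - 1);
            float h = sampleHeight(heightMap, mapWidth, mapHeight, u, v);
            vertices.push_back({ u, h * heightScale, v });
        }
    }
    return vertices;
}

int main()
{
    // Tiny 2 x 2 test heightmap, just to exercise the functions.
    std::vector<float> heightMap = { 0.0f, 1.0f, 0.5f, 0.25f };
    std::vector<Vec3> grid = buildDisplacedGrid(heightMap, 2, 2, 16, 3.0f);
    printf("last vertex: (%f, %f, %f)\n", grid.back().x, grid.back().y, grid.back().z);
    return 0;
}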

Iceland Map Rendering

Alaska Map Rendering



Interpreter Programming Language Benchmarks

I’ve sporadically but progressively been continuing to work on my Mint interpreter programming language/VM, which was heavily based off Robert Nystrom’s excellent Crafting Interpreters Lox language tutorial. I’ve spent a fair bit of time trying to optimise the main VM bytecode despatch loop, among other areas of the VM, as well as performing some benchmark comparisons with other similar bytecode VM interpreter languages like Lua, Python and Ruby, to see where Mint stands performance-wise, and to understand how fast the other VMs are, how they work, and some of their performance characteristics.

I managed to speed up the original HashMap implementation a bit by caching the capacity mod bit-mask, as well as the capacity and table load ratio, so as not to recalculate them on-the-fly on every lookup (which wasn’t needed), only updating them when the table resized. In addition, I added some compiler hints to force-inline things and to indicate branch prediction likelihood, which helped a bit more.
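
As an illustration of the sort of thing this involves, here’s a hedged standalone C++ sketch of the general pattern (hypothetical code - not Mint’s actual HashMap implementation):

#include <cstdint>
#include <cstdio>
#include <cstring>

// Branch prediction hints (GCC/Clang builtins).
#define LIKELY(x)   __builtin_expect(!!(x), 1)
#define UNLIKELY(x) __builtin_expect(!!(x), 0)

struct Entry
{
    const char* key = nullptr; // nullptr == empty slot
    double      value = 0.0;
};

// FNV-1a string hash.
static uint32_t hashString(const char* key)
{
    uint32_t hash = 2166136261u;
    for (const char* c = key; *c; c++)
    {
        hash ^= (uint8_t)*c;
        hash *= 16777619u;
    }
    return hash;
}

class HashMap
{
public:
    HashMap(uint32_t capacity) // capacity must be a power of two
    {
        m_entries = new Entry[capacity];
        // The cached values: computed once here (and on resize, not shown),
        // rather than being recalculated on every single lookup.
        m_capacityMask = capacity - 1;
        m_resizeThreshold = (capacity * 3) / 4; // 0.75 load factor
    }
    ~HashMap() { delete [] m_entries; }

    // Force-inlined hot-path lookup: a mask with the cached bit-mask
    // instead of a mod by the capacity.
    __attribute__((always_inline)) inline Entry* findEntry(const char* key) const
    {
        uint32_t index = hashString(key) & m_capacityMask;
        for (;;)
        {
            Entry* entry = &m_entries[index];
            // Most probes terminate on the first slot, so hint as much.
            if (LIKELY(entry->key == nullptr || strcmp(entry->key, key) == 0))
                return entry;
            index = (index + 1) & m_capacityMask;
        }
    }

    Entry*   m_entries;
    uint32_t m_capacityMask;
    uint32_t m_resizeThreshold;
};

int main()
{
    HashMap map(16);
    Entry* entry = map.findEntry("answer");
    entry->key = "answer";
    entry->value = 42.0;
    printf("%s = %f\n", map.findEntry("answer")->key, map.findEntry("answer")->value);
    return 0;
}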

One of the main bottlenecks in raw execution speed is the main VM stack, in terms of pushing and popping items onto and off it: this is really the hot loop of the VM, at least when the Garbage Collector is not being exercised. On the bytecode opcode execution side of things, I added some new opcodes to do certain things more efficiently, for example operating directly on items within the stack in certain limited scenarios when constant values are used, without having to pop and push them back. This can be used by compound assignment operators like +=, -=, *=, etc. with constant values as the right-hand-side argument, and helped speed things up a bit.

For general for loops incrementing a value by 1 each time, the push/pop overhead was fairly substantial, so I added a dedicated OP_INC_LOCAL opcode to help with this, which can increment the top of the stack by one in place, without popping the stack and then having to push the result after adding 1. The compiler part can detect when this can be used (e.g. in a for loop increment clause with a constant value as the increment) and will use it when possible.
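
Here’s a toy standalone C++ sketch of the in-place stack operation idea (the OP_INC_LOCAL name matches the above, but the surrounding mini-VM and the OP_ADD_CONSTANT opcode are hypothetical simplifications):

#include <cstdio>

// Value is just a double for simplicity.
typedef double Value;

enum OpCode : unsigned char
{
    OP_CONSTANT,     // push constants[operand]
    OP_ADD_CONSTANT, // stack top += constants[operand], in place (no pop/push)
    OP_INC_LOCAL,    // stack top += 1.0, in place
    OP_PRINT,
    OP_HALT
};

int main()
{
    Value constants[] = { 5.0, 2.5 };
    unsigned char bytecode[] = {
        OP_CONSTANT, 0,     // push 5.0
        OP_ADD_CONSTANT, 1, // top += 2.5, without popping and re-pushing
        OP_INC_LOCAL,       // top += 1.0, likewise in place
        OP_PRINT,
        OP_HALT
    };

    Value stack[16];
    Value* stackTop = stack; // points one past the last value on the stack
    const unsigned char* ip = bytecode;

    for (;;)
    {
        switch (*ip++)
        {
            case OP_CONSTANT:
                *stackTop++ = constants[*ip++];
                break;
            case OP_ADD_CONSTANT:
                // The win: operate directly on the item in place on the stack.
                stackTop[-1] += constants[*ip++];
                break;
            case OP_INC_LOCAL:
                stackTop[-1] += 1.0;
                break;
            case OP_PRINT:
                printf("%f\n", stackTop[-1]);
                break;
            case OP_HALT:
                return 0;
        }
    }
}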

Lua - being a register-based interpreter VM - seems to have a performance advantage here, especially in maths-heavy code, because it doesn’t need to do as many stack operations, and this is one of the reasons Lua is one of the faster bytecode interpreter languages.

The main dispatch loop can also be optimised by using “computed gotos” instead of just a case statement for each opcode within a switch statement: with switch / case statements, compilers are obliged (in C and C++ anyway) to add an upper bound value check, which slows down execution.
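
As a minimal standalone sketch of the computed goto pattern, using the GCC/Clang “labels as values” extension (again a toy dispatch loop for illustration, not Mint’s actual one):

#include <cstdio>

enum OpCode : unsigned char { OP_INC, OP_DEC, OP_PRINT, OP_HALT };

int main()
{
    // Each opcode indexes directly into a table of label addresses, so there's
    // no bounds check, and each handler gets its own indirect branch for the
    // branch predictor to train on.
    static void* dispatchTable[] = { &&op_inc, &&op_dec, &&op_print, &&op_halt };

    unsigned char bytecode[] = { OP_INC, OP_INC, OP_INC, OP_DEC, OP_PRINT, OP_HALT };
    const unsigned char* ip = bytecode;
    long accumulator = 0;

    #define DISPATCH() goto *dispatchTable[*ip++]

    DISPATCH();

op_inc:
    accumulator++;
    DISPATCH();
op_dec:
    accumulator--;
    DISPATCH();
op_print:
    printf("%ld\n", accumulator);
    DISPATCH();
op_halt:
    return 0;
}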

In terms of where things stand now performance-wise, I’m fairly happy: Mint is not the fastest interpreter language VM, but it is very competitive against Python 2, Python 3 and Ruby in most cases, and occasionally can win against Lua.

For performance comparisons, I’ve mostly been using basic problem-solving exercises from Project Euler, which are normally maths-heavy. This means the benchmark test programs I’m using are slightly skewed towards maths operations compared to more general operations (i.e. they don’t generally exercise the Garbage Collector infrastructure), but that’s the type of code execution I’m mostly interested in for the moment.

I’m going to compare Mint against four other interpreter language VMs (counting Python 2 and Python 3 separately):

  • Lua 5.3
  • Python 2.7
  • Python 3.8
  • Ruby 2.7

These are the package versions available with the Linux distro I’m using (Linux Mint) on my laptop. I’m going to compare them against Mint (VM) built with GCC 9.3, as that’s the system compiler version I assume the packages for the other interpreter language VMs were compiled with, so it seems the fairest compiler to use. Mint will be built with optimisation level -O3. Normally I also like comparing different compiler versions and optimisation levels, but I’m not going to do that this time, as my aim is to compare against other interpreter VMs, and doing so would increase the number of tests quite a lot and mean building each VM from source - although that would be the fairest way of doing a fully-comprehensive comparison if I had the time.

The tests are being done on a laptop with an Intel i5-8350U processor, with the power plugged in. I’ll make sure the CPU temperature is normal before running each test to prevent thermal throttling, and will run five identical runs of each benchmark with each interpreter language, using the mean as the final result.

I will run the following seven tests:

Test 1: Recursive Fibonacci to 35 levels.
Test 2: The Leibniz algorithm to calculate an approximate value of PI, doing 4,000,000 iterations.
Test 3: Loop value accumulation and mathematical operations: 100,000,000 loops of adding the result of temporary sums and multiplications of the loop control variable. See the code examples below.
Test 4: Looped sum of multiples of 3 or 5, up to a value of 50,000,000. A modification of Project Euler exercise 1.
Test 5: A sum of all amicable numbers below 15,000. A modification of Project Euler exercise 21.
Test 6: A count of the number of rectangles in a grid of size (100, 150). A modification of Project Euler exercise 85.
Test 7: A calculation of the sum of all primes up to 10,000,000, using the Sieve of Eratosthenes algorithm.
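
As a quick illustration of what one of these computes, here’s a minimal C++ sketch of Test 7’s Sieve of Eratosthenes prime sum (my own standalone version for reference - the actual benchmarks are implemented in each interpreter’s own language):

#include <cstdio>
#include <vector>

int main()
{
    // Sum all primes up to the limit using the Sieve of Eratosthenes.
    const size_t limit = 10000000;
    std::vector<bool> isComposite(limit + 1, false);

    unsigned long long sum = 0;
    for (size_t i = 2; i <= limit; i++)
    {
        if (!isComposite[i])
        {
            sum += i;
            // Mark all further multiples of this prime as composite.
            for (size_t j = i * i; j <= limit; j += i)
                isComposite[j] = true;
        }
    }

    printf("%llu\n", sum);
    return 0;
}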

As quick partial examples, here is the Mint script code for Test 3:

var a = 1;
for (var i = 0; i < 100000000; i += 1)
{
    a = (i + i + 3 * 2 + i + 1 - 0.42) / a;
}
print a;

here is the Lua version of Test 3:

local a = 1
for i = 0,100000000-1
do
    a = (i + i + 3 * 2 + i + 1 - 0.42) / a;
end
print (a)

here is the Python 3 version:

a = 1
for i in range(0, 100000000):
    a = (i + i + 3 * 2 + i + 1 - 0.42) / a
print(a)

and here is the Ruby version:

a = 1
0.upto 100000000-1 do |i|
    a = (i + i + 3 * 2 + i + 1 - 0.42) / a
end
print a
print "\n"

These are the results, showing average execution time for each test for all interpreter language VMs (smaller values are better):

Interpreter programming language performance benchmarks chart

Lua has a very good showing (largely, I think, because it’s a register-based VM instead of a stack-based one), and I’d expect LuaJIT running in interpreter-only (non-JIT) mode, which is heavily optimised for x86-64, would be even faster.

Mint just wins in Tests 6 and 7, and is second fastest in the remaining tests, which isn’t bad as far as I’m concerned. There’s more room for optimisation - especially with regards to the Garbage Collector, which these benchmark tests purposefully didn’t exercise, but also in other areas: Constant Folding is on my list to look at implementing - but I’m happy enough with the improvements and the speed for the moment.



Crafting Interpreters

I’ve spent a fair amount of time over the last three months following through the second part of Robert Nystrom’s fantastic Crafting Interpreters free online book, a complete step-by-step guide to “crafting” a fully-functional interpreter language virtual machine.

I made a start on the tree-walk version two years ago, but didn’t have the motivation at the time to complete it (I think knowing the bytecode VM version would be better and more useful/interesting subconsciously made me not bother as much, given the bytecode VM chapters were still being written). Just before Christmas I realised the book had been complete for a few months, so I made an effort to find time to finish writing my own version based off the book, as I had a use-case for scripting language embedding in some of my apps, for which having my own language/VM could be quite useful/interesting.

I’ve enjoyed the process of following along, learning and writing my own version immensely. I now have a version - called “Mint” - which is heavily based off the book, but with some subtle language syntax changes, and written in C++ instead of C (I’m not sure yet whether that was the best idea in the end, although I hope having a namespace might be useful for embedding in other codebases). It has several additional features over the stock book version of ‘Lox’, for example:

  • Built-in dictionary/map variable type support for key/value lookups and storage
  • Support for compound assignment operators (+=, *=, etc)
  • Array/list slicing / assignment support
  • break and continue loop-control syntax support
  • Support for multi-line comments (C/C++-style) and both “#” and “//” single-line comments
  • Ternary ? operator syntax support

Example code listings, demonstrating fizzbuzz:

// fizzbuzz

for (var i = 1; i <= 100; i += 1)
{
	if (i % 15 == 0)
		print "fizzbuzz";
	else if (i % 3 == 0)
		print "fizz";
	else if (i % 5 == 0)
		print "buzz";
	else
		print i;
}

Calculating the approximate value of PI:

# calculate PI with the Leibniz algorithm...

def calculatePI(iterations)
{
	var x = 1.0;
	var pi = 1.0;
	
	for (var i = 2; i < iterations + 2; i += 1)
	{
		x *= -1.0;
		pi += x / (2.0 * i - 1.0);
	}
	
	pi *= 4.0;
	return pi;
}

var piVal = calculatePI(40000000);
print "PI: " + toStr(piVal);

producing:

PI: 3.14159266192262

and a fully-functional Mint Class, which can convert Roman numeral strings to numbers and vice versa, downloadable here.

I’m going to continue developing it in the background, in order to attempt to integrate it into several of my applications, and also try and optimise its performance a bit and see how it compares to other interpreter languages.



Sunset Photo

Sunset Photo

No matter how many times I see stunning sunsets, I never get bored of them. The same goes for photographing them and looking at the photos afterwards: even though the photos can admittedly be somewhat repetitive when looking through them later, there’s just something I find very magical and awe-inspiring about them.



