I’ve recently started to explore Zig a bit, and while working my way through its official documentation I was delighted to learn that arrays can easily be vectorized with the @Vector builtin function!
From the docs:
const a = @Vector(4, i32){ 1, 2, 3, 4 };
const b = @Vector(4, i32){ 5, 6, 7, 8 };
const c = a + b; // c is now {6, 8, 10, 12}
In this case, vectorization allows us to perform element-wise addition without explicitly looping through the elements of a and b. This is not only a convenience; it’s also considerably faster than using a for loop!
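To make the comparison concrete, here is a small sketch of both approaches side by side. This is my own test, not from the docs, and the exact syntax assumes a reasonably recent Zig release (roughly 0.11 or later):

const std = @import("std");

test "element-wise addition: loop vs. vector" {
    const a = [4]i32{ 1, 2, 3, 4 };
    const b = [4]i32{ 5, 6, 7, 8 };

    // Explicit loop: add the elements one pair at a time.
    var looped: [4]i32 = undefined;
    for (a, b, 0..) |x, y, i| {
        looped[i] = x + y;
    }

    // Vector version: arrays coerce to vectors, and `+` is element-wise.
    const va: @Vector(4, i32) = a;
    const vb: @Vector(4, i32) = b;
    const vc = va + vb;

    try std.testing.expectEqual(looped, @as([4]i32, vc));
}

The vector expression is lowered to LLVM’s vector operations, so it can map to SIMD instructions even in builds where the plain loop would not be auto-vectorized.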
How does it achieve this speedup?
Vectorization is actually a pretty low-level technique that lets the CPU apply an operation to multiple elements of an array (or vector) simultaneously, using specialized SIMD instructions. Most compilers nowadays can replace such loop-over-a-sequence constructs with SIMD instructions during compilation (auto-vectorization). As a result, vectorization is not something that developers need to worry about too much in their daily programming.
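Just to illustrate what doing this by hand can look like in Zig, here is a sketch of a hypothetical helper (my own, not from the docs) that adds two slices element-wise in chunks of four lanes, with a scalar loop for whatever is left over:

const std = @import("std");

// Hypothetical helper: adds `b` to `a` in place, four f32 lanes at a time,
// then falls back to a scalar loop for the remaining elements.
fn addInPlace(a: []f32, b: []const f32) void {
    const Lane = @Vector(4, f32);
    var i: usize = 0;
    while (i + 4 <= a.len) : (i += 4) {
        const va: Lane = a[i..][0..4].*; // load four elements as a vector
        const vb: Lane = b[i..][0..4].*;
        a[i..][0..4].* = va + vb; // one SIMD addition, stored back
    }
    while (i < a.len) : (i += 1) { // scalar tail for leftover elements
        a[i] += b[i];
    }
}

test "chunked SIMD addition with a scalar tail" {
    var a = [_]f32{ 1, 2, 3, 4, 5, 6, 7 };
    const b = [_]f32{ 10, 10, 10, 10, 10, 10, 10 };
    addInPlace(&a, &b);
    try std.testing.expectEqual([_]f32{ 11, 12, 13, 14, 15, 16, 17 }, a);
}

The chunked loop is essentially what an auto-vectorizing compiler would generate for you; writing it out manually just makes the SIMD width and the remainder handling explicit.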
However, the popularity of libraries and frameworks that allow manual vectorization, such as numpy, shows that vectorization is a very useful tool for machine learning, particularly deep learning.
Now, I don’t mean to say that you should start doing machine learning in Zig instead of Python. Zig is a language that requires you to deal with (or, better: that allows you to care about) a lot of low-level stuff, whereas Python hides most of that complexity so that you can focus on concepts rather than implementation.
But I do nonetheless believe that this puts Zig in a position where it can be used as a “supporting language” for the same “heavy lifting” tasks that are currently dominated by old and clunky[1] C code.
[1] Ok, ok, I admit: old and clunky but also highly optimized and thoroughly tested :P