Using GPU.js in Angular

Compute matrix calculation using GPU in browser and Angular

Balram Chavan
3 min read · Feb 18, 2021


In a typical client-server architecture, the client is supposed to do a little processing of inputs and handle most of the user interaction. A web client captures user input and data and sends it to a server, where the heavy computations take place; the result is then sent back to the client for presentation. This is, or was, the orthodox way of web application development. But with advancements in computer hardware, browser capabilities, the power of JavaScript libraries, and the introduction of web workers, WebAssembly (WASM), and so on, things are not the same, are they? Nowadays a web client can do a lot of pure computation in memory. A few examples are big data visualisation libraries like D3.js, game development using WebGL, and applying math operations to big data using web workers on the client side.

Nowadays, almost all desktops and laptops have a built-in General-Purpose Graphics Processing Unit (GPGPU). These are powerful chips used for the heavy computations generally required by graphics-heavy games. But over time, programmers have found ways to apply GPGPUs to other problems as well. On the server side, there are many toolkits and programming languages available for writing GPGPU code, such as CUDA. But what if we could use the client’s GPGPU to do our heavy JavaScript tasks? Interesting thought, isn’t it?

GPU.js

Recently, I came across a library called “GPU.js” and was impressed by it. This library abstracts GPU constructs very well and allows developers to write GPU code in a traditional JavaScript style. Of course, we have to follow some new jargon and coding styles, like defining kernels, breaking up loops, and so on, but if you have worked with GPU programming in the past, this shouldn’t be new to you.

As usual, I have built a demo web application for matrix multiplication using GPU.js and Angular to see how it pans out. You can check out the live demo here:

CPU matrix multiplication

Here is the CPU multiplication code, which runs three nested for loops to calculate the product of two matrices.
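The original gist is not embedded in this copy, so here is a minimal sketch of such a triple-loop CPU implementation (the function name and signature are my own, not taken from the original gist):

```javascript
// CPU matrix multiplication: three nested loops over output rows,
// output columns, and the shared dimension. Runs in O(n^3) on a
// single thread, one output cell at a time.
function multiplyMatricesCPU(a, b, size) {
  const result = [];
  for (let y = 0; y < size; y++) {
    const row = [];
    for (let x = 0; x < size; x++) {
      let sum = 0;
      for (let i = 0; i < size; i++) {
        sum += a[y][i] * b[i][x]; // dot product of row y of a and column x of b
      }
      row.push(sum);
    }
    result.push(row);
  }
  return result;
}
```

Because every cell is computed sequentially, the running time grows cubically with the matrix size, which is exactly the work the GPU version below parallelises.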

GPU matrix multiplication

Here is the code for the GPU version of matrix multiplication. We first define a kernel (a function invoked once per output element) and hold its reference; we then call it with the two matrices and the matrix size, and it returns the result matrix. If you compare the CPU and GPU code, you will notice that the three nested for loops are gone and that some unusual variables appear, like this.thread.x and this.thread.y. This is because GPU programming follows the Single Instruction Multiple Data (SIMD) approach, where each element of the input dataset gets its own thread running the same piece of code. Each thread gets its own coordinates ( this.thread.x, this.thread.y, and this.thread.z for 3D output) to identify which element it should compute. You can read more about this in the GPU.js GitHub documentation.
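Since the gist itself is missing from this copy, here is a hedged sketch of what such a GPU.js kernel can look like. The createKernel and setOutput calls follow GPU.js’s documented API, but the function and variable names are illustrative, not the original gist’s:

```javascript
// Kernel body: plain JavaScript that GPU.js compiles to a shader.
// Inside the kernel, this.thread.x and this.thread.y identify which
// output cell the current GPU thread computes (SIMD style), so no
// outer row/column loops are needed.
function matrixMultiplyKernel(a, b, size) {
  let sum = 0;
  for (let i = 0; i < size; i++) {
    sum += a[this.thread.y][i] * b[i][this.thread.x];
  }
  return sum;
}

// Wiring it up with GPU.js (sketch; in an Angular app this would
// typically live in a service or component):
//   const gpu = new GPU();
//   const multiply = gpu.createKernel(matrixMultiplyKernel)
//     .setOutput([size, size]); // one GPU thread per output cell
//   const result = multiply(a, b, size);
```

Conceptually, the GPU runtime does the looping for us: it launches size × size threads, and each one evaluates the kernel body for its own (x, y) coordinate.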

Applications

Writing GPU code is not that straightforward: you can’t simply convert your CPU code to GPU code, and not all problem statements fit the GPU solution space. Having said that, we can use GPU code for in-memory calculations in the browser for big data visualisation, for parsing and validating big text files before sending them to the server, and so on. This opens up a new way of thinking about and building powerful web applications while reducing the load on servers.

I am excited and looking forward to seeing how web developers will use the client’s GPU capabilities to change web development as we know it!

Source Code

You can find the source code in the GitHub repository.

Cheers!
