Pixel manipulation using Rust, WebAssembly, Node.js and Webpack.

moi, convoluting

Convolution is the process of transforming an image by replacing each pixel with a weighted sum of its neighbourhood, where the weights come from a small matrix called a kernel. Depending on the kernel used, different effects such as blurring, sharpening, embossing, etc., can be achieved.

We will come back to the implementation of convolution later on, but first let’s create a project where we can play around with Rust and WebAssembly, and easily embed a generated wasm file in a web app.

The Setup

  • cargo install cargo-generate
  • cargo new --lib convoluted-mirror

Expected output:

convoluted-mirror
├── Cargo.toml
└── src
    └── lib.rs

cd into convoluted-mirror, start a Node.js project and install Webpack.

  • cd convoluted-mirror
  • npx gitignore node
  • yarn init
  • yarn add webpack webpack-cli
  • yarn add -D clean-webpack-plugin html-webpack-plugin webpack-dev-server file-loader

Create index.js and webpack.config.js at root level:

index.js

const WIDTH = 720.0;
const HEIGHT = 480.0;
const video = document.createElement("video");
document.body.appendChild(video);

// setup and play video
(async () => {
  const stream = await navigator.mediaDevices.getUserMedia({
    audio: false,
    video: {
      facingMode: "user",
      width: WIDTH,
      height: HEIGHT,
    },
  });
  video.srcObject = stream;
  await video.play();
})();

webpack.config.js

const path = require("path");
const HtmlWebpackPlugin = require("html-webpack-plugin");
const { CleanWebpackPlugin } = require("clean-webpack-plugin");

module.exports = {
  mode: "none",
  entry: {
    index: "./index.js",
  },
  devServer: {
    contentBase: path.join(__dirname, "dist"),
  },
  module: {
    rules: [
      {
        test: /\.wasm$/,
        include: /pkg/,
        loader: "file-loader",
        type: "javascript/auto",
        sideEffects: true,
        options: {
          name: "[name].[ext]",
        },
      },
    ],
  },
  plugins: [
    new CleanWebpackPlugin(),
    new HtmlWebpackPlugin({
      title: "convoluted mirror",
    }),
  ],
  output: {
    filename: "[name].js",
    path: path.resolve(__dirname, "dist"),
  },
  experiments: {
    asyncWebAssembly: true,
  },
};

In package.json (at root level) add the following scripts:

"scripts": {
"build": "webpack",
"wasm": "wasm-pack build --target web",
"start": "webpack serve --hot"
},

And add these changes to Cargo.toml for yarn wasm to work (the wasm script runs wasm-pack, so it needs to be installed first, e.g. with cargo install wasm-pack):

[package]
name = "convoluted-mirror"
version = "0.1.0"
authors = ["roberto.torres"]
edition = "2018"

[lib]
crate-type = ["cdylib", "rlib"]

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

[dependencies]
wasm-bindgen = "0.2.67"

yarn wasm should create a pkg folder with convoluted_mirror_bg.wasm and a JavaScript file, convoluted_mirror.js, that works as an interface to interact with the wasm file:

pkg
├── convoluted_mirror.d.ts
├── convoluted_mirror.js
├── convoluted_mirror_bg.wasm
├── convoluted_mirror_bg.wasm.d.ts
└── package.json

yarn build should create the dist folder:

dist
├── index.html
└── index.js

and yarn start should start a page displaying video from your webcam.

Let’s change lib.rs in the src folder. Remove all the Cargo autogenerated code and replace it with:

use wasm_bindgen::prelude::*;
use std::fmt;

#[wasm_bindgen]
pub struct Mirror {
    n: i32,
}

#[wasm_bindgen]
impl Mirror {
    #[wasm_bindgen(constructor)]
    pub fn new(n: i32) -> Mirror {
        Mirror { n }
    }

    pub fn talk(&self) -> String {
        self.to_string()
    }
}

impl fmt::Display for Mirror {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "Mirroring value from Rust: {}", &self.n)
    }
}

We want to use the generated wasm file in our web app, so we'll have to include it and instantiate it from index.js:

import init, * as wasm from "./pkg/convoluted_mirror.js";
import mirrorwasm from "./pkg/convoluted_mirror_bg.wasm";

const WIDTH = 720.0;
const HEIGHT = 480.0;
const video = document.createElement("video");
document.body.appendChild(video);

// setup and play video
(async () => {
  await init(mirrorwasm);
  const mirror = new wasm.Mirror(777);
  console.log(mirror.talk());
  const stream = await navigator.mediaDevices.getUserMedia({
    audio: false,
    video: {
      facingMode: "user",
      width: WIDTH,
      height: HEIGHT,
    },
  });
  video.srcObject = stream;
  await video.play();
})();

The first import at the top brings in the default init function from the JavaScript interface generated by wasm-pack and wasm-bindgen, and binds the rest of that module's exports to the name wasm. The init function needs the wasm binary as input (here, the URL that file-loader emits for it), hence the second import.

After await init(mirrorwasm), the wasm module exposes the Mirror class written in Rust to JavaScript, allowing objects to be instantiated and their methods called:

const mirror = new wasm.Mirror(777);
console.log(mirror.talk());
// Mirroring value from Rust: 777

Run yarn build and see how Webpack is now including convoluted_mirror_bg.wasm in the dist folder:

dist
├── convoluted_mirror_bg.wasm
├── index.html
└── index.js

Verify that the code in pkg/convoluted_mirror.js has been merged into dist/index.js.

If everything ran correctly so far, you should have ended up with something like this. Sweet!

Run yarn && yarn wasm && yarn start and the app should display your webcam's video in the browser.

To tidy things up a little bit further, add the following folder at root level and make the required changes to webpack.config.js and package.json to use app/html/index.html as the main template.

app
├── css
│   └── styles.css
└── html
    └── index.html
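For webpack.config.js this typically just means pointing HtmlWebpackPlugin at the new file, e.g. passing template: "./app/html/index.html" in its options. The template itself should contain the two <canvas> elements, with ids mirrorCanvas and mirrorConvolute and sized to the WIDTH and HEIGHT used in index.js, that the code in the next section looks up.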

The Rust Mirror

index.js now captures each video frame on a canvas (mirrorCanvas) and lets the Rust code draw the convoluted reflection on a second canvas (mirrorConvolute):

import init, * as wasm from "./pkg/convoluted_mirror.js";
import mirrorwasm from "./pkg/convoluted_mirror_bg.wasm";

const WIDTH = 720.0;
const HEIGHT = 480.0;
// offscreen video element used only as the frame source
const video = document.createElement("video");
let mirrorCanvas = document.getElementById("mirrorCanvas");
let mirrorConvolute = document.getElementById("mirrorConvolute");

// setup and play video
(async () => {
  await init(mirrorwasm);
  const stream = await navigator.mediaDevices.getUserMedia({
    audio: false,
    video: {
      facingMode: "user",
      width: WIDTH,
      height: HEIGHT,
    },
  });
  video.srcObject = stream;
  await video.play();

  const mirror = new wasm.Mirror(mirrorCanvas.getContext("2d"), WIDTH, HEIGHT);

  async function animate() {
    // draw frame from video stream to mirrorCanvas
    mirrorCanvas.getContext("2d").drawImage(video, 0, 0);
    // draw convoluted reflection on the mirrorConvolute canvas
    mirror.convolute(mirrorConvolute.getContext("2d"));
    requestAnimationFrame(animate);
  }
  requestAnimationFrame(animate);
})();

All the image processing work will now be handled by the Rust code. There is no need to change index.js any further.

Add a Frame struct that will store the pixel array and implement a draw and a convolute method. Create a new file, frame.rs, in the src folder:

src
├── frame.rs
└── lib.rs

frame.rs

use wasm_bindgen::prelude::*;
use wasm_bindgen::Clamped;
use web_sys::{CanvasRenderingContext2d, ImageData};

#[derive(Clone)]
pub struct Frame {
    pixels: Vec<u8>,
    width: u32,
    height: u32,
}

impl Frame {
    pub fn new(imgdata: ImageData) -> Self {
        Frame {
            pixels: imgdata.data().to_vec(),
            width: imgdata.width(),
            height: imgdata.height(),
        }
    }

    pub fn convolute(&mut self) {
        self.pixels = self
            .pixels
            .iter()
            .enumerate()
            .map(|(i, x)| if i % 4 == 0 { *x } else { 255 })
            .collect::<Vec<u8>>();
    }

    pub fn draw(&self, ctx: CanvasRenderingContext2d) -> Result<(), JsValue> {
        let data = ImageData::new_with_u8_clamped_array_and_sh(
            Clamped(&self.pixels),
            self.width,
            self.height,
        )?;
        ctx.clear_rect(0.0, 0.0, self.width as f64, self.height as f64);
        ctx.put_image_data(&data, 0.0, 0.0)
    }
}

Since frame.rs uses web_sys, the web-sys crate also needs to be added under [dependencies] in Cargo.toml, with its CanvasRenderingContext2d and ImageData features enabled. Now let's change lib.rs to include frame.rs and add the convolute method that will be called from index.js:

use wasm_bindgen::prelude::*;
use web_sys::CanvasRenderingContext2d;

pub mod frame;
pub use self::frame::Frame;

#[wasm_bindgen]
pub struct Mirror {
    context: CanvasRenderingContext2d,
    width: u32,
    height: u32,
}

#[wasm_bindgen]
impl Mirror {
    #[wasm_bindgen(constructor)]
    pub fn new(canvasctx: CanvasRenderingContext2d, w: u32, h: u32) -> Mirror {
        Mirror {
            context: canvasctx,
            width: w,
            height: h,
        }
    }

    pub fn convolute(&mut self, ctxt: CanvasRenderingContext2d) -> Result<(), JsValue> {
        // gets the imageData from self.context
        let imgdata = self
            .context
            .get_image_data(0.0, 0.0, self.width as f64, self.height as f64)
            .unwrap();
        // and draws it on ctxt using frame.convolute and frame.draw
        let mut frm = frame::Frame::new(imgdata);
        frm.convolute();
        frm.draw(ctxt)
    }
}

Frame has a property pixels: Vec<u8>, a vector of 8-bit unsigned integers. The image captured from the canvas is an RGBA image stored in this single, long, one-dimensional array of u8 elements. The first element of the array is the red component of the first pixel (with values in [0..255]), the second the green, the third the blue and the fourth the alpha.
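As a quick illustration (a standalone helper written for this post, not part of the project's code), the flat index of each channel of the pixel at column x, row y can be computed like this:

/// Indices of the (r, g, b, a) components of pixel (x, y)
/// inside a flat RGBA buffer that is `width` pixels wide.
fn rgba_indices(x: usize, y: usize, width: usize) -> (usize, usize, usize, usize) {
    let red = 4 * (y * width + x);
    (red, red + 1, red + 2, red + 3)
}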

The frame.convolute method loops through the pixels u8 array and turns each element's value to 255 unless the element's index is a multiple of 4. Since the array is 0-indexed, this keeps only the red components (i % 4 == 0) and sets everything else to 255. Each pixel becomes {r, 255, 255, 255}: white when r = 255, cyan when r = 0, and a shade in between otherwise.

Run yarn wasm && yarn start and you should get a cyan coloured video.

Convolution

For each pixel with 8 neighbours

  • Create a 3x3 matrix with the surrounding neighbours.
  • Multiply the matrix element-wise by the kernel and add up the products. In Rust:
    let mut p = 0;
    for i in 0..9 { p += kernel[i] * mat[i] }
  • Save the result as the convoluted value of the pixel.

To make things simpler, let's assume a greyscale image that has a single u8 value per pixel (grey) instead of four (RGBA). In the example below, the first pixel with 8 neighbours has a value of 220 and is highlighted in red.

The output value in this case, after multiplying the kernel by the pixel’s matrix, is 14 as highlighted in red on the output image. The pixels at the border on the output image could be turned to 0 or could just keep their current value. The convolution operation won’t be defined for these pixels in any case.

For the next pixel (x: 1, y: 2), the convoluted value is 180.

After applying these steps to all the pixels that satisfy the condition of having 8 neighbours, the output grid will be the convoluted version of the original image.
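As a sketch of these steps for the greyscale case (a standalone function written for illustration; the name and signature are not from the project's code), the convolution of a one-byte-per-pixel image could look like this:

// Convolute a greyscale image (one u8 per pixel) with a 3x3 kernel.
// Border pixels, which don't have 8 neighbours, keep their original value.
// Assumes the image is at least 3x3 pixels.
fn convolute_grey(pixels: &[u8], width: usize, height: usize, kernel: [i32; 9]) -> Vec<u8> {
    let mut out = pixels.to_vec();
    for y in 1..height - 1 {
        for x in 1..width - 1 {
            let mut p = 0;
            // multiply the 3x3 neighbourhood centred on (x, y) by the kernel
            for ky in 0..3 {
                for kx in 0..3 {
                    let neighbour = pixels[(y + ky - 1) * width + (x + kx - 1)] as i32;
                    p += kernel[ky * 3 + kx] * neighbour;
                }
            }
            // warning: truncation when converting to u8
            out[y * width + x] = p.abs() as u8;
        }
    }
    out
}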

The same principle could be applied to each colour channel of an RGBA image. The convolute function in that case looks like this:

pub fn convolute(&mut self) {
    let w = self.width as usize;
    let h = self.height as usize;
    let mut convpixels = vec![0; w * h * 4];
    let kernel = vec![1, -2, 1, -2, 4, -2, 1, -2, 1];
    for y in 0..h {
        for x in 0..w {
            let red = 4 * (y * w + x);
            let green = red + 1;
            let blue = green + 1;
            let alpha = blue + 1;
            if x == 0 || x == w - 1 || y == 0 || y == h - 1 {
                // border pixels keep their original value
                convpixels[red] = self.pixels[red];
                convpixels[green] = self.pixels[green];
                convpixels[blue] = self.pixels[blue];
                convpixels[alpha] = self.pixels[alpha];
            } else {
                // 3x3 neighbourhood for each colour channel
                let mut row1 = 4 * ((y - 1) * w + (x - 1));
                let mut row2 = row1 + 4 * w;
                let mut row3 = row2 + 4 * w;
                let m_r = vec![
                    self.pixels[row1],
                    self.pixels[row1 + 4],
                    self.pixels[row1 + 8],
                    self.pixels[row2],
                    self.pixels[row2 + 4],
                    self.pixels[row2 + 8],
                    self.pixels[row3],
                    self.pixels[row3 + 4],
                    self.pixels[row3 + 8],
                ];
                row1 += 1;
                row2 += 1;
                row3 += 1;
                let m_g = vec![
                    self.pixels[row1],
                    self.pixels[row1 + 4],
                    self.pixels[row1 + 8],
                    self.pixels[row2],
                    self.pixels[row2 + 4],
                    self.pixels[row2 + 8],
                    self.pixels[row3],
                    self.pixels[row3 + 4],
                    self.pixels[row3 + 8],
                ];
                row1 += 1;
                row2 += 1;
                row3 += 1;
                let m_b = vec![
                    self.pixels[row1],
                    self.pixels[row1 + 4],
                    self.pixels[row1 + 8],
                    self.pixels[row2],
                    self.pixels[row2 + 4],
                    self.pixels[row2 + 8],
                    self.pixels[row3],
                    self.pixels[row3 + 4],
                    self.pixels[row3 + 8],
                ];
                let mut pr = 0;
                let mut pg = 0;
                let mut pb = 0;
                for i in 0..9 {
                    pr += (kernel[i] as i32) * (m_r[i] as i32);
                    pg += (kernel[i] as i32) * (m_g[i] as i32);
                    pb += (kernel[i] as i32) * (m_b[i] as i32);
                }
                // warning: truncation when converting to u8
                convpixels[red] = pr.abs() as u8;
                convpixels[green] = pg.abs() as u8;
                convpixels[blue] = pb.abs() as u8;
                convpixels[alpha] = self.pixels[alpha];
            }
        }
    }
    self.pixels = convpixels;
}
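Different kernels produce the other effects mentioned at the beginning. None of these appear in the project's code, but swapping the kernel binding in convolute for one of these classic 3x3 kernels is enough to try them out (a blur kernel such as the box blur also needs pr, pg and pb divided by 9 before storing, to keep the brightness in range):

// sharpen
let kernel = vec![0, -1, 0, -1, 5, -1, 0, -1, 0];
// edge detection
let kernel = vec![-1, -1, -1, -1, 8, -1, -1, -1, -1];
// emboss
let kernel = vec![-2, -1, 0, -1, 1, 1, 0, 1, 2];
// box blur (divide the accumulated value by 9 before storing it)
let kernel = vec![1, 1, 1, 1, 1, 1, 1, 1, 1];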

Let's apply colour convolution not to a single still frame, but to the result of subtracting the previous frame of the video stream from the current one. 😱

https://convoluted-mirror.web.app/
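The exact code behind the demo isn't shown here, but the idea can be sketched as a small helper (the name and signature are illustrative): keep the previous frame's pixel buffer around, take the per-channel difference against the current frame, and feed the result to convolute as before.

// Sketch only: per-channel absolute difference between the current frame and
// the previous one, leaving the alpha channel untouched so the image stays opaque.
// The resulting buffer can then be convoluted exactly like a normal frame.
fn difference(current: &mut [u8], previous: &[u8]) {
    for (i, p) in current.iter_mut().enumerate() {
        if i % 4 != 3 {
            // the absolute difference always fits in 0..=255
            *p = (*p as i16 - previous[i] as i16).abs() as u8;
        }
    }
}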

If you made it this far, thank you for reading!
