Rock, Paper, Scissors

12 Aug 2022 · By Joshua O'Sullivan

A few weeks ago I remembered an old blog post where someone made their computer play rock paper scissors against itself zillions of times to generate cool patterns just like in this video:

It’s a pretty relaxing pattern to watch, kind of like a lava lamp. I could watch it for hours. That YouTube video isn’t long enough though, and I wanted to share it with my friends easily. I needed a website, so I made one.

What you just watched is called a cellular automaton. Want to know how I made it? Read on! There’s a lot to cover here, so unless you’re already familiar with cellular automata, OpenGL and p5.js I wouldn’t suggest trying to read everything in one sitting.

What’s A Cellular Automaton?

In a typical cellular automaton you have a lattice of cells that each hold a state. For example, a digital image is a lattice of square cells (commonly known as pixels) and each cell has a colour which we can refer to as its state. We can point to a pixel in an image and say “red is the state of that cell”.

A diagram of an upright square lattice with a checkerboard pattern of red and cream coloured cells, with one of the red cells annotated with the quote 'Red is the state of that cell'

The next thing we need to know about cellular automata is what a “generation” is. A generation is all of the cells in our lattice, with the states they had at a particular point in time. You can think of generations as frames in a video - in each generation, we have all the same cells arranged in the same lattice, but some of them will have different states than they had in the previous generation.

Two copies of an upright square lattice with a red checkerboard pattern, where in the first copy, titled 'Generation 1', one of the cells that should be red is cream, and in the second copy, labelled 'Generation 2', the erroneously cream coloured cell has now been coloured red

Now for the interesting bit, how do we decide what the state of each cell should be in each generation? In cellular automata, we create rules that tell us the answer to this question based upon the state of the cell and its neighbours in the previous generation.

An example set of rules might be:

  1. If a cell is red and the cell below it is cream, it should become cream.
  2. If a cell is cream and the cell above it is red, it should become red.

You might be able to visualise this as all the cream cells passing their state upwards, while the red cells pass their state downwards. Here’s what that looks like on a square lattice of 64x64, initialised with cells that are randomly red or cream:
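If you want to play with this rule away from the browser, here's a minimal sketch of one generation step in plain JavaScript (my own illustration, not the site's source; I've wrapped the lattice around at the edges, which the rules above don't specify):

// Computes the next generation of a lattice given as a 2D array
// of 'red'/'cream' strings, wrapping around at the edges
function step(lattice) {
  const height = lattice.length
  return lattice.map((row, y) =>
    row.map((state, x) => {
      const below = lattice[(y + 1) % height][x]
      const above = lattice[(y - 1 + height) % height][x]
      if (state === 'red' && below === 'cream') return 'cream' // rule 1
      if (state === 'cream' && above === 'red') return 'red'   // rule 2
      return state
    })
  )
}

// A 64x64 lattice initialised with cells that are randomly red or cream
let lattice = Array.from({ length: 64 }, () =>
  Array.from({ length: 64 }, () => Math.random() < 0.5 ? 'red' : 'cream')
)
lattice = step(lattice) // advance one generation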

With a few simple decisions, we can create automata that behave in a huge variety of ways: what size and type of lattice should we use - hexagons arranged in a honeycomb pattern, perhaps? What possible states should we have? Which of the neighbouring cells should we consider in our rules? How should we use the states of the neighbours to pick a new state? What initial states should we use for all our cells?

There are also lots of creative ways to play with this - what if we applied different rules at different times? What if we had continuous states instead of discrete ones? What if changing the state of a cell made a noise?

What Are The Rules Of The Rock, Paper, Scissors Cellular Automaton?

In my rock, paper, scissors cellular automaton each cell can be in one of three states: rock, paper or scissors. I use three different colours to indicate the state of a cell: red for rock, green for paper and blue for scissors. The initial state of each cell is randomly selected from rock, paper or scissors, and I arrange my cells into a square lattice.

The rule I use to determine the next state of a cell is this: the cell plays rock, paper, scissors against each of its neighbours, including the diagonals (the technical term for this set of neighbours is the Moore Neighbourhood). If it loses against at least three of them, it becomes the state that it lost to. Otherwise, it stays the same.

For example, if the state of a cell was scissors, and four of its neighbours’ states were rock, then it would become rock. However, if only two of them were rock, and four were paper, it would remain as scissors.

An illustration of a cell whose state is 'Scissors' surrounded by four 'Rock' neighbours, two 'Scissors' neighbours and two 'Paper' neighbours, with an arrow pointing to a second illustration where state of the cell has become 'Rock'.
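In code, the rule for a single cell might look something like this minimal JavaScript sketch (losesTo and nextState are names I've made up for illustration):

// What each state loses to
const losesTo = { rock: 'paper', paper: 'scissors', scissors: 'rock' }

// Takes a cell's state and an array of its eight neighbours' states,
// and returns the cell's next state
function nextState(state, neighbours) {
  const winner = losesTo[state]
  const losses = neighbours.filter(n => n === winner).length
  return losses >= 3 ? winner : state
}

// The example above: scissors surrounded by four rocks becomes rock
nextState('scissors', ['rock', 'rock', 'rock', 'rock', 'scissors', 'scissors', 'paper', 'paper'])
// => 'rock'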

How Can We Run Cellular Automata In A Web Browser?

We could write some JavaScript to run a cellular automaton simulation in the browser and paint it to a little canvas. Problem is, it won’t run fast enough if we want to have lots of cells.

Instead, we’re going to take advantage of the fact that applying the rule to each cell in our cellular automaton is an entirely independent process - I could give a hundred people a copy of my lattice, then allocate sets of cells to each person and ask them to figure out what those cells’ next state should be. When everyone’s done, we could put all the pieces back together to construct the next generation of the lattice.

In other words, what we want to do is a parallelisable process - it can be broken down into many subtasks that can all be completed entirely independently.

A diagram of a large task represented as four red boxes - joined together to create a rectangle, being broken down into four smaller subtasks - by separating the boxes, which are then repainted yellow - to 'complete' them, then constituted back into a whole final result - by joining the boxes back together again to create one large yellow rectangle
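Here's that decomposition sketched in plain JavaScript. It runs single-threaded, so it only illustrates the shape of the problem; the point is that each chunk only reads the shared previous generation, so the chunks could be computed in any order, or all at once:

// The previous generation, shared read-only by every subtask
const previous = Array.from({ length: 8 }, () => Array(8).fill(0))

// Break the lattice into chunks of rows...
const rowsPerChunk = 2
const chunks = []
for (let i = 0; i < previous.length; i += rowsPerChunk) {
  chunks.push(previous.slice(i, i + rowsPerChunk))
}

// ...complete each chunk entirely independently of the others...
const completed = chunks.map(chunk =>
  chunk.map(row => row.map(cell => cell + 1)) // a stand-in for the real rule
)

// ...then join the pieces back together into the next generation
const next = completed.flat()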

GPUs are great at dealing with these kinds of problems, since many graphics problems can be broken down like this. But how do we program things for our GPU? And how do we do that from a web browser? The answer is WebGL!

WebGL is a derivative of OpenGL, which is an API we can use for asking our GPU to do all sorts of things.

When we ask our GPU to render a 3D environment, by default there is nothing in that environment - it is just an empty space. In order for anything to be drawn, we need to give it some triangles. We’re going to use two triangles to make a rectangle, our 2D canvas in this otherwise empty void.

Why do we have to give our GPU two triangles to make a rectangle? Because modern GPUs only know how to draw triangles. Triangles are the most primitive polygon, which makes them the easiest polygon to work with, and we can construct any other polygon out of them when we need to.
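For example, a rectangle covering the unit square can be described as two triangles that share the diagonal from (0,0) to (1,1) - the same four corners we'll later hand to p5.js' quad function:

// Two triangles that together cover the unit square
const triangleA = [
  [0, 0], // top left
  [1, 0], // top right
  [1, 1], // bottom right
]
const triangleB = [
  [0, 0], // top left
  [1, 1], // bottom right
  [0, 1], // bottom left
]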

We’re then going to colour those triangles in using two little programs called shaders, which will be written in a language called GLSL (OpenGL Shading Language). We need to write two types of shader: a vertex shader and a fragment shader.

What Do We Need A Vertex Shader For?

A vertex shader[1] is a little GLSL program which is run for every vertex that we give to our GPU. Each point on a triangle where two of its edges meet is a vertex, so each triangle has three vertices:

A diagram of a triangle with labelled vertexes

A vertex shader takes the coordinates of the vertices we specify, and maps them to normalised device coordinates, which range from -1.0 to 1.0 in the X, Y and Z dimensions and describe where on the screen the triangle should actually be drawn. This puts the XY coordinate (0,0) in the centre of the screen (there is also a Z coordinate for depth, but since we’re drawing in 2D, this will always be 0).

We can use a vertex shader to do things like applying a perspective, so we can move around inside of our environment by changing where the triangles are painted on our screen. We’re not going to do any of that, though, because we just need a rectangle to fill our screen.

To make it easier to work with our shaders, we really want the origin (0,0) to be in the top left corner of the screen - to avoid dealing with negative coordinates. Our vertex shader ultimately needs to return the top left corner as (-1,-1), so to transform from our coordinates into normalised device coordinates, we need to multiply by 2 and subtract 1. For example, (0,0) maps to (-1,-1), the centre (0.5,0.5) maps to (0,0), and (1,1) maps to (1,1).

Our vertex shader will return the normalised device coordinate by assigning it to gl_Position at the end of our main function. It’s also going to pass through a value called vTexCoord which we’ll use in our fragment shader later. Don’t worry about this for now!

attribute vec3 aPosition;
attribute vec2 aTexCoord;

varying vec2 vTexCoord;

void main() {
  vTexCoord = aTexCoord;

  vec4 positionVec4 = vec4(aPosition, 1.0);
  positionVec4.xy = positionVec4.xy * 2.0 - 1.0;

  gl_Position = positionVec4;
}

What Do We Need A Fragment Shader For?

When we give our GPU a triangle to render, it uses the vertex shader to determine where on the screen the triangle should be painted. Then, for every pixel on the screen that intersects the triangle, “fragments” are generated. For each fragment, the fragment shader is run to determine what colour it should be painted.

A diagram of a triangle rasterised onto a screen, with one of the pixels annotated as 'A Fragment'

It’s important not to confuse fragments with our automaton’s cells; each cell will be made up of multiple fragments, since we want each cell of our automaton to be bigger than one pixel.

We need to store the state of each cell in order to apply the rules in the next generation. In a normal program, you might use a 2-dimensional array for this, but on a GPU we can do this much more efficiently! We’ll store the current state of the automaton in an image called a backBuffer, where each pixel’s colour represents the state of a cell. Then in the next generation, to look up a state, we just sample what colour that pixel is.
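Since red means rock, green means paper and blue means scissors, the encoding between states and pixel colours looks something like this sketch in plain JavaScript (the helper names are my own, for illustration):

// Encode each state as one fully-lit colour channel (RGBA)
const stateToColour = {
  rock:     [255, 0, 0, 255], // red
  paper:    [0, 255, 0, 255], // green
  scissors: [0, 0, 255, 255], // blue
}

// Decode a sampled pixel back into a state by checking which channel is lit
function colourToState([r, g, b]) {
  if (r > 127) return 'rock'
  if (g > 127) return 'paper'
  return 'scissors'
}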

At the start of our shader we take care of a few chores:

  1. We specify what precision should be used for our floating point values, as fragment shaders don’t have a default. Don’t worry if you don’t know what this means - it’s not that important.
  2. We declare our varying vTexCoord value that is passed to us by our vertex shader. This will hold the UV coordinate of our fragment, which will be on the scale of 0-1. These are called “varying” because they vary across fragments.
  3. We declare two uniforms: backBuffer, which contains the previous generation of our cellular automaton as a 2D image we can sample from; and canvasSize, which is the size of our canvas in pixels. Taking the reciprocal of canvasSize tells us how big each pixel is in UV coordinate space. These are called “uniforms” because they are the same (uniform) across all of the fragments.
precision mediump float;

varying vec2 vTexCoord;

uniform sampler2D backBuffer;
uniform vec2 canvasSize;

We then define a function that’s going to take a UV coordinate of the fragment and return the coordinate of the centre of the cell it’s in. This is useful as it allows us to have lattices that differ in size from the actual canvas we’re drawing on. This function also flips the Y component: by convention UV coordinates have their origin {x:0,y:0} in the bottom left, but we’re sampling from an image where the origin {x:0,y:0} is in the top left.

// Converts a UV coordinate to the center position of the cell it belongs to
vec2 uvToCellCoord(vec2 uv) {
  vec2 cellCoord = (floor(uv * canvasSize) + ceil(uv * canvasSize)) / 2.0 / canvasSize;
  cellCoord.y = 1.0 - cellCoord.y;
  return cellCoord;
}
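To convince yourself the arithmetic works, here's the same function mirrored in JavaScript with a worked example (my own translation, not part of the shader):

// A JavaScript mirror of the shader's uvToCellCoord, for a square lattice
function uvToCellCoord(u, v, size) {
  const centre = n => (Math.floor(n * size) + Math.ceil(n * size)) / 2 / size
  return { x: centre(u), y: 1 - centre(v) } // flip the Y component
}

uvToCellCoord(0.501, 0.2505, 512)
// => { x: 0.5009765625, y: 0.7490234375 } - the centre of the cell, with Y flipped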

Next, we define a function which I’ve called getNeighbour, which returns the RGBA value (as a vec4) of the neighbour of a given cell at the given offset. For example, getNeighbour(vec2(0.1,0.1), vec2(-1,0)) would return the RGBA value of the cell to the left of the cell at (0.1,0.1).

We use the canvasSize uniform in this function to convert the offset argument from cell to UV coordinates, and sample from the backBuffer uniform using the texture2D function:

// Returns the value of a neighbour of the cell at cellCoord given an offset in cells
vec4 getNeighbour(vec2 cellCoord, vec2 offset) {
  vec2 pixSize = 1.0 / canvasSize;
  return texture2D(backBuffer, (cellCoord + (offset * pixSize)));
}

Now that we have these useful uvToCellCoord and getNeighbour functions defined, we can write our main method, which is fairly straightforward - we need to do three things:

  1. Get the value of the current cell.
  2. Add up the states of all the cell’s neighbours, including the diagonals.
  3. Apply our rock, paper, scissors rule and return the value by making an assignment to gl_FragColor.

Because we’re storing our three states as red, green, and blue in an RGB image, we can work out whether any state appears among a cell’s neighbours three or more times by simply adding their RGB values together. If five neighbours are red and one is blue, the sum will be (5,0,1).

void main() {
  vec2 cellCoord = uvToCellCoord(vTexCoord);

  // 1. Get the value of the current cell
  vec4 currentVal = getNeighbour(cellCoord, vec2(0.0, 0.0));

  // 2. Add up the values of all eight neighbours
  vec4 neighbourhoodSum = getNeighbour(cellCoord, vec2(-1.0, -1.0))
    + getNeighbour(cellCoord, vec2(-1.0, 0.0))
    + getNeighbour(cellCoord, vec2(-1.0, 1.0))
    + getNeighbour(cellCoord, vec2( 0.0, -1.0))
    + getNeighbour(cellCoord, vec2( 0.0, 1.0))
    + getNeighbour(cellCoord, vec2( 1.0, -1.0))
    + getNeighbour(cellCoord, vec2( 1.0, 0.0))
    + getNeighbour(cellCoord, vec2( 1.0, 1.0));

  // 3. Apply the rock, paper, scissors rule
  if (currentVal.r > 0.5 && neighbourhoodSum.g >= 3.0) {
    gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0); // rock loses to paper
  } else if (currentVal.g > 0.5 && neighbourhoodSum.b >= 3.0) {
    gl_FragColor = vec4(0.0, 0.0, 1.0, 1.0); // paper loses to scissors
  } else if (currentVal.b > 0.5 && neighbourhoodSum.r >= 3.0) {
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); // scissors loses to rock
  } else if (currentVal.r > 0.5) {
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); // rock stays rock
  } else if (currentVal.g > 0.5) {
    gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0); // paper stays paper
  } else if (currentVal.b > 0.5) {
    gl_FragColor = vec4(0.0, 0.0, 1.0, 1.0); // scissors stays scissors
  }
}
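Notice that there’s no final else branch: every cell in the backBuffer is exactly one of pure red, green or blue, so one of the six branches always matches. The first three branches handle cells that lose and change state; the last three repaint cells that keep the state they already had.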

Tying It All Together In p5.js

We could use the WebGL API on its own, but to make things a little easier I’m going to use p5.js. What’s p5.js?

p5.js is a JavaScript library for creative coding, with a focus on making coding accessible and inclusive for artists, designers, educators, beginners, and anyone else! p5.js is free and open-source because we believe software, and the tools to learn it, should be accessible to everyone.

This should make it easier for people to adapt the code we build here to answer creative questions, like “What if changing the state of a cell made a noise?”.

First of all, we declare some constants and global variables. Our lattice width and height will be 512 cells, and we’ll create global variables to store our shader, our backBuffer, and our canvas.

const latticeWidth = 512
const latticeHeight = 512

let rockPaperScissorsShader
let backBuffer
let canvas

Next, we use p5.js’ preload method to load in our shader.

function preload() {
  rockPaperScissorsShader = loadShader('rockPaperScissors/vertex.glsl', 'rockPaperScissors/fragment.glsl')
}

We then use p5.js’ setup method to:

  1. Create our canvas and backBuffer.
  2. Set our frame rate to 30fps, so everyone has a similar experience.
  3. Create our initial generation as an image of randomly red, green or blue pixels.
  4. Load our initial generation into our backBuffer.

function setup() {
  canvas = createCanvas(latticeWidth, latticeHeight, WEBGL)
  backBuffer = createGraphics(latticeWidth, latticeHeight, WEBGL)

  frameRate(30)

  let initialFrame = createImage(latticeWidth, latticeHeight)
  initialFrame.loadPixels()
  for (let y = 0; y < initialFrame.height; y++) {
    for (let x = 0; x < initialFrame.width; x++) {
      // The RGB and alpha (opacity) values of each pixel are stored in a single long list
      // Work out where the RGBA values of this (x,y) coordinate start
      let index = (x + y * latticeWidth) * 4
      // index is the red value, green is at index+1, blue is at index+2
      // Pick R, G or B randomly and set it to maximum
      initialFrame.pixels[index + random([0, 1, 2])] = 255
      // We always want opacity to be full
      initialFrame.pixels[index + 3] = 255
    }
  }
  initialFrame.updatePixels()

  backBuffer.clear()
  backBuffer.image(initialFrame, latticeWidth * -0.5, latticeHeight * -0.5)
}

The last piece of the puzzle is to use p5.js’ draw method. Here we:

  1. Clear the canvas to get ready for a new generation.
  2. Apply our rockPaperScissorsShader and provide it with our backBuffer and the size of the lattice.
  3. Redraw the two triangles (a quad) that our shader will draw our cellular automaton onto.
  4. Clear out the backBuffer and load the new generation back into it.

function draw() {
  clear()

  shader(rockPaperScissorsShader)
  rockPaperScissorsShader.setUniform('backBuffer', backBuffer)
  rockPaperScissorsShader.setUniform('canvasSize', [latticeWidth, latticeHeight])

  quad(0, 0, 1, 0, 1, 1, 0, 1)

  backBuffer.clear()
  backBuffer.image(canvas, latticeWidth * -0.5, latticeHeight * -0.5)
}

And voilà, we have our cellular automaton running in the browser:

Where Can I Go From Here?

If you’ve had a look at scissorspaper.rocks already you’ll have noticed I’m not using red, green and blue for each of the three states - each time you refresh the page, you get a new randomly generated colour theme. This is just one of the things you might want to change about the implementation we’ve walked through in this blog post.
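For example, one way to support a random colour theme is to pick a palette in JavaScript and pass it to the fragment shader as extra uniforms. This is a sketch of the idea only - the uniform names are made up, and the real scissorspaper.rocks source may do it differently:

// In setup(), pick a random colour for each of the three states...
const palette = Array.from({ length: 3 }, () =>
  [Math.random(), Math.random(), Math.random()]
)

// ...then in draw(), hand the palette to the shader alongside the other uniforms
rockPaperScissorsShader.setUniform('rockColour', palette[0])
rockPaperScissorsShader.setUniform('paperColour', palette[1])
rockPaperScissorsShader.setUniform('scissorsColour', palette[2])

You'd also want to keep the red/green/blue state encoding in the backBuffer separate from the colours you actually display, for example by rendering the display colours in a second pass.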

If you’re interested in digging into what other changes I’ve made for scissorspaper.rocks, you can find all the source code on GitHub at two-twelve/scissorspaper.rocks. It’s MIT licensed, so do what you want with it. I’d love to see someone extend it to rock, paper, scissors, lizard, Spock!


  1. How is a vertex shader considered to be “shading” things? Consider a triangle with a reflective or iridescent surface - depending upon the perspective you view it from, it may appear a different colour. ↩︎