Tutorial: Shader simulations

I was messing around with shaders for the ‘space game’, trying to figure out how to do a nice background shader for nebulae and the like. My attempts at a hydrodynamics shader weren’t working, so I just started messing around. The result is this (uses WebGL).

Each pixel of the shader simulates the equations of motion of a tumbling object. When an object’s moments of inertia along its three principal axes aren’t equal, it does all sorts of crazy spinning and tumbling. You’ve seen this if you’ve played realistic flight simulators or Kerbal Space Program – once something starts to spin, it doesn’t just spin around one axis, it wanders all over the place (or think of how a frisbee wobbles in flight). I then put a blur on top of the whole thing and used the results to make a bump map.

I don’t think this will end up in the game – it’s too dizzying and doesn’t look ‘like anything’ – but it’s neat how you can get spiral waves, ‘up’ and ‘down’ regions (the green/yellow ones are spinning around one pole of the object, and the red/blue ones are just the flipped version of that), and so on, all from the tumbling of an irregular object.

Since I had a bit of a hard time figuring out how to do all this, I’ve decided to make a short tutorial about it.

The Framework

I’m not going to go into too much detail about basic fragment/vertex shaders, because that’s a whole tutorial on its own. I’m also going to be using three.js to simplify things a lot. If you need a refresher, it’s worth going through an introductory series on three.js and GLSL first.

The basic idea if you want to do some kind of simulation on the GPU is that you have to use textures to carry the relevant information. If you’re doing a fluid dynamics simulation, maybe you map red to the x velocity, green to the y velocity, and blue to the pressure. That won’t be enough on its own – you have to get the data back, so you can iterate on it. Fortunately, we can render into a texture, which we then can pass back into the shader.

So let’s create a few buffers to start:

var simBuffer = new THREE.WebGLRenderTarget( 256, 256, 
   { minFilter: THREE.LinearFilter, magFilter: THREE.NearestFilter, 
    format: THREE.RGBFormat } );

var backBuffer = new THREE.WebGLRenderTarget( 256, 256, 
   { minFilter: THREE.LinearFilter, magFilter: THREE.NearestFilter, 
    format: THREE.RGBFormat } );

var simScene = new THREE.Scene();
var displayScene = new THREE.Scene();

We’ll need to pass our fragment shader the data texture, so we define a uniform for it as well.

var simUniforms = {"tDiffuse" : { type: "t", value: simBuffer } };

The ‘simBuffer’ will be used for simulation data. The ‘backBuffer’ will be used as the first render target, so we aren’t rendering into the texture while we’re reading from it (that would create a nasty seam across the middle of the field). The scenes are all just containers for a single quad with the right shader material. Our render loop will look like this:

var renderer = new THREE.WebGLRenderer();
renderer.autoClear = false;

var camera = new THREE.Camera();
var bufferState = 0;
function render() {
   if (!bufferState) {
      renderer.render( simScene, camera, backBuffer, false );
   } else {
      renderer.render( simScene, camera, simBuffer, false );
   }
   bufferState = 1 - bufferState;

   renderer.render( displayScene, camera );
   requestAnimationFrame( render );
}

This tells three.js to render simScene into backBuffer, then swap the buffers around and display the result. We have to make sure not to clear the renderer, because our textures are render targets (so clearing the renderer would also clear out our data). The ‘bufferState’ variable is used to switch back and forth between the backBuffer and the simBuffer: after we fill the backBuffer with our data, we just tell the fragment shader to use that as the source next time and render into simBuffer instead. This should be more efficient than doing another render pass.
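
The ping-pong logic described above can be sketched in isolation. The names below mirror the ones used in this tutorial, but here they’re plain placeholder objects (not real three.js textures) so the swap can be seen on its own:

```javascript
// Placeholder stand-ins for the two render targets and the shader uniforms.
var simBuffer  = { name: "simBuffer"  };
var backBuffer = { name: "backBuffer" };
var simUniforms = { tDiffuse: { type: "t", value: simBuffer } };

var bufferState = 0;
function stepBuffers() {
   // Read from one texture, write into the other, then swap roles.
   var source = bufferState ? backBuffer : simBuffer;
   var target = bufferState ? simBuffer  : backBuffer;
   simUniforms.tDiffuse.value = source;
   bufferState = 1 - bufferState;
   return target;  // the buffer to pass as the render target this frame
}
```

Each call reads from the buffer that was written last frame and targets the other one, so the shader never samples the texture it is currently rendering into.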

We fill each scene with the relevant geometry, just enough to render out the next texture.

var simQuad = new THREE.Mesh(new THREE.PlaneGeometry(2,2),
                             new THREE.ShaderMaterial({
                                 uniforms: simUniforms,
                                 vertexShader: basicVertexShader,
                                 fragmentShader: simFragmentShader}));
simScene.add(simQuad);

var outQuad = new THREE.Mesh(new THREE.PlaneGeometry(2,2),
                             new THREE.MeshBasicMaterial({
                                 map: simBuffer }));
displayScene.add(outQuad);

This scene will basically run ‘simFragmentShader’ on the data stored in the simBuffer, and will (via the render loop) dump the result into the backBuffer. The display scene renders the contents of the simBuffer. I’ve kept it this way for simplicity in the tutorial, but in general you’ll want to do something to the data to make it look better, like using it to create a normal map. You’d also want to give the display material a set of uniforms that you change depending on bufferState (as written, this displays the same data two frames in a row because I’m not flipping which buffer it reads from). The other thing you’ll probably want to do is initialize the buffers with some texture so you have something interesting to simulate on. The following code loads the image ‘initial.png’ and renders it into simBuffer before everything else starts; I’m using the load callback to start up the render loop.

var initTexture = THREE.ImageUtils.loadTexture("initial.png", {}, function() {
   var initbg = new THREE.Mesh(
          new THREE.PlaneGeometry(2,2),
          new THREE.MeshBasicMaterial({map: initTexture}));
   var initScene = new THREE.Scene();
   initScene.add(initbg);
   renderer.render( initScene, camera, simBuffer, false );
   render();
});

The Shaders

First off, let’s just throw in a basic vertex shader. All I need it to do is pass some UV coordinates to the fragment shader. Because this is JavaScript, you basically either have to put the shader text into a bunch of quoted, concatenated strings or embed it in your HTML file and read the text from the DOM element. I’m going to give just the shader code without the attendant formatting for readability, so use your favorite method.

varying vec2 vUv;
void main() {
   vUv = uv;
   gl_Position = projectionMatrix *
      modelViewMatrix * vec4(position, 1.0);
}

The meat of the simulation is in the fragment shader. I’ll be using Euler’s equations for a tumbling body, which are:

dw1/dt = (I2 - I3) w2 w3 / I1

dw2/dt = (I3 - I1) w3 w1 / I2

dw3/dt = (I1 - I2) w1 w2 / I3

We’re going to map red, green, and blue to the three angular velocities respectively. Instead of a range of 0 to 1, I want a range of -2 to 2, so I’ll make some functions to remap the value range.

varying vec2 vUv;
uniform sampler2D tDiffuse;

vec4 toW(vec4 w) { return 4.0*w-vec4(2.0); }
vec4 fromW(vec4 w) { return (w+vec4(2.0))/4.0; }
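
As a quick sanity check (my own, not part of the original shader), here’s the same remap pair mirrored in JavaScript, confirming the two functions are inverses of each other:

```javascript
// toW maps a stored color channel in [0,1] to an angular velocity in [-2,2];
// fromW maps an angular velocity back into color space.
function toW(c)   { return 4.0 * c - 2.0; }
function fromW(w) { return (w + 2.0) / 4.0; }
```

Any value the texture can store survives a round trip, so no information is lost in the remapping (only values clamped outside [-2,2] are).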

I’m now going to get the local value and all the neighboring values (for the blur). I define a step ‘h’, which is the size of one pixel in UV coordinates (1.0/256.0), and use it to offset the UV at which I sample the data buffer. Being able to do this is what makes blurs and the like possible – basically, you can use it to compute numerical derivatives of the data. We’re not using that extensively here, but it’s worth mentioning.

I’m also defining the three moments of inertia for the Euler equations here. More generally, you could pass these in as uniforms, which would let you tune them in real time in the simulation.

void main() {
  float h=1.0/256.0;
  float I1=2.0, I2=3.5, I3=1.0;
  vec4 loc=toW(texture2D( tDiffuse, vUv ));
  vec4 lpx=toW(texture2D( tDiffuse, vec2(vUv.x+h,vUv.y) ));
  vec4 lmx=toW(texture2D( tDiffuse, vec2(vUv.x-h,vUv.y) ));
  vec4 lpy=toW(texture2D( tDiffuse, vec2(vUv.x,vUv.y+h) ));
  vec4 lmy=toW(texture2D( tDiffuse, vec2(vUv.x,vUv.y-h) ));

Now we’ll evaluate the differential equations themselves. We go from the differential equations to something discretized.

‘dw/dt = X’ becomes ‘w(t+dt) = w(t) + dt * X’

(You could use higher-order schemes too, but this will generally do.) So I want to compute that ‘X’ and then set my fragment color to w(t) + dt*X. I’m going to use a fixed timestep of 0.1 here; it seems to be numerically stable.

  float dt=0.1;
  vec4 dloc=vec4(0.0);

  dloc.x = loc.y*loc.z*(I2-I3)/I1;
  dloc.y = loc.z*loc.x*(I3-I1)/I2;
  dloc.z = loc.x*loc.y*(I1-I2)/I3;

That’s basically our on-site simulation right there. Of course, now I want to blur it to make it look smoother, so I’ll average it a bit with its neighbors (by the way, this corresponds to adding a ‘Laplacian’ term, a second spatial derivative, which turns this into a diffusion equation of sorts).

  dloc = dloc + 0.75*(lpx+lpy+lmx+lmy-4.0*loc)/2.0;

Now, let’s turn this back into something that fits in the texture’s color space and output it:

  loc = fromW(loc + dt*dloc);
  loc = clamp(loc, 0.0, 1.0);
  gl_FragColor = loc;
}

And that should be it!
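
If you want to sanity-check the tumbling math outside the shader, the same update is easy to run on the CPU. This is my own reference version (the function names are mine), using the same moments of inertia and forward-Euler step as the fragment shader above, without the neighbor blur. A nice property of Euler’s equations is that they conserve rotational kinetic energy exactly, so a forward-Euler integration should only drift slowly:

```javascript
// Same moments of inertia as in the fragment shader.
var I1 = 2.0, I2 = 3.5, I3 = 1.0;

// One forward-Euler step of Euler's equations for a tumbling rigid body.
// w is [w1, w2, w3], the angular velocities about the principal axes.
function eulerStep(w, dt) {
   var d1 = w[1] * w[2] * (I2 - I3) / I1;
   var d2 = w[2] * w[0] * (I3 - I1) / I2;
   var d3 = w[0] * w[1] * (I1 - I2) / I3;
   return [w[0] + dt * d1, w[1] + dt * d2, w[2] + dt * d3];
}

// Rotational kinetic energy: conserved exactly by the continuous equations,
// so its drift measures the integration error.
function energy(w) {
   return 0.5 * (I1 * w[0] * w[0] + I2 * w[1] * w[1] + I3 * w[2] * w[2]);
}
```

Stepping this with a small dt and watching energy(w) stay nearly constant is a quick way to convince yourself the discretization in the shader is behaving.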


I mentioned numerical derivatives briefly earlier. These are very important for doing GPU simulations of anything more than the bunch of coupled oscillators we’ve done here, so I’m including a basic ‘bestiary’ of differential operators. I’ll use the notation psi(x,y) to refer to the value of some field at the coordinates x,y; ‘h’ is again our pixel size in UV coordinates, and ‘dx’ is the physical spatial scale – how big each pixel is in physical units.

First the gradients. These are spatial derivatives that turn a scalar field into a vector. The gradient corresponds to the direction of fastest increase of the field. Because we’re going from something without direction (scalar) to something with direction (vector), the direction you use to evaluate the gradient is often important to the numerical stability of the algorithm if you are combining it with some other vector. Because of this, I include three definitions (forward, backward, and central).

For example, if you’re doing a differential equation with |grad psi|^2 in it, you want to use one forward and one backward gradient, or use the central gradient. This way you keep the symmetry of the equation. If on the other hand you had an advection equation, something with a term like v dot grad(psi), then for stability you might want to pick the gradient that is going in the opposite direction of ‘v’; using a central gradient would break the symmetry of the equations.

Forward grad(psi) ‘fgpsi’

fgpsi(x,y).x = (psi(x+h,y)-psi(x,y))/dx
fgpsi(x,y).y = (psi(x,y+h)-psi(x,y))/dx

Backward grad(psi) ‘bgpsi’

bgpsi(x,y).x = (psi(x,y)-psi(x-h,y))/dx
bgpsi(x,y).y = (psi(x,y)-psi(x,y-h))/dx

Central grad(psi) ‘cgpsi’

cgpsi(x,y).x = (psi(x+h,y)-psi(x-h,y))/(2.0*dx)
cgpsi(x,y).y = (psi(x,y+h)-psi(x,y-h))/(2.0*dx)

The second spatial derivative (the Laplacian) can be derived by combining two ‘half-step’ central gradients. These are used for diffusion equations and wave equations, and generally give a ‘blur’ or ‘spread uniformly to neighbors’ kind of effect. I include the 4-neighbor Laplacian here, but there’s also a 9-neighbor Laplacian that’s a bit smoother (though more expensive to compute).


lapl(x,y) = (psi(x+h,y) + psi(x-h,y) + psi(x,y+h) + psi(x,y-h) - 4.0*psi(x,y))/(dx*dx)
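
Here’s the same 4-neighbor stencil as a small JavaScript function (my own helper, using the dx² normalization). A handy correctness check: the Laplacian of psi = x² + y² is exactly 4, and this stencil reproduces that exactly for quadratic fields:

```javascript
// 4-neighbor (5-point) Laplacian of a 2D field stored as an array of rows,
// evaluated at grid indices (i, j) with physical spacing dx.
function lapl(psi, i, j, dx) {
   return (psi[i + 1][j] + psi[i - 1][j] +
           psi[i][j + 1] + psi[i][j - 1] -
           4.0 * psi[i][j]) / (dx * dx);
}
```

In the shader version you would fold the dx² factor into the diffusion coefficient, which is effectively what the blur term in the simulation above does.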
