Particle System

This demo features an interactive particle system, where particles flow around according to some physics-inspired laws and interact with mouse/touch movements in real time. The goal here is not to simulate anything real, but rather to make something that is satisfying to play around with. A bit of digital art.

Simulation

We can simulate a simple gravitational field around the cursor by digging up Newton's law of universal gravitation:

F = G \frac{m_1 m_2}{r^2}

If we imagine that all particles have some small "mass", and the cursor has a very large one, we can ignore any gravitational effect that the particles have on each other, which conveniently turns the problem from a computationally hard one into a trivial one. All particles can now be processed independently in parallel, and each only needs to be aware of its own state and the cursor position. By inserting the equation above into another classical Newtonian equation, F = m a, we obtain an expression for the acceleration of a particle:

a = G \frac{m_{particle} m_{cursor}}{r^2 m_{particle}} = G \frac{m_{cursor}}{r^2}

where r is the distance from the particle to the cursor. From here, we can abstract away the gravitational constant and cursor mass and combine them into some arbitrary constant \alpha that we choose based on what looks good and feels fun — let's call it our artistic constant. Now, mass is literally out of the equation.
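Writing \alpha = G m_{cursor}, the acceleration each particle experiences is then simply:

a = \frac{\alpha}{r^2}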

Having obtained an expression for the acceleration of each particle at any point in time, what is left is simply to keep track of the position and velocity for each particle. We can express the situation with a simple differential equation with respect to time t:

a = \frac{dv}{d t} = \frac{d^2 p}{dt^2}

But since we are programming for the discrete world, we can spare ourselves the more rigorous calculus and solve it numerically, frame by frame, as follows:

a_i = \frac{\alpha}{r^2} \\ v_i = v_{i - 1} + \Delta t a_i \\ p_i = p_{i - 1} + \Delta t v_i

where \Delta t is the time delta between the current frame and the previous frame. And finally, in vector form:

\vec{r} = \vec{c} - \vec{p_{i - 1}} \\ \vec{a_i} = \alpha \vec{r} / \norm{\vec{r}}^3 \\ \vec{v_i} = \vec{v_{i - 1}} + \Delta t \vec{a_i} \\ \vec{p_i} = \vec{p_{i - 1}} + \Delta t \vec{v_i}

which is already clear enough to serve as our pseudocode. \vec{c} here is the position of the cursor at the time of computation of the frame. This is essentially the entire computation step, except we also add a little damping, as well as a clamping effect, for stability. The clamping effect also serves to give a little pulsating effect around the cursor, like a stronger force around the nucleus of an atom, keeping the fastest particles from being hurled away into oblivion past the cursor at the same speed at which they came in. (Forgive the handwavy explanation, but today we are artists.)
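For concreteness, here is a CPU-side sketch of the full per-particle update in TypeScript. The damping factor and speed clamp are illustrative assumptions (the demo's actual values and form may differ), and in the real demo this logic runs in a shader, as described below.

interface Particle {
  px: number; py: number; // position
  vx: number; vy: number; // velocity
}

// One simulation step for a single particle. `alpha` is the artistic
// constant, `delta` the time since the previous frame.
function step(p: Particle, cursorX: number, cursorY: number,
              delta: number, alpha: number): void {
  const rx = cursorX - p.px;
  const ry = cursorY - p.py;
  const dist = Math.hypot(rx, ry);

  // a = alpha * r / |r|^3, i.e. magnitude alpha / r^2, directed at the cursor
  const ax = (alpha / (dist * dist * dist)) * rx;
  const ay = (alpha / (dist * dist * dist)) * ry;

  // Velocity update with a little damping (factor chosen for illustration)
  p.vx = (p.vx + delta * ax) * 0.99;
  p.vy = (p.vy + delta * ay) * 0.99;

  // Clamp the speed so close passes don't hurl particles off to infinity
  const maxSpeed = 2.0; // illustrative
  const speed = Math.hypot(p.vx, p.vy);
  if (speed > maxSpeed) {
    p.vx *= maxSpeed / speed;
    p.vy *= maxSpeed / speed;
  }

  p.px += delta * p.vx;
  p.py += delta * p.vy;
}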

WebGL implementation

The system supports particle counts on the order of millions, and on powerful hardware hundreds of millions. To achieve this, not only the rendering but also the computation is done on the GPU, in WebGL. Rendering points in WebGL is straightforward using the pipeline as intended: The vertex shader calculates the position of each point and the fragment shader draws it with the desired colour. The particle simulation step, however, requires some additional thought. For each frame, we want each particle to remember its previous position (and maybe some additional parameters like velocity or acceleration), and update itself for the current frame. But in WebGL, this is not possible in a single render pass: The vertex shader does not have a way to keep track of such state.

The solution is to have two separate render passes, one for computation and one for rendering. In our compute pass, we essentially need to trick WebGL into thinking that we are drawing something visually useful, when in fact the output is just data that we never show. Instead of rendering it to the screen, we render it into a texture that we then give as input to our second pass.

The compute pass consists of a simple unit quad, rendered directly onto the render target. We make the render target contain at least as many pixels as there are particles in our system, so that each pixel corresponds to one particle. Then we clone the render target, so that we can swap between the two on each frame: One is the input, the other is the output, and the next frame they swap places. This way, we can feed the previous frame's state into the shader of the current compute pass, which reads the previous state, makes its changes and outputs the new state. The render pass will then use the new state as input to render the particles.
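In WebGL2, such a pair of render targets might be set up roughly as in the sketch below, assuming gl is the WebGL2RenderingContext for the canvas. The names createStateTarget, texWidth and texHeight are made up for illustration, and error handling is omitted.

// Rendering into 32-bit float textures requires this extension in WebGL2.
gl.getExtension("EXT_color_buffer_float");

interface StateTarget {
  texture: WebGLTexture;
  framebuffer: WebGLFramebuffer;
}

// One RGBA32F pixel per particle, holding (p_x, p_y, v_x, v_y).
function createStateTarget(gl: WebGL2RenderingContext,
                           width: number, height: number): StateTarget {
  const texture = gl.createTexture()!;
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA32F, width, height, 0,
                gl.RGBA, gl.FLOAT, null);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);

  const framebuffer = gl.createFramebuffer()!;
  gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                          gl.TEXTURE_2D, texture, 0);
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);

  return { texture, framebuffer };
}

// Two identical targets; each frame one is read, the other written,
// and then they trade places.
let readState = createStateTarget(gl, texWidth, texHeight);
let writeState = createStateTarget(gl, texWidth, texHeight);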

Compute pass swapping

Fig. 1: A diagram of how two consecutive frames are rendered, using the data textures in between. Between frames, the textures swap places.

We can choose the datatype of the data texture pixels, so we are not limited to single-byte precision per colour channel, but we are still limited by the number of channels. This demo uses the RGBA format with 32-bit floating point precision. In other words, there's not a lot of room on board: We must make the particle state fit into a 4-dimensional vector with f32-typed components. Luckily, that's all the room we need. Since the particles exist in two dimensions, we can encode each particle's position and velocity into its output pixel as follows:

(r, g, b, a) = (p_x, p_y, v_x, v_y).

Since acceleration is recomputed for every frame and only depends on the particle and cursor positions, this state is all we need to implement the simulation described above. In actual GLSL shader code, our compute pass might look roughly like this:

// One fragment per particle; the fragment's coordinate is the particle's
// index into the state texture.
vec4 state = texelFetch(prevState, ivec2(gl_FragCoord.xy), 0);
vec2 p = state.rg;
vec2 v = state.ba;

// Acceleration towards the cursor: a = alpha * r / |r|^3
vec2 r = mousePos - p;
float dist = length(r);
vec2 a = alpha / (dist * dist * dist) * r;

v += delta * a;
p += delta * v;

// With GLSL ES 3.00 (needed for texelFetch), the result goes to a
// user-declared output, e.g. `out vec4 newState;`, instead of gl_FragColor.
newState = vec4(p, v);

And apart from some boilerplate, as well as the damping and clamping mentioned earlier, that's how it looks.
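On the host side, running the compute pass each frame might look something like the sketch below. It builds on the ping-pong setup sketched earlier; computeProgram, quadVao and the uniform locations (obtained with getUniformLocation) are assumed to have been created during initialisation.

function runComputePass(delta: number, mouseX: number, mouseY: number): void {
  // Draw into the write target, one fragment per particle.
  gl.bindFramebuffer(gl.FRAMEBUFFER, writeState.framebuffer);
  gl.viewport(0, 0, texWidth, texHeight);

  gl.useProgram(computeProgram);
  gl.activeTexture(gl.TEXTURE0);
  gl.bindTexture(gl.TEXTURE_2D, readState.texture); // previous frame's state
  gl.uniform1i(prevStateLoc, 0);
  gl.uniform1f(deltaLoc, delta);
  gl.uniform1f(alphaLoc, alpha);
  gl.uniform2f(mousePosLoc, mouseX, mouseY);

  // The unit quad covers the whole render target, so the compute fragment
  // shader runs once per pixel, i.e. once per particle.
  gl.bindVertexArray(quadVao);
  gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);

  // Swap: this frame's output becomes next frame's input.
  [readState, writeState] = [writeState, readState];
}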

The next step is to actually render the particles. The easiest way is to use WebGL point primitives. Each point corresponds to a single vertex, processed by a vertex shader that sets the position and on-screen size of the point; the fragment shader is then run for each pixel inside that area. Normally, the position would be given as an attribute that the vertex shader transforms in some deterministic manner, but in our case we ignore the position attribute and instead read the particle state for the current vertex from our newly updated data texture, as follows:

#version 300 es
// GLSL ES 3.00 is needed for gl_VertexID and texelFetch.
precision highp float;

uniform int texWidth;        // width of the state data texture
uniform float pointSize;
uniform sampler2D currState; // state texture written by the compute pass

flat out vec2 v;             // velocity, passed on to the fragment shader

void main() {
    // Map the vertex index to this particle's pixel in the state texture.
    ivec2 coord = ivec2(gl_VertexID % texWidth, gl_VertexID / texWidth);
    vec4 state = texelFetch(currState, coord, 0);

    gl_PointSize = pointSize;
    // Positions are stored directly in clip space.
    gl_Position = vec4(state.rg, 0.0, 1.0);
    v = state.ba;
}

And that's it! The particle system will now work, as long as the fragment shader outputs some visible colour. To make things more interesting (and prettier), we choose this colour based on the speed of the particle, so that faster particles will be rendered with a brighter colour. To achieve this, we output the velocity as a varying to the fragment shader. By making the particles slightly transparent, we get a nicer, smoke-like effect when the number of points is large enough for the resolution:

Smoke-like effect with high point density and low opacity

Fig. 2: Smoke-like effect when setting a high point density but a low opacity
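The host-side draw for the render pass, with one possible blending setup for the translucent, additive look, might be sketched as follows. Again, renderProgram, the uniform locations and particleCount are assumed to exist from initialisation, and the exact blend function is an illustrative choice rather than the demo's.

// Render to the canvas (the default framebuffer), not to a data texture.
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
gl.viewport(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight);
gl.clear(gl.COLOR_BUFFER_BIT);

// Additive blending: overlapping translucent points brighten each other.
gl.enable(gl.BLEND);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE);

gl.useProgram(renderProgram);
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, readState.texture); // the state just computed
gl.uniform1i(currStateLoc, 0);
gl.uniform1i(texWidthLoc, texWidth);
gl.uniform1f(pointSizeLoc, pointSize);

// One point primitive per particle.
gl.drawArrays(gl.POINTS, 0, particleCount);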

As playing around with these parameters is meant to be part of the user experience, the demo exposes a few more as settings: