Understanding and writing shaders for complex animations with Processing and GLGraphics
Last weekend I took part in a masterclass workshop organized by Processing Paris about shaders. The workshop was brilliantly led by v3ga (v3ga.net), with about 20 participants locked in a large room full of computers for 3 days.
A shader is a program that runs directly on the graphics card (GPU) of a computer. Using information sent by the processor (CPU), shaders are responsible for rasterizing and displaying every graphical element that appears on screen. The term shader comes from their first use on older systems, where they were mainly responsible for calculating levels of darkness on surfaces to create the illusion of depth. In modern graphics pipelines, however, they are in charge of all the rendering stages, from simple geometry and camera transformations to the final per-pixel color calculations.
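To make the "per-pixel color calculations" stage concrete, here is a minimal fragment shader (an illustrative sketch, not one of the workshop examples): it runs once for every pixel and writes a color that depends on the pixel's position. The `resolution` uniform is assumed to be sent from the CPU side, as shown later in this post.

```glsl
// Minimal GLSL fragment shader: executed once per pixel.
uniform vec2 resolution; // viewport size, sent by the sketch

void main() {
    // normalize the pixel coordinate to the 0..1 range
    vec2 uv = gl_FragCoord.xy / resolution;
    // output a simple gradient based on screen position
    gl_FragColor = vec4(uv.x, uv.y, 0.5, 1.0);
}
```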
Since they run on the GPU, shaders are very, very fast. However, being on the GPU also means they are not easy to work with. The language used to write shaders in OpenGL (the default rendering engine in Processing 2.0+) is GLSL, a far cry from Processing in terms of accessibility. This language is used both for vertex processing (geometry) and fragment processing (pixels). For this workshop, we used Processing 1.5.1 with the GLGraphics library: we wrote our Processing sketches on one side while, in a separate editor, we programmed the shaders that turned those sketches into pixels.
In some cases, it is better to do as much as possible with shaders. For example, instead of displaying an image in Processing and then manipulating it in a shader (or worse, in Processing), it is much more efficient to send that image as a texture to the GPU and do all the work with shaders. In the example above, the fragment shader (simple_frag.glsl) receives three values from Processing: mouse, resolution and time. Sending this kind of data from Processing to GLSL is simple:
```java
// sending mouse position
theShader.setVecUniform("mouse", float(mouseX), height - float(mouseY));

// sending screen resolution
theShader.setVecUniform("resolution", float(width), float(height));

// sending a texture
theShader.setTexUniform("texture", tex);
```
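For context, a minimal Processing 1.5.1 sketch using GLGraphics could wrap those calls like this. This is a sketch under assumptions: the shader file names and the image are hypothetical, and the GLSL files are assumed to sit in the sketch's data folder. `GLSLShader`, `GLTexture` and `GLConstants.GLGRAPHICS` come from the GLGraphics library.

```java
import processing.opengl.*;
import codeanticode.glgraphics.*;

GLSLShader theShader;
GLTexture tex;

void setup() {
  size(640, 480, GLConstants.GLGRAPHICS);
  // vertex and fragment shaders live in the sketch's data folder
  theShader = new GLSLShader(this, "simple_vert.glsl", "simple_frag.glsl");
  tex = new GLTexture(this, "image.png");
}

void draw() {
  theShader.start();
  theShader.setVecUniform("mouse", float(mouseX), height - float(mouseY));
  theShader.setVecUniform("resolution", float(width), float(height));
  theShader.setTexUniform("texture", tex);
  // draw a full-screen quad so the fragment shader runs on every pixel
  rect(0, 0, width, height);
  theShader.stop();
}
```

Everything drawn between `start()` and `stop()` goes through the shader, which is why a single full-screen rectangle is enough to hand the whole canvas over to the GPU.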
But it is not always a good idea to do everything in shaders. First of all, the syntax is not intuitive at all compared to Processing. There are also no debugging tools when writing for OpenGL: when a semicolon is missing or a value is out of range, the shader simply does not execute, without telling you where the error might be (which is a huge pain to deal with). And since shaders run directly on the hardware, it is not possible to run the app in a web browser with processing.js.
Having said that, shaders are really powerful and enable graphically complex programs to run smoothly on most hardware, a must when creating interactive systems. Offloading most of the work from the CPU helps reduce latency between a user action and the system's feedback, and makes the whole experience much more pleasant. For example, the topmost image is a screenshot of an app where all the pixel work is done in shaders. It runs at 200+ frames per second on four-year-old hardware, and is very reactive to mouse movements.
Also, since shaders are loaded dynamically, they can be modified or swapped for one another while the program is running. This makes it possible to code live and completely change what is rendered with no perceptible delay. I will explore these possibilities in a future article.
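The live-coding workflow mentioned above can be sketched very simply: since GLGraphics lets you construct a new GLSLShader at any time, re-reading the GLSL files from disk is enough to pick up edits without restarting the sketch. (Illustrative sketch, assuming the same hypothetical file names as earlier.)

```java
// Reload the shader from disk when 'r' is pressed, so edits to the
// GLSL files show up in the running sketch without a restart.
void keyPressed() {
  if (key == 'r') {
    theShader = new GLSLShader(this, "simple_vert.glsl", "simple_frag.glsl");
  }
}
```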
The following images are screenshots from Processing/GLSL apps I wrote with v3ga's workshop examples as references. Touch/hover an image to see the animation, and wait a few seconds: these GIFs are about 500 KB each. Since they are animated GIFs and not programs, they run much more slowly than they do on my machine. When I get the time I will rewrite them in WebGL (probably with three.js, by the way).
These last experiments with typography were made with an eye-tracker in mind. Shaders are particularly well suited to eye-tracking because they execute fast enough to keep up with a user's gaze across the screen. This setup will be installed soon, and will be the topic of a future article here. In the meantime, feel free to comment at the bottom of this post.
— The Processing library used in these examples glgraphics.sourceforge.net/
— An introduction to shaders in GLSL gamedev.net/…glsl_an_introduction/
— Projection matrix and camera references songho.ca/…/projectionmatrix.html
— Fantastic shaders running in WebGL with source code shadertoy.com/
— Understanding the implementation of shaders in Processing 2.0 trunk.processing.org/tutorials/pshader/