\( \newcommand{\vecII}[2]{\left[\begin{array}{c} #1\\#2 \end{array}\right]} \newcommand{\vecIII}[3]{\left[\begin{array}{c} #1\\#2\\#3 \end{array}\right]} \newcommand{\vecIV}[4]{\left[\begin{array}{c} #1\\#2\\#3\\#4 \end{array}\right]} \newcommand{\Choose}[2]{ { { #1 }\choose{ #2 } } } \newcommand{\matIIxII}[4]{\left[ \begin{array}{cc} #1 & #2 \\ #3 & #4 \end{array}\right]} \newcommand{\matIIIxIII}[9]{\left[ \begin{array}{ccc} #1 & #2 & #3 \\ #4 & #5 & #6 \\ #7 & #8 & #9 \end{array}\right]} \)

Reading on Accumulation: Anti-Aliasing

Again, Three.js can do anti-aliasing for us, automatically, so read this for understanding rather than coding.

In Three.js, anti-aliasing is a feature of the renderer. See this excellent side-by-side demonstration of anti-aliasing in three.js. Then read the information below to understand what the code has done and how it did it.
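
For reference, here is roughly what requesting anti-aliasing looks like in a Three.js program; the setup code around the option is just for illustration:

    // Request an anti-aliased drawing buffer (typically multisampled by the GPU)
    // when the renderer -- and thus the WebGL context -- is created.
    var renderer = new THREE.WebGLRenderer({ antialias: true });
    renderer.setSize(window.innerWidth, window.innerHeight);
    document.body.appendChild(renderer.domElement);

Note that the antialias option asks the graphics hardware to do the work (typically by multisampling), rather than using the accumulation-buffer technique described below, but the visual goal is the same.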

The Accumulation Buffer

There are a number of effects that can be achieved if you can draw a scene more than once. You can do this by using the accumulation buffer. We will focus on anti-aliasing.

In OpenGL, you can request an accumulation buffer with

    glutInitDisplayMode(... | GLUT_ACCUM);

Conceptually, we're computing an image as a weighted sum (the summing is done in the accumulation buffer) of a series of frames: \[ \fbox{I} = f_1\fbox{frame 1}+f_2\fbox{frame 2}+\ldots+f_n\fbox{frame n}\]

That is, the final image, $I$, is a weighted sum of $n$ frames, where the fractional contribution of frame $i$ is $f_i$.
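
The weights are normally chosen to sum to 1, so the overall brightness is preserved. For a plain average of $n$ frames, each frame gets equal weight, which is what the anti-aliasing code below uses: \[ f_i = \frac{1}{n}, \qquad \sum_{i=1}^{n} f_i = 1. \]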

We initialize the accumulation buffer to zero, then add in each frame, scaled by its weight $f_i$, and then copy the result to the frame buffer. This is accomplished by the following sequence of steps:

  1. Clear the accumulation buffer with
    glClearAccum(r, g, b, a); // fill in the numbers you want, often zero
    glClear(GL_ACCUM_BUFFER_BIT);
    

    This is just like clearing the color buffer or the depth buffer. The accumulation buffer is like a running sum, though, so you will usually initialize it to zero.

  2. Add a frame into the accumulation buffer with:
    glAccum(GL_ACCUM, fi);  // fi is this frame's weight
    

    This function accepts other operations as its first argument (see the man page), but GL_ACCUM and GL_RETURN (step 3) are the only ones we'll use today.

  3. Copy the accumulation buffer to the frame buffer:
     glAccum(GL_RETURN, 1.0); 

    The second argument is a multiplicative factor for the whole buffer. We'll always use 1.0.

Anti-Aliasing

An important use of the accumulation buffer is in anti-aliasing. Aliasing is the technical term for jaggies. It comes about because we impose an arbitrary pixelation (rasterization) on a continuous scene. For example, suppose we draw a roughly 2-pixel-thick blue line at about a 30-degree angle to the horizontal. If we think about the rasterizing of that line, the situation might look like this:

A line at an angle partially covers certain pixels.

What's wrong with that? The problem comes with assigning colors to pixels. If we only make blue the pixels that are entirely covered by the line, we get something like this:

Coloring only those pixels that are completely covered by the line makes for a jaggy, thin line.

The line looks very jaggy and also very thin. It doesn't get better if we make blue the pixels that are covered by any part of the line:

Coloring pixels that are covered by any part of the line makes for a jaggy, thick line.

What we want is to color the pixels that are partially covered by the line with a mixture of the line color and the background color, proportional to the amount that the line covers the pixel.
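
In symbols: if a fraction $\alpha$ of a pixel's area is covered by the line, the pixel should be colored \[ C_\mathit{pixel} = \alpha\, C_\mathit{line} + (1-\alpha)\, C_\mathit{background}. \]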

So, how can we do that? The idea of anti-aliasing using the accumulation buffer is:

  • The scene gets drawn multiple times with slight perturbations (called jittering), so that
  • Each pixel is a local average of the images that intersect it.

Generally speaking, you have to jitter by less than one pixel.

Here are two pictures; the one on the left lacks anti-aliasing, and the one on the right uses anti-aliasing.

Teapot without anti-aliasing (left) and with anti-aliasing (right). If you focus closely, the one on the left has sharper edges with jaggies, but if you relax, the one on the right looks better.

The trouble with anti-aliasing by jittering the objects is that, because of the mathematics of projection (see the relation below),

  • objects that are far from the camera jitter too little, and
  • objects that are close to the camera jitter too much.
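
The reason is the perspective divide: a point's screen position is proportional to its world position divided by its depth, so a world-space jitter of $\Delta x$ applied to an object at depth $z$ moves its image by roughly \[ \Delta x_\mathit{pixels} \propto \frac{\Delta x_\mathit{world}}{z}. \] The same world-space jitter therefore moves a distant object by less than a pixel and a nearby object by several pixels.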

Better Anti-Aliasing

A better technique than jittering the objects is to jitter the camera, or more precisely, to modify the frustum just a little so that the image falls on slightly different pixels. Again, we jitter by less than one pixel.

Here's a figure that may help:

Moving the frustum can do anti-aliasing.

The red and blue cameras differ only by an adjustment to the location of the frustum. The center of projection (the big black dot) hasn't changed, so all the rays still project to that point. The projection rays cross the two frustums at slightly different pixel locations, though, so by averaging the resulting images we can anti-alias the scene.

How much should the two frustums differ, though? By less than one pixel. How can we move them by that amount? We only have control over left, right, top and bottom, and these are measured in world coordinates, not pixels. We need a conversion factor.

We can find a conversion factor in a simple way: the width of the frustum in pixels is just the width of the window (more precisely, the viewport or canvas), while the width of the frustum in world coordinates is just $\mathit{right}-\mathit{left}$. Therefore, the adjustment is: \[ \Delta x_\mathit{units} = \Delta x_\mathit{pixels} \frac{\mathit{right}-\mathit{left}}{\mathit{window~width}} \]
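
For example (with made-up numbers): if the window is 500 pixels wide and the frustum is 2 world units wide, a half-pixel jitter corresponds to \[ \Delta x_\mathit{units} = 0.5 \times \frac{2}{500} = 0.002 \text{ world units}. \]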

Here's the C code, adapted from the OpenGL Programming Guide. We won't need to do this in Three.js, so there's no need to convert to JavaScript.

// Set up a projection whose frustum is jittered by (pixdx, pixdy) pixels.
// Assumes left, right, bottom, top, near, and far are global frustum parameters.
void accCamera(GLfloat pixdx, GLfloat pixdy) {
    GLfloat dx, dy;
    GLint viewport[4];
    glGetIntegerv(GL_VIEWPORT, viewport);

    GLfloat windowWidth=viewport[2];
    GLfloat windowHeight=viewport[3];
    GLfloat frustumWidth=right-left;
    GLfloat frustumHeight=top-bottom;
    
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    dx = pixdx * frustumWidth/windowWidth;
    dy = pixdy * frustumHeight/windowHeight;
    printf("world delta = %f %f\n",dx,dy);
    glFrustum(left+dx, right+dx, bottom+dy, top+dy, near, far);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}

The important things to notice about that code are

  • The pixdx and pixdy values are the jitter amounts (distances) in pixels (sub-pixels, actually).
  • The frustum is altered in world coordinates.
  • Therefore, dx and dy are computed in world coordinates corresponding to the desired distance in pixels.

Here's how we might use that. Again, this is C code; we won't need to do this in JavaScript because Three.js will take care of this for us.

// Render the scene numJitters times, each with a different sub-pixel camera
// jitter, and average the results in the accumulation buffer.
void smoothDisplay() {
    int jitter;
    int numJitters = 8;
    glClear(GL_ACCUM_BUFFER_BIT);
    for(jitter=0; jitter < numJitters; jitter++) {
      accCamera(jitterTable[jitter][0],
                jitterTable[jitter][1]);
      glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
      drawObject();
      glAccum(GL_ACCUM, 1.0/numJitters);
    }
    glAccum(GL_RETURN, 1.0);
    glFlush();
    glutSwapBuffers();
}

This does just what you think:

  • We draw the image 8 times, each time adjusting the pixel jitter.
  • Each drawing has equal weight of 1/8.

There are lots of ways to imagine how the sub-pixel jitter distances might be chosen. The domino pattern seems like a good idea for 8 samples.

Jittering in a domino pattern seems like a good idea.

However, a paper on the subject argues for the following offsets, which I have used in the past. One possible explanation is that irregular offsets avoid the artifacts that a regular pattern can produce.

// From the OpenGL Programming Guide, first edition, table 10-5
var jitterTable = [
    [0.5625, 0.4375],
    [0.0625, 0.9375],
    [0.3125, 0.6875],
    [0.6875, 0.8125],
    [0.8125, 0.1875],
    [0.9375, 0.5625],
    [0.4375, 0.0625],
    [0.1875, 0.3125]
];
Jittering in this pattern is recommended.

Here's a red teapot, with and without that kind of anti-aliasing, from an earlier version of this course:

Red teapot, with (on the left) and without (on the right) the recommended frustum-jitter anti-aliasing.

Notice the difference in quality between the two images.

This better approach to anti-aliasing works regardless of how far the object is from the center of projection, unlike the object-jitter we did before. Furthermore, we have a well-founded procedure for choosing the jitter amount, not just trial and error.

Motion Blur

Another use of the accumulation buffer is to sum a series of images in which an object is moving. This can simulate the blur that occurs with moving objects:

A series of images of a moving object.

You can fade the older images by giving them smaller coefficients.

Here's an example of the output from an OpenGL program done like this:

A teapot falling off a table.

The problem with using the accumulation buffer for large-motion motion blur is that we really want to turn all of the coefficients up, so that the first image doesn't fade away and the last isn't too pale. But then the coefficients sum to more than 1, which is mathematically nonsense and also doesn't work in practice (the table turns blue!). Nevertheless, for small-motion blur, it works pretty well.
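
WebGL has no accumulation buffer, but if you wanted to play with the fading-coefficients idea in Three.js, one rough stand-in is to draw several copies of the moving object along its recent path, with older copies made more transparent. This is only an illustrative sketch, not the accumulation-buffer method; the names movingMesh and scene are assumed to exist in your program:

    // Draw fading "ghost" copies of a moving object; older copies are fainter,
    // like the smaller coefficients on older frames.
    var numCopies = 5;
    for (var i = 1; i <= numCopies; i++) {
        var ghost = movingMesh.clone();
        ghost.material = movingMesh.material.clone(); // don't share the material
        ghost.material.transparent = true;
        ghost.material.opacity = 1.0 - i / (numCopies + 1); // older => fainter
        ghost.position.x = movingMesh.position.x - 0.1 * i;  // step back along the motion
        scene.add(ghost);
    }

In a real animation you would update or recreate these copies every frame as the object moves, so the trail follows it.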