\( \newcommand{\Choose}[2]{{{#1}\choose{#2}}} \newcommand{\vecII}[2]{\left[\begin{array}{c} #1\\#2 \end{array}\right]} \newcommand{\vecIII}[3]{\left[\begin{array}{c} #1\\#2\\#3 \end{array}\right]} \newcommand{\vecIV}[4]{\left[\begin{array}{c} #1\\#2\\#3\\#4 \end{array}\right]} \newcommand{\matIIxII}[4]{\left[ \begin{array}{cc} #1 & #2 \\ #3 & #4 \end{array}\right]} \newcommand{\matIIIxIII}[9]{\left[ \begin{array}{ccc} #1 & #2 & #3 \\ #4 & #5 & #6 \\ #7 & #8 & #9 \end{array}\right]} \)

Reading on Transparency

Three.js makes a lot of this very easy. Nevertheless, it's important to understand the underlying implementation, even if the OpenGL code below is of no practical use. Read this for understanding, not for code details.

Alpha

The color of an object or material can have a fourth component, called alpha. This is notated as the RGBA system or, occasionally, RGB$\alpha$.

The alpha component has no fixed meaning, but we will see today what meaning it almost always has, namely the opacity of the material: an alpha of 0 means fully transparent (invisible), and an alpha of 1 means fully opaque.

An alpha buffer is available on the graphics card and is part of the frame buffer. Three.js will take care of requesting an alpha buffer for us.
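In Three.js, you normally don't manipulate alpha directly; instead, you set a couple of properties on the material. Here is a minimal sketch (the color and opacity values are just examples):

  // A half-transparent blue material: "transparent" tells Three.js to
  // treat the object as translucent; "opacity" supplies the alpha value.
  const material = new THREE.MeshPhongMaterial({
      color: 0x0000ff,
      transparent: true,
      opacity: 0.5
  });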

Blending

Given the pipeline model, we understand that at some moment during the rendering process, some of our objects have been drawn and exist only in the frame buffer and some of our objects have not yet been drawn. So, there is a time when the rendering of the next object is being combined with the rendering of some previous object. In the usual case, the new object's pixels overwrite the old pixels.

In general, though, OpenGL allows you to blend the two sets of pixels in the following way. The pixels already in the frame buffer are known as the destination pixels, and a particular pixel has color $(R_d,G_d,B_d,A_d)$. The new pixels are called the source pixels, and a particular one has color $(R_s,G_s,B_s,A_s)$. You can choose the blending factors, $s$ and $d$, so that the combined color is computed as: \[ \vecIV{R}{G}{B}{A} = d\vecIV{R_d}{G_d}{B_d}{A_d} + s\vecIV{R_s}{G_s}{B_s}{A_s} \]

The result components are clamped to the range $[0,1]$. The $s$ and $d$ factors are given to OpenGL using a constant from the following list, most of which we will ignore.
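To make the arithmetic concrete, here is a tiny sketch in plain JavaScript (blendPixel is a hypothetical helper for illustration, not an OpenGL or Three.js call):

  // Blend one source pixel into one destination pixel. Colors are
  // arrays [R, G, B, A] with components in [0, 1]; s and d are the
  // blending factors. Each result component is clamped to [0, 1].
  function blendPixel(src, dst, s, d) {
      return src.map((c, i) => Math.min(1, Math.max(0, d * dst[i] + s * c)));
  }

  // With s = 1 and d = 0 (the defaults, as we will see below), the
  // source simply overwrites the destination:
  blendPixel([1, 0, 0, 1], [0, 0, 1, 1], 1, 0);  // => [1, 0, 0, 1]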

GL_ZERO
GL_ONE
GL_SRC_COLOR
GL_ONE_MINUS_SRC_COLOR
GL_SRC_ALPHA
GL_ONE_MINUS_SRC_ALPHA
GL_DST_ALPHA
GL_ONE_MINUS_DST_ALPHA
GL_DST_COLOR
GL_ONE_MINUS_DST_COLOR
GL_SRC_ALPHA_SATURATE
GL_CONSTANT_COLOR
GL_ONE_MINUS_CONSTANT_COLOR
GL_CONSTANT_ALPHA
GL_ONE_MINUS_CONSTANT_ALPHA

Note that any of these that need the destination alpha will require an alpha buffer. You also need to enable blending and choose the factors:

glEnable(GL_BLEND);                              /* turn blending on */
glBlendFunc(source_factor, destination_factor);  /* choose s and d */

The default blend function is

glBlendFunc(GL_ONE, GL_ZERO);

which just replaces (overwrites) the destination with the source. See the documentation for glBlendFunc for more information. However, we will quote one important sentence from that man page:

Transparency is best implemented using blend function

  (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)

with primitives sorted from farthest to nearest.

Three.js sets this up for us by default, but you have control of the blending functions and weights. See the documentation (such as it is) on Material.
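For example, here is a hedged sketch of setting the blending factors yourself on a Three.js material; the property and constant names below are the Three.js equivalents of the OpenGL constants above:

  // The Three.js analog of glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA).
  material.blending = THREE.CustomBlending;
  material.blendEquation = THREE.AddEquation;        // the default
  material.blendSrc = THREE.SrcAlphaFactor;          // s = source alpha
  material.blendDst = THREE.OneMinusSrcAlphaFactor;  // d = 1 - source alpha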

Let's try to understand the first half of the quoted advice above.

So, we use the alpha of the source object for the source factor and the complement for the destination. (By complement, I mean the rest of the whole, so the complement of $f$ is $1-f$.) In the next section, we'll try to understand the second half of that sentence.
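To see the first half in action, consider drawing a half-opaque red source $(1,0,0,0.5)$ over an opaque blue destination $(0,0,1,1)$. We get $s=0.5$ and $d=1-0.5=0.5$, so:

\[ \vecIV{R}{G}{B}{A} = 0.5\vecIV{0}{0}{1}{1} + 0.5\vecIV{1}{0}{0}{0.5} = \vecIV{0.5}{0}{0.5}{0.75} \]

which is the purple you'd expect from looking at blue through half-transparent red glass.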

First, you should try this tutor:

transparency tutor

This tutor lets you adjust the alpha values for three planes, drawn from furthest to nearest (in keeping with the advice from the man page). To see them in the reverse order, use the orbit controls to look at them from the other side.

Hidden Surface Elimination

Suppose we render a scene with surfaces that overlap or even interpenetrate, such as a ball resting on a plane.

Here is the demo:

ball on a plane

How does a graphics system determine which color to use for any pixel? There are two major algorithms: depth sort, which is object-based, and depth buffer, which is pixel-based.

Depth Sort

The Depth Sort algorithm is sometimes called the painter's algorithm: imagine an artist painting a scene with oil paints. She would paint the farther stuff first (say, the table), then paint the ball right on top of the paint of the table.

This algorithm determines which object is farthest from the camera, draws that first, then the next, and so forth. Since the nearer stuff always overwrites the farther stuff, this works well. But there are two problems: sorting all the objects by depth takes time, and interpenetrating objects have no consistent depth order, since each is partly in front of the other.

To handle the latter issue, the algorithm sometimes breaks up objects into smaller pieces that don't interpenetrate, just so that it can sort them by distance. If we take that to its logical extreme, and re-organize our thinking, we come to the next algorithm.

Depth Buffer

The Depth Buffer algorithm is also called the Z-buffer algorithm, because Z is the dimension of depth (distance from the camera).

This algorithm uses extra storage so that, for each pixel, it can keep track of the depth of that pixel. (This buffer needs to be initialized to some maximum value at the beginning of rendering.) Whenever we consider drawing a pixel, we first compute the new depth and compare it to the old depth (looking it up in the depth buffer). If the new depth is less, we update both the color buffer and the depth buffer. Computing the depth is easy, because we have the original $(x,y,z)$ coordinates of the object at the beginning of the transformation process, and we maintain all of them to the end.
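Here is the per-pixel logic as a short sketch in JavaScript; the buffers and the drawFragment helper are hypothetical, purely for illustration, with depth values in $[0,1]$ and smaller meaning nearer:

  // Hypothetical frame-buffer storage, one entry per pixel.
  const W = 800, H = 600;
  const colorBuffer = new Array(W * H).fill([0, 0, 0, 1]);  // cleared to black
  const depthBuffer = new Float32Array(W * H).fill(1.0);    // cleared to max depth

  // Consider drawing one candidate pixel (fragment) at (x, y).
  function drawFragment(x, y, color, depth) {
      const i = y * W + x;
      if (depth < depthBuffer[i]) {  // nearer than what's already there?
          colorBuffer[i] = color;    // yes: update the color...
          depthBuffer[i] = depth;    // ...and remember the new depth
      }                              // no: discard the fragment
  }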

Let's take an example, drawing a blue ball on a brown table. Suppose that the pixel in the center of the window is blue because it's part of the ball, but if the ball weren't there, it would be brown, because the table also projects to that pixel. Here's how it works:

Now, there are two possibilities: we draw the ball first (and the table second) or vice versa. If the table is drawn first, the ball's pixel is nearer than the recorded table depth, so it overwrites the table's color and records its own, smaller depth. If the ball is drawn first, the table's pixel is farther than the recorded ball depth, so it is simply discarded.

So, this algorithm does the right thing regardless of the order that things are drawn and their distance from the camera.

Depth in OpenGL

OpenGL uses the depth buffer algorithm, AKA the $z$ buffer algorithm. Without Three.js, you would have to

  glutInitDisplayMode(... | GLUT_DEPTH);  /* request a depth buffer */
  glEnable(GL_DEPTH_TEST);                /* turn the depth test on */

Then, in your display function, you have to:

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);  /* clear color and depth */

This is what initializes the buffer to the maximum value (1.0).

Again, Three.js does this for you. We'll look at what this means, though.

One more detail: the depth buffer is only updated if the depth mask (DepthMask) is true. This is the default, but you can turn it off in the Three.js WebGLRenderer and Material with:

renderer.setDepthWrite( boolean );
material.depthWrite = boolean;

Finally, the depth buffer algorithm is only used if the depth test (DEPTH_TEST) is true. This is also the default, but you can turn it off in the Three.js WebGLRenderer and Material with:

renderer.setDepthTest( boolean );
material.depthTest = boolean;

(Three.js has both because it's possible in the scene object to set a parameter to override the material, in which case the renderer's values are used. The default is to use the setting in the material.)
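For example, here is a hedged sketch of a typical translucent material that still tests against the depth buffer but doesn't write to it:

  // Translucent material: test depth (so opaque things in front still
  // hide it) but don't write depth (so things behind it can still be drawn).
  const glassy = new THREE.MeshPhongMaterial({
      color: 0x88ccff,
      transparent: true,
      opacity: 0.3
  });
  glassy.depthTest = true;    // the default
  glassy.depthWrite = false;  // don't block objects drawn later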

A simple and useful demo is the following, which draws two quads that occupy the same space. By occupy the same space, I don't mean just that their 2D projections overlap, I mean that in the 3D world, their volumes overlap. (Since they are planes, their volumes are flat, but what I mean is that they are coincident in places.)

same depth

Because OpenGL retains depth information through projection (that's the $z$ coordinate), if the projections of two things overlap, OpenGL can still tell which one is in front. However, if the volumes coincide, it can't tell which one is in front because, in fact, neither one is. Therefore, if the depth test is enabled, OpenGL will make the decision based on the depth buffer, where tiny roundoff errors may differ from pixel to pixel, so that sometimes it decides the red one is in front and sometimes the green one. Thus, we get a speckling effect, often called z-fighting. If you turn off the depth test, one plane (the one drawn later) always wins. (Try moving the camera a little from left to right.)

Note that turning off the depth test isn't a really practical way to resolve this speckling issue. It's usually best to avoid the situation by putting the planes at slightly different distances. However, since the depth test is a property of the material you apply, you have a lot more control than you might think.
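For example, Three.js materials expose OpenGL's polygon offset, which nudges a surface's depth values without moving its geometry. A hedged sketch (greenMaterial here is a hypothetical material for one of the two quads):

  // Pull the green quad slightly toward the camera in depth only, so it
  // reliably wins the depth test against the coplanar red one.
  greenMaterial.polygonOffset = true;
  greenMaterial.polygonOffsetFactor = -1;  // negative values bring it nearer
  greenMaterial.polygonOffsetUnits = -1;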

Depth and Transparency

The depth buffer algorithm has real trouble with transparency. Why?

If you update the depth buffer when you draw a transparent object, then an opaque object that is drawn later but is farther won't be drawn.

Pause to make sure we understand that, because it's dense. In fact, let's go back to our ball and table example, but now suppose the ball is partially transparent, which you can do using that demo.

If the ball is drawn first, its (nearer) depth is written into the depth buffer, so when the table is drawn later, its pixels fail the depth test and are discarded. So instead of seeing through the partially transparent ball to the brown table, we see through the ball to whatever color the framebuffer was cleared to at the beginning.

The only way to really win is to employ the painter's algorithm, which means sorting all the objects by their depth and drawing them farthest to nearest. Three.js does this by default. See the sortObjects property of the Three.js WebGLRenderer.

Thus, the basic approach is:

draw all the opaque objects first, with the depth test and depth mask enabled
sort the transparent objects from farthest to nearest
draw the transparent objects in that order, with blending enabled and (typically) the depth mask off
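Putting this together, here is a hedged sketch of how it looks in Three.js (tableGeometry and ballGeometry are hypothetical; the scene setup is omitted):

  // Opaque object: default depth behavior.
  const table = new THREE.Mesh(tableGeometry,
      new THREE.MeshPhongMaterial({ color: 0x8b4513 }));

  // Transparent object: Three.js draws it after the opaque objects,
  // sorted with the other transparent objects from farthest to nearest.
  const ball = new THREE.Mesh(ballGeometry,
      new THREE.MeshPhongMaterial({
          color: 0x0000ff,
          transparent: true,
          opacity: 0.5
      }));

  scene.add(table);
  scene.add(ball);
  renderer.sortObjects = true;  // the default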

Depth Resolution

There are a limited number of bits in the depth buffer; the actual number depends on the graphics card. Quoting from the OpenGL Reference Manual page for gluPerspective:

Depth buffer precision is affected by the values specified for zNear and zFar. The greater the ratio of zFar to zNear is, the less effective the depth buffer will be at distinguishing between surfaces that are near each other. If \[ r=\mbox{zFar}/\mbox{zNear} \]

roughly $\log_2 r$ bits of depth buffer precision are lost. Because $r$ approaches infinity as zNear approaches 0, zNear must never be set to 0.

So, even though it seems realistic to set near to zero (or nearly so) and far to infinity, the practical result is that the depth buffer algorithm won't be easily able to tell which of two surfaces is closer if they are similar in distance. We already know not to set near to zero, but this says we also shouldn't set far to $2^{32}$, because then we'd lose 32 bits of precision, which is probably all we have.
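To make this concrete, here is a quick worked example (the 24-bit depth buffer is an assumption, though it's a common size). With zNear $=0.1$ and zFar $=1000$:

\[ r = \mbox{zFar}/\mbox{zNear} = 10^4, \qquad \log_2 10^4 \approx 13.3 \]

so roughly 13 of the 24 bits of precision are lost, leaving about 11 bits, or only about $2^{11} \approx 2000$ distinguishable depths across the entire scene.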

In practice, this means you will get the speckling effect even if the two surfaces are at different depths, if the difference is indistinguishable given the precision of the depth buffer.