
Reading: Texture Mapping

Texture mapping was one of the major innovations in CG in the 1990s. It allows us to add a lot of surface detail without adding a lot of geometric primitives (lines, vertices, faces). See how interesting Caroline's "loadedDemo" is with all the texture-mapping.

demo with textures
demo without textures

In this reading, we'll begin with mapping a single image onto a plane, and then map multiple images onto the sides of a cube. We'll then take a short detour to learn how to deal with an issue known as Cross-Origin Resource Sharing (CORS) that arises when accessing images hosted on a different domain. Returning to texture mapping, we'll introduce the important concept of texture coordinates before moving on to the creation of repetitive texture patterns, mapping textures onto curved surfaces, and combining texture with other properties of surface material like color and reflectance.

Simple Image Mappings

Texture mapping paints a picture onto a polygon. Although the name is texture-mapping, the general approach simply takes an array of pixels (referred to as texels when used as a texture) and paints them onto the surface. An array of pixels is just a picture, which might be a texture like cloth or brick or grass, or it could be a picture of Homer Simpson. It might be something your program computes and uses, but more likely, it will be something that you load from an image file, such as a JPEG.

Consider this example of a floral scene mapped onto a plane:

floral scene on a plane

Here is the full code, with comments removed:

function displayPlane (texture) {
    var planeGeom = new THREE.PlaneGeometry(10,10);
    var planeMat = new THREE.MeshBasicMaterial({color: 0xffffff,
                                                map: texture});
    var planeMesh = new THREE.Mesh(planeGeom, planeMat);
    scene.add(planeMesh);
    TW.render();   
}

var scene = new THREE.Scene();
var renderer = new THREE.WebGLRenderer();
TW.mainInit(renderer,scene);
var state = TW.cameraSetup(renderer,
                           scene,
                           {minx: -5, maxx: 5,
                            miny: -5, maxy: 5,
                            minz: 0, maxz: 1});

var loader = new THREE.TextureLoader();
loader.load("flower0.jpg",
            function (texture) {
               displayPlane(texture);
            } );

A THREE.TextureLoader() object is used to load an image texture. The load() method loads the image with the specified URL (the flower0.jpg image is stored in the same folder as the code file), and when the image load is complete, this method invokes the callback function provided, with the texture as input. The specified function is an anonymous function in this case, and simply invokes the displayPlane() function with the loaded texture. The load() method uses an event handler that waits for an onLoad event indicating the completion of the image load, before invoking the given function. Why is this necessary? It takes time to load images, and if the code to render the scene were executed immediately after initiating the image load, the scene could be rendered before the image is done loading. As a consequence, the image texture would not appear on object surfaces in the graphics display.
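
The load() method also accepts optional progress and error callbacks. Here is a hedged sketch of passing an error callback that reports a failed image load (the message text is just illustrative):

var loader = new THREE.TextureLoader();
loader.load("flower0.jpg",
            function (texture) {         // onLoad
                displayPlane(texture);
            },
            undefined,                   // onProgress (not used here)
            function (error) {           // onError
                console.error("could not load flower0.jpg", error);
            });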

The displayPlane() function creates a plane geometry and a material whose map property is assigned to the input texture. After creating a mesh and adding it to the scene, the TW.render() function is invoked to render the scene.

Suppose you want to use multiple image textures in your scene. The TW package provides a handy shortcut, TW.loadTextures(), which loads multiple images, creates a texture for each one, and stores the textures in an array. You also supply a callback function that is invoked after all the images have loaded; this callback is assumed to take the array of textures as its input.

This demo maps three floral images onto the six sides of a cube:

flower images on a cube

Use your mouse to move the camera around to view all the sides of the cube. The following code shows the TW.loadTextures() function in action:

TW.loadTextures(["flower0.jpg", "flower1.jpg", "flower2.jpg"],
            function (textures) {
                displayBox(textures);
            } );

The first input is an array of file names for the multiple images. Here is the code for the displayBox() function:

function displayBox (textures) {
    // box geometry with three floral images texture-mapped onto sides
    var boxGeom = new THREE.BoxGeometry(10,10,10);
    // palette of possible textures to use for sides
    var materials = [new THREE.MeshBasicMaterial({color: 0xffffff,
                                                  map: textures[0]}),
                     new THREE.MeshBasicMaterial({color: 0xffffff,
                                                  map: textures[1]}),
                     new THREE.MeshBasicMaterial({color: 0xffffff,
                                                  map: textures[2]})
                    ];
    // array of 6 materials for the 6 sides of the box
    var boxMaterials = [materials[0], materials[0], materials[1],
                        materials[1], materials[2], materials[2]];
    var boxMesh = new THREE.Mesh(boxGeom, boxMaterials);
    scene.add(boxMesh);
    TW.render();    // render the scene
}

In this case, a "palette" (array) of three materials is first created, and then an array of six materials is constructed for the six faces of the cube (it takes some trial-and-error to determine the order in which the six faces are stored in the box geometry, in order to achieve a desired arrangement of the three image textures). Again, the scene is not rendered until the images are all loaded and the displayBox() function is invoked.

Loading Images and CORS

Here's a demo that is virtually identical to the example of the floral image mapped onto a plane, using the image of a cute Ragdoll kitten instead:

kitten on a plane

Sadly, you're not able to see the cute kitten in the demo, so I'll show you the image here, downloaded from the Wikipedia kitten page:

If you load the kitten demo with the JavaScript Console open, you'll see an error that includes the phrase, ... has been blocked by CORS policy .... This is the Same-Origin Policy, a security policy in web browsers. As stated on this Wikipedia page:

Under the policy, a web browser permits scripts contained in a first web page to access data in a second web page, but only if both web pages have the same origin. An origin is defined as a combination of URI scheme, host name, and port number. This policy prevents a malicious script on one page from obtaining access to sensitive data on another web page through that page's Document Object Model.

This policy covers XMLHttpRequests (Ajax requests) as well, which is where JavaScript code issues the request for the resource. So, even though the browser can request the image from the kitten Wikipedia page, our JavaScript code can't.

A potential solution is CORS:

If the site we are loading an image from allows CORS, we should be able to load the image by adding an extra header to the request, using the following:

THREE.ImageUtils.crossOrigin = "anonymous";  // or
THREE.ImageUtils.crossOrigin = "";           // the default

However, Scott has not yet been able to get this to work, so for now, just keep the following in mind:

Your images need to be on the same computer as your JavaScript program.

If you download an image from the web that you use in your code, it would be good to include a comment in the code file with the source.
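
For reference, newer versions of Three.js put the cross-origin setting on the loader itself rather than on THREE.ImageUtils. Here is a hedged sketch; it still only works if the remote server sends the appropriate CORS headers, and the URL is just a placeholder:

var loader = new THREE.TextureLoader();
loader.crossOrigin = "anonymous";   // request the image with CORS enabled
loader.load("https://example.com/kitten.jpg",
            function (texture) {
                displayPlane(texture);
            });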

Side notes: OpenGL and Three.js also expect that images will have dimensions that are a power of two (the two dimensions need not be the same). If they're not, the image will be resized to power-of-two dimensions before texture mapping, as illustrated in this example of a Buffy image mapped onto a plane. Note the warning printed in the JavaScript Console and the distortion of Buffy's image on the plane:

Buffy on a plane

In general, if the aspect ratios of your image and object surface are different (e.g. you map a square image onto an oblong plane), the image will be distorted along one of the dimensions. We'll return to this idea in the context of texture coordinates later in this reading.

Working Locally

The Three.js people are aware of the issue with this Same-Origin Policy, and they also know how nice it is to work locally, as you do on your own laptop. In their online documentation, they wrote this nice explanation of how to run things locally.

We will use the Run a local server option, using Python. The essentials are:

  • Start a terminal window
    • Mac: Open Launchpad (or press command-space) and search for terminal
    • PC: Click Start and search for Command Prompt (or press Windows-r, type cmd, and click OK)
  • cd to the directory that has your downloaded HTML file in it, such as cd ~/Desktop.
  • Start a web server on port 8000 (by default) using Python (if you have Python 3, see the note after these steps):
          python -m SimpleHTTPServer
        
  • Go back to your web browser and try the following URL, substituting your HTML filename for the foo.html
          http://localhost:8000/foo.html
        

We will also experiment with this process in class.
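
One note on the server command above: python -m SimpleHTTPServer is the Python 2 spelling. If your machine has Python 3, the module has been renamed, and the equivalent command is:

python3 -m http.server

It also serves on port 8000 by default, so the same localhost URL works.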

Texture Coordinates

Suppose you want to repeat a texture pattern across a surface, use only part of the image texture, or display the texture upside-down or mirror-reversed. To do these things, we need to introduce texture coordinates, sometimes called texture parameters.

A texture is an array of pixels, such as an image, and the coordinates of locations within a texture array are represented conceptually using a coordinate system that ranges from 0 to 1 in the horizontal and vertical directions. Typically, s and t are used to denote the horizontal and vertical coordinates, although (u,v) is also used (as in Three.js).

The diagram below shows two sample images that might be used for texture mapping, showing the texture coordinates:

Regardless of the aspect ratio of the image dimensions, the texture coordinates range from (0,0) in the upper left corner to (1,1) in the lower right corner.

Three.js has a default strategy for mapping from the above texture coordinates to the triangular faces of a built-in geometry, but often we want to control this mapping more carefully. We do this by specifying a pair of texture coordinates (s,t) for each vertex of the geometry. For geometries that we create from scratch, this step is essential.

In the next demo, a geometry is created that has a single triangular face, with a triangular portion of the above berries image texture-mapped onto the triangle:

berries on a triangle

Note that the portion of the image that is mapped into the triangle is taken from the bottom-left portion of the berries image. How was this specified?

Let's begin by recalling how vertices and faces are stored in a THREE.Geometry object. This diagram illustrates the triangle geometry created by the displayTriangle() function shown below:

function displayTriangle (texture) {
    // create a geometry with one triangular face that has
    // the berries image mapped onto this face
    var triGeom = new THREE.Geometry();
    triGeom.vertices.push(new THREE.Vector3(0,0,0));
    triGeom.vertices.push(new THREE.Vector3(4,0,0));
    triGeom.vertices.push(new THREE.Vector3(2,3,0));
    triGeom.faces.push(new THREE.Face3(0,1,2));

    // add a 3-element array of THREE.Vector2 objects
    // representing texture coordinates for the three
    // vertices of the face
    var uvs = [];
    uvs.push([new THREE.Vector2(0,1),
              new THREE.Vector2(0.5,1),
              new THREE.Vector2(0.25,0.25)]);
    // assign the faceVertexUvs property to an array 
    // containing the uvs array inside
    triGeom.faceVertexUvs = [uvs];
    // by default, Three.js flips images upside-down, so
    // you may want to set the flipY property to false
    texture.flipY = false;

    var triMat = new THREE.MeshBasicMaterial({color: 0xffffff,
                                              map: texture});
    var triMesh = new THREE.Mesh(triGeom, triMat);
    scene.add(triMesh);
    TW.render();    // render the scene
}

The new element of the triangle geometry is the assignment of the faceVertexUvs property (defined for the THREE.Geometry class). This property is an array of UV layers; each layer is an array with one entry per face, and each entry is a 3-element array of THREE.Vector2 objects specifying the texture coordinates (each in the range 0 to 1) for the three vertices of that face. The pictures below show the berries image with the triangular region selected for texture mapping outlined in yellow, the triangle geometry with the corresponding texture coordinates shown in blue for each vertex, and a depiction of the faceVertexUvs property.

 

Note that Three.js flips images upside-down when loading, but setting the flipY property for the texture to false reverses this change.
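
To recap the nesting (UV layer, then face, then vertex), here is a small sketch of reading back the structure built inside displayTriangle() above, purely for illustration:

var layer0 = triGeom.faceVertexUvs[0];   // the single UV layer
var face0UVs = layer0[0];                // texture coordinates for face 0
// face0UVs[0] is (0,1), face0UVs[1] is (0.5,1), and face0UVs[2] is (0.25,0.25),
// matching vertices 0, 1, and 2 of the face, in order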

The next example maps four different images onto the four triangular faces of a tetrahedron geometry created from scratch. The addTextureCoords() function adds texture coordinates for each of the four faces, using a helper function faceCoords() that adds these coordinates for a single face to an array named UVs being constructed for the faceVertexUvs property described above. Be sure to examine the source code for this demo:

tetrahedron with four images mapped onto its faces
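
As a rough idea of how such helpers might be structured, here is a hedged sketch; the function names match the demo, but the bodies and coordinate values are assumptions, not the demo's actual code:

// hypothetical sketch of the demo's helper functions
function faceCoords (UVs, a, b, c) {
    // a, b, c are THREE.Vector2 texture coordinates for the three
    // vertices of one face, pushed in the same order as the faces
    UVs.push([a, b, c]);
}

function addTextureCoords (tetraGeom) {
    var UVs = [];
    // map the full image onto each of the four triangular faces
    for (var i = 0; i < tetraGeom.faces.length; i++) {
        faceCoords(UVs,
                   new THREE.Vector2(0, 0),
                   new THREE.Vector2(1, 0),
                   new THREE.Vector2(0.5, 1));
    }
    tetraGeom.faceVertexUvs = [UVs];
}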

Repeating Textures

Suppose we want to create the appearance of a natural repetitive texture, such as that shown on the sides of the barn below. We can achieve this effect by tiling the surface with multiple copies of a small snippet of texture, like that shown to the right of the barn.

barn     texture

In Three.js, there are at least two ways to repeat a texture on an object surface:

  • set the repeat property of a THREE.Texture object to indicate the number of times to repeat the texture in the horizontal and vertical directions
  • set the texture coordinates s and t associated with vertices in the Geometry to have values larger than 1

In both cases, we also need to specify how the texture is wrapped horizontally and vertically.

In the following demo, the floral image is again mapped onto a plane, but repeated four times in the horizontal direction and twice in the vertical direction:

plane with repeated flower images

The following four code statements were added to the displayPlane() function defined earlier:

texture.repeat.set(4,2);
texture.wrapS = THREE.RepeatWrapping;
texture.wrapT = THREE.RepeatWrapping;
texture.needsUpdate = true;

texture is the THREE.Texture object created from the floral image. The wrapS and wrapT properties control how the texture is wrapped in the horizontal (s) and vertical (t) directions. Setting the needsUpdate property to true tells Three.js to re-send the texture settings to the GPU, which is needed when properties like these are changed after the texture has already been used for rendering, so we set it here as well.

This picture from wrap-mode.html illustrates the three wrapping methods that can be used (the default is THREE.ClampToEdgeWrapping, which is rarely used):

wrapping modes

The following demo illustrates how to repeat an image texture by directly manipulating the texture coordinates:

texture repetition by manipulating texture coordinates

In the code, you'll see that the coordinates specified for s are 0 or 4 (indicating four repeats in the horizontal direction) and the coordinates specified for t are 0 or 2 (two repeats in the vertical direction).
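
As a minimal sketch of the idea, for a geometry with a single triangular face like the earlier triangle example (the coordinate values are the essential part; the texture and geometry are assumed to be set up as before):

// texture coordinates larger than 1 repeat the texture,
// provided the wrap modes are set to THREE.RepeatWrapping
texture.wrapS = THREE.RepeatWrapping;
texture.wrapT = THREE.RepeatWrapping;
var uvs = [new THREE.Vector2(0, 0),
           new THREE.Vector2(4, 0),    // s runs from 0 to 4: four horizontal repeats
           new THREE.Vector2(0, 2)];   // t runs from 0 to 2: two vertical repeats
triGeom.faceVertexUvs = [[uvs]];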

Texture Tutor

Here is a basic tutor for texture mapping as we know it so far, based on a tutor by Nate Robins — experiment with the GUI controls to understand the basic concepts:

Mapping Textures onto Curved Surfaces

In Three.js, textures can be mapped onto curved objects (e.g. spheres, cones, or cylinders) in the same way they're mapped onto flat surfaces, by setting the map property for the material to a THREE.Texture object. See my world globe and (somewhat marginal) pine tree:

world map on a sphere

textured pine tree
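
A minimal sketch of how the globe might be set up, assuming a texture named earthTexture has already been loaded (the radius and segment counts here are arbitrary, not the demo's actual values):

var sphereGeom = new THREE.SphereGeometry(5, 32, 32);
var sphereMat = new THREE.MeshBasicMaterial({color: 0xffffff,
                                             map: earthTexture});
scene.add(new THREE.Mesh(sphereGeom, sphereMat));
TW.render();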

Lighting and Textures

So far, we've just mapped textures onto plain white surfaces. In fact, the texture is multiplied by the color of the surface. Consider the following demo:

Buffy on a Colored Plane

Now, if the color of the face isn't a direct color (THREE.MeshBasicMaterial) but is instead computed from the material (THREE.MeshPhongMaterial) and the lighting of the scene, you can see how this lighting information can be combined with a texture.

One question, though, is what color the material should be. If the material has any hue, it might interact in odd-looking ways with the colors of the texture, so it makes sense for the material to be gray. It should probably be a fairly light shade of gray, maybe even white: lighting works by multiplying the material color by a value less than one, so the result is typically darker than the original. However, it also depends on how many lights are in the scene, since the contributions of all the lights are added up, so colors can also get brighter, even over-driven. So, there's still some artistic judgment involved.

Consider the following demo:

Buffy on a Spotlit Plane

The trick here is to create a Phong material and then to set the .map property:

var mat = new THREE.MeshPhongMaterial();  // default is white
mat.map = texture;
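
If you want a light gray material rather than pure white, the color can be given when the material is created; the particular gray value here is arbitrary:

var mat = new THREE.MeshPhongMaterial({color: 0xcccccc,   // light gray
                                       map: texture});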

Nearest and Linear Filters

When mapping texture onto a triangular face during the rendering process, Three.js:

  1. first determines which texture coordinates to use for each pixel touched by the triangular face
  2. then determines which texture color to use for each pixel, based on the texels (texture elements) around the computed texture coordinates

Pixels in the triangular face could be larger or smaller than the corresponding texels. In the picture below, the texture pattern on the left is very coarse and each texel is larger than the pixels in the image being rendered (shown in the center). For the texture pattern on the right, the texels are smaller than the pixels being rendered. (A couple pixel-sized elements are shown superimposed in green on the two texture patterns.)

texture filters

The magFilter property of a THREE.Texture object controls how the texture color is determined for the scenario on the left, where each texel covers many pixels (texture magnification), and the minFilter property specifies what to do for the scenario on the right, where each pixel covers many texels (texture minification). Two common options for both are THREE.NearestFilter (select the color of the nearest texel) and THREE.LinearFilter (linearly interpolate between the colors of the four nearest texels). The two options can appear quite different, as shown in the following demo:

choice of nearest texture color or linear interpolation between neighboring colors
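
A quick sketch of setting these properties on a texture (the particular combination of filters is just an example):

texture.magFilter = THREE.NearestFilter;  // blocky texels when the texture is magnified
texture.minFilter = THREE.LinearFilter;   // blend neighboring texels when minified
texture.needsUpdate = true;               // re-upload if the texture was already in use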

Summary

Here are the key ideas on texture mapping:

  • At its most basic level, a texture is an array of pixels, similar to an image
  • Each vertex of a triangular face can have texture coordinates - these coordinates are 2D (e.g. (s,t) or (u,v)) and lie within the [0,1] interval
  • Texture coordinates outside the [0,1] interval can specify a pattern of repetition that can be
    • clamped to the edge
    • repeated
    • mirrored
  • The repetition method is specified by the wrapS and wrapT properties of the THREE.Texture object
  • Textures can also be repeated by setting the repeat property for the THREE.Texture object
  • When loading an image to use for texture mapping, we need to consider that it takes a non-negligible amount of time for the image to load, so we write event handlers for the onLoad event that render the scene only after all images are loaded
  • When accessing images hosted on a different domain, our JavaScript code bumps up against the browser's Same-Origin Policy; when working locally, we can start up a web server using Python (SimpleHTTPServer in Python 2, http.server in Python 3)
  • Textures interact with lighting and other material properties (e.g. Phong material) of the surfaces
  • When accessing the texture array using texture coordinates, we can either take the value of the nearest texel, or we can linearly interpolate over the four neighbors
  • Finally, we can map textures onto flat or curved surfaces

In class, we'll see how we can create synthetic texture patterns and briefly touch on some advanced methods for texture mapping that use "bump maps", "normal maps", and "environment maps".