CS 332

Assignment 3

Due: Thursday, September 24

This assignment contains two problems on the analysis of visual motion. In the first problem, you will implement a strategy to track the moving cars in a video of an aerial view of a traffic scene and visualize the results. In the second problem, you will explore the 2D image velocity field that is generated by the motion of an observer relative to a stationary scene, and the computation of the observer's heading from this velocity field. You do not need to write any code for the second problem. The code files for both problems are stored in a folder named motion that you can download through this motion zip link. (This link is also posted on the course schedule page, and the motion folder is also stored in the /home/cs332/download/ directory on the CS file server.)

Submission details: Submit an electronic copy of your code files for Problem 1 by dragging the tracking subfolder from the motion folder into the cs332/drop subdirectory in your account on the CS file server. The code for this problem can be completed collaboratively with your partner(s), but each of you should drop a copy of this folder into your own account on the file server. Be sure to document any code that you write. For your answers to the questions in Problem 2, create a shared Google doc with your partner(s). You can then submit this work by sharing the Google doc with me — please give me Edit privileges so that I can provide feedback directly in the Google doc.

Problem 1 (60 points): Tracking Moving Objects

The tracking subfolder inside the motion folder contains a video file named sequence.mpg that was obtained from a static camera mounted on a building high above an intersection. The first image frame of the video is shown below:

While you are working on this problem, set the Current Folder in MATLAB to the tracking subfolder. The code file named getVideoImages.m contains a script that reads the video file into MATLAB, shows the movie in a figure window, extracts three images from the file (frames 1, 5, and 9 of the video), displays the first image using imtool, and shows a simple movie of the three extracted images, cycling back and forth through them five times. We will go over getVideoImages.m in class; it uses the concept of a structure in MATLAB and the built-in functions VideoReader, struct, read, and movie.
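As a rough illustration of the frame-extraction steps described above — this is a sketch, not the actual contents of getVideoImages.m — the core operations might look like:

```matlab
% Illustrative sketch only; see getVideoImages.m for the real code.
v = VideoReader('sequence.mpg');   % open the traffic video

im1 = read(v, 1);   % extract frame 1
im2 = read(v, 5);   % extract frame 5
im3 = read(v, 9);   % extract frame 9

imtool(im1);        % display the first frame for interactive inspection
```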

Most of the visual scene is stationary, but there are a few moving cars and pedestrians, and a changing clock in the bottom right corner. Your task is to detect the moving cars and determine their movement over the three image frames stored in the variables im1, im2, and im3. (The clock has been removed from these three images.) To solve this problem, you will use a strategy that takes advantage of the fact that most of the scene is stationary, so changes in the images over time occur mainly in the vicinity of moving objects. The image regions likely to contain the moving cars are therefore the fairly large regions that change over time.
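A hedged sketch of this change-detection idea (the threshold value here is an illustrative guess, not a value given in the assignment):

```matlab
% Sketch: find pixels that change between two frames.
% Convert to double before subtracting so negative differences are preserved.
diff12 = abs(double(im1) - double(im2));
changed12 = diff12 > 30;   % illustrative threshold -- tune it on the real images
```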

Create a new script named trackCars.m in the tracking subfolder to hold your code for analyzing and displaying the movement of the cars across the three images provided. (You are welcome to define separate functions for subtasks, but this is not necessary.) Implement a solution strategy incorporating the following steps:

Hints: The file codeTips.m in the tracking folder provides simple examples of some helpful coding strategies, including examples that use the built-in bwlabel and regionprops functions, access information stored in a vector of structures, and superimpose graphics on an image displayed in a figure window (using the built-in plot and scatter functions).
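In the same spirit as the codeTips.m examples, region labeling and centroid marking might look roughly like this (the change mask, area threshold, and plotting details are all illustrative assumptions, not the assignment's prescribed solution):

```matlab
% Sketch: label large changing regions and mark their centroids on the image.
changed = abs(double(im1) - double(im2)) > 30;   % illustrative change mask
labels  = bwlabel(changed);                      % label connected regions
stats   = regionprops(labels, 'Area', 'Centroid');

figure; imshow(im1); hold on;
for k = 1:numel(stats)
    if stats(k).Area > 100                       % keep only fairly large regions
        c = stats(k).Centroid;
        plot(c(1), c(2), 'r+', 'MarkerSize', 12, 'LineWidth', 2);
    end
end
hold off;
```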

Be sure to comment your code so that your solution strategy is clear! (You are welcome to use a different strategy than the one outlined above.)

Problem 2 (40 points): Observer Motion

In this problem, you will explore the 2D image velocity field that is generated by the motion of an observer relative to a stationary scene, and the computation of the observer's heading from this velocity field. You do not need to write any code for this problem. You will be working with a MATLAB program that has a graphical user interface as shown below:


To begin, set the Current Folder in MATLAB to the observerMotion subfolder in the motion folder. To run the GUI program, just enter observerMotionGUI in the MATLAB command window. When you are done, click on the close button on the GUI display to terminate the program.

The GUI has six sliders that you can use to adjust the parameters of the observer's movement: translation in the x, y, z directions (denoted Tx, Ty, Tz) and rotation about the three coordinate axes (denoted Rx, Ry, Rz). Each parameter has its own range of possible values, controlled by the sliders and displayed along the right column of the GUI: Tx and Ty range from -6 to +6, Tz ranges from 0 to 4, Rx and Ry range from -0.25 to +0.25, and Rz ranges from -0.5 to +0.5. The 3D scene consists of a square surface in the center of the field of view, whose depth Z-in can range from 1 to 2, surrounded by a surface whose depth Z-out can range from 2 to 4.
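For reference, under the standard pinhole model used in analyses of observer motion (unit focal length; exact sign conventions may differ from those used in the GUI), the image velocity of a scene point at image position (x, y) and depth Z is:

```latex
v_x = \frac{-T_x + x\,T_z}{Z} + R_x\,xy - R_y\,(x^2 + 1) + R_z\,y
\qquad
v_y = \frac{-T_y + y\,T_z}{Z} + R_x\,(y^2 + 1) - R_y\,xy - R_z\,x
```

Note that the translational (depth-dependent) terms vanish at (x, y) = (Tx/Tz, Ty/Tz), the focus of expansion, while the rotational terms depend on image position but not on depth.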

After setting the motion and depth parameters to a set of desired values using the sliders, you can click on the "display velocities" button to view the resulting velocity field, over a limited field of view defined by coordinates ranging from -5 to +5 in the horizontal and vertical directions. The coordinates (0,0) represent the center of the visual field corresponding to the observer's direction of gaze. If the true focus of expansion (FOE), indicating the observer's heading point, is located within this field of view, a red dot will be displayed at this location. Otherwise, the message "true FOE out of bounds" will appear in the pink message box near the lower right corner of the GUI.

After displaying the velocity field, you can click on the "compute FOE" button. The program runs an algorithm for computing the observer's heading that is based on the detection of significant changes in image velocity, as suggested by Longuet-Higgins and Prazdny. This algorithm first computes the differences in velocity between nearby locations in the image, and then combines the directions of large velocity differences to compute the FOE by finding the location that best captures the intersection between these directions. Large velocity differences will be found along the border of the inner square surface when there is a significant change in depth across the border. The velocity difference vectors will be displayed in green, superimposed on the image velocity field, with a green circle at the location of the computed FOE, as shown in the above figure. If the computed FOE is located outside the limited field of view that is shown in the display, the message "computed FOE out of bounds" will appear in the pink message box. If no large velocity differences are found that can be used to compute the FOE, the message "no large velocity differences" will appear in the message box.
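A hedged sketch of why velocity differences recover the FOE, following the Longuet-Higgins and Prazdny argument (unit focal length, up to sign conventions): the rotational part of the image velocity depends only on image position, not depth, so for two nearby points at essentially the same image location (x, y) but different depths Z1 and Z2 across the border, the rotational terms cancel and

```latex
\Delta\vec{v} \;\approx\; \left(\frac{1}{Z_1} - \frac{1}{Z_2}\right)
\bigl(-T_x + x\,T_z,\; -T_y + y\,T_z\bigr)
```

This difference vector is parallel to the line joining (x, y) to the FOE at (Tx/Tz, Ty/Tz), so intersecting the lines along several such difference vectors yields the observer's heading.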

In the exercises below, when I indicate that parameters should be set to particular values, use the associated sliders to reach values that are close to the desired values — they do not need to be exact. Note that the overall size of the graphing window changes slightly when showing the velocity field on its own ("display velocities" button) vs. showing the velocity field with the computed FOE and velocity difference vectors superimposed ("compute FOE" button) — when relevant, pay close attention to the actual x,y coordinates on the axes.

(1) Observe the velocity field obtained for each motion parameter on its own. First be sure that Z-in is set to 1 and Z-out is set to 2 (their initial values). Then set five of the six motion parameters to 0 (or close to 0) and view the velocity field obtained when the sixth parameter is set to each of its two extreme values (for Tz, use 4 and a small non-zero value). For each parameter, what is the overall pattern of image motion, and are there significant changes in velocity around the border of the central square? Which parameter(s) yield(s) a computable FOE in the field of view? Why is it not possible to compute an FOE for some motion parameters?

(2) In class, we noted that depth changes in the scene are needed to recover heading correctly. Set the five parameters Tx, Ty, Rx, Ry, and Rz to 0, and set Tz to 4. Then adjust the relative depth of the two surfaces. First set both Z-in and Z-out to 2. Can an FOE be computed in this case? Slowly change one of the two depths (either slowly decrease Z-in or slowly increase Z-out). How much change in depth is needed to compute the FOE in this case?

(3) This question further probes the need for depth changes. Again set both Z-in and Z-out to 2. Then examine the following two scenarios:

    (a) Ty = Rx = Ry = Rz = 0 and Tx = 6, Tz = 1.5

    (b) Same parameters as in (a) except that Ry = -0.25

What are the coordinates of the true FOE in each case? You'll observe that in both cases, the FOE cannot be computed because there are "no large velocity differences." In case (a), the observer is only translating, and the overall velocity field expands outward from the true FOE. This expanding pattern on its own could be used to infer the observer's heading, even in the absence of velocity differences due to depth changes in the scene. In case (b), if you searched for a center of expansion of the velocity field, where might you find such a point, and would it correspond to the observer's true heading point (the true FOE)? How do the results in both cases change when you introduce a depth change, i.e., set Z-in to 1 and Z-out to 2?