CS 332

Assignment 3
(optional)

Due: Thursday, September 24

This is an optional problem for Assignment 3 on the measurement of motion in 2D images. You will complete a function that computes the perpendicular components of motion and analyze the results of computing a 2D image velocity field from these components. You can complete this problem with your group, with any partner in the class that you select, or individually. This problem is worth 10 points of extra credit. The code and image files for this problem can be downloaded from this motion opt zip link. (The code can also be found in the /home/cs332/download/motionOpt folder on the CS file server.) After downloading this folder, set the Current Folder in MATLAB to the motionOpt folder. To submit your solutions, drag the motionOpt folder to the cs332/drop folder in your individual account on the CS file server.

Problem 3 (optional): Computing a Velocity Field

In class, we described an algorithm to compute 2D velocity from the perpendicular components of motion, assuming that velocity is constant over extended regions in the image. Let (Vx,Vy) denote the 2D velocity, let (uxi,uyi) denote the unit vector in the direction of the gradient (i.e. perpendicular to an edge) at the ith image location, and let vi denote the perpendicular component of velocity at this location. In principle, from measurements of uxi, uyi and vi at two locations, we can compute Vx and Vy by solving the following two linear equations:

   Vx ux1 + Vy uy1 = v1
   Vx ux2 + Vy uy2 = v2
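
For example, given measurements at two locations whose gradient directions differ, this pair of equations can be solved directly. The following is a minimal MATLAB sketch with made-up numbers; the variable names are illustrative and not part of the provided code:

   % Hypothetical measurements at two image locations
   u1 = [0.8 0.6];    % unit gradient direction (ux1, uy1) at location 1
   u2 = [0.0 1.0];    % unit gradient direction (ux2, uy2) at location 2
   v  = [1.4; 0.5];   % perpendicular speeds v1 and v2
   A  = [u1; u2];     % coefficient matrix of the two linear equations
   V  = A \ v;        % solve for the full velocity; V(1) is Vx, V(2) is Vy

Note that if the two gradient directions are parallel, the matrix A is singular and the velocity cannot be recovered from these two measurements alone.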

In practice, a better estimate of (Vx,Vy) can be obtained by integrating information from many locations and finding values for Vx and Vy that best fit a large number of measurements of the perpendicular components of motion. The function computeVelocity, which is already defined for you, implements this strategy. Details of this solution are described in an Appendix to this problem.

The function getMotionComps, which you will complete, computes the initial perpendicular components of motion. This function has three inputs: the first two are matrices containing the results of convolving two images with a Laplacian-of-Gaussian function, where it is assumed that there is only a small movement between the two original images. The third input to getMotionComps is a limit on the expected magnitude of the perpendicular components of motion (assume that a value larger than this limit is erroneous and should not be recorded). This function has three outputs, which are matrices containing the values of ux, uy and v. These quantities are computed only at the locations of zero-crossings of the second input convolution; at locations that do not correspond to zero-crossings, the value 0 is stored in the output matrices. The function definition contains ??? in several places where you should insert a simple MATLAB expression to complete the code statements. See the comments for instructions on completing each statement.
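
As a rough reminder of the quantities involved, the sketch below shows the underlying relationships under assumed variable names and sign conventions; it is not the provided code skeleton, which specifies its own structure, zero-crossing test, and boundary handling. Here conv1mat and conv2mat stand for the two input convolutions, with a time step of 1 between the images:

   [gx, gy] = gradient(conv2mat);       % spatial gradient of the 2nd convolution
   gmag = sqrt(gx.^2 + gy.^2);          % gradient magnitude
   ux = gx ./ max(gmag, eps);           % unit vector in the gradient direction
   uy = gy ./ max(gmag, eps);           %   (perpendicular to the edge)
   dcdt = conv2mat - conv1mat;          % temporal change between the convolutions
   v = -dcdt ./ max(gmag, eps);         % perpendicular component of velocity

In a complete version, the values of ux, uy and v would be kept only at the zero-crossings of conv2mat, and any v whose magnitude exceeds the input limit would be set to 0.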

The motionTest.m script file contains two examples for testing your getMotionComps function. The first example uses images of a circle translating down and to the right; the expected results displayed for this example will be shown in class. The second example, which is initially commented out, uses a collage of four images of past Red Sox players, where each subimage has a different motion, as shown by the red arrows on the image below:

   [Image: collage of four Red Sox subimages, with red arrows indicating the direction of motion in each quadrant]

Big Papi (upper left) is shifting down and to the right, Manny (upper right) is shifting right, Varitek and Lowell (lower right) are shifting left, and Coco Crisp (lower left) is leaping up and to the left after a fly ball. For both examples, the velocities computed by the computeVelocity function are displayed by the displayV function in the motionOpt folder, which uses the built-in quiver function to display arrows. Your results for the Red Sox image will roughly reflect the correct velocities within the four different regions of the image, but there will be significant errors in some places. Add comments to the motionTest.m script that answer the following questions: (1) Where do most of the errors in the results occur? (2) Why might you expect errors in these regions? (3) The results will also vary with the size of the neighborhood used to integrate measurements of the perpendicular components of motion; what are the possible advantages and disadvantages of using a larger or smaller neighborhood size for the computation of image velocity?
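
For reference, the overall sequence in motionTest.m follows this general pattern; the exact arguments of computeVelocity and displayV, and the variable names, are assumptions for illustration only, so consult the actual script:

   % Illustrative outline only -- see motionTest.m for the actual code
   limit = 5;                                   % assumed bound on |v|
   [ux, uy, v] = getMotionComps(conv1mat, conv2mat, limit);
   nsize = 15;                                  % assumed neighborhood size
   [Vx, Vy] = computeVelocity(ux, uy, v, nsize);
   displayV(Vx, Vy);                            % draws the arrows with quiver

Varying the neighborhood size used by computeVelocity is the most direct way to explore question (3).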

Appendix: Computing the Velocity Field in Practice

It was noted earlier that in principle, we can compute Vx and Vy by solving two equations of the form shown below, but in practice, a better estimate can be obtained by integrating information from many locations and finding values for Vx and Vy that best fit a large number of measurements of the perpendicular components of motion. Because of error in the image measurements, it is not possible to find values for Vx and Vy that exactly satisfy a large number of equations of the form:

   Vx uxi + Vy uyi = vi

Instead, we compute Vx and Vy that minimize the difference between the left- and right-hand sides of the above equation. In particular, we compute a velocity (Vx,Vy) that minimizes the following expression:

   ∑ [Vx uxi + Vy uyi - vi]^2

where ∑ denotes summation over all locations i. To minimize this expression, we compute the derivative of the above sum with respect to each of the two parameters Vx and Vy, and set these derivatives to zero. This analysis yields two linear equations in the two unknowns Vx and Vy:

   a1 Vx + b1 Vy = c1          a2 Vx + b2 Vy = c2

where

   a1 = ∑ uxi^2    b1 = a2 = ∑ uxi uyi    b2 = ∑ uyi^2    c1 = ∑ vi uxi    c2 = ∑ vi uyi
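
For example, setting to zero the derivative of the sum with respect to Vx,

   ∂/∂Vx ∑ [Vx uxi + Vy uyi - vi]^2 = 2 ∑ uxi [Vx uxi + Vy uyi - vi] = 0

and rearranging gives (∑ uxi^2) Vx + (∑ uxi uyi) Vy = ∑ vi uxi, which is the first of the two equations above; the second follows in the same way from the derivative with respect to Vy.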

The solution to these equations is given below, and implemented in the computeVelocity function:

   Vx = (c1b2 - b1c2)/(a1b2 - a2b1)          Vy = (a1c2 - a2c1)/(a1b2 - a2b1)
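
As an illustration of these formulas (not the provided computeVelocity code, whose interface and neighborhood handling may differ), the least-squares velocity for a single neighborhood could be computed in MATLAB as:

   % Sketch: ux, uy, v are vectors of measurements collected from one neighborhood
   a1 = sum(ux.^2);      b1 = sum(ux.*uy);
   a2 = b1;              b2 = sum(uy.^2);
   c1 = sum(v.*ux);      c2 = sum(v.*uy);
   denom = a1*b2 - a2*b1;            % near zero when all gradients are parallel
   Vx = (c1*b2 - b1*c2) / denom;
   Vy = (a1*c2 - a2*c1) / denom;

When all of the gradient directions in a neighborhood are nearly parallel, the denominator approaches zero and the estimate becomes unreliable (the aperture problem), which is one reason the choice of neighborhood size matters.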