CS 332

Assignment 4

Due: Thursday, October 1

This assignment has two problems related to visual recognition. The first explores the performance of the Eigenfaces method, a face recognition technique based on Principal Component Analysis (PCA). The second explores the design and analysis of artificial neural networks that recognize handwritten digits from images. You do not need to write any code for either problem. The code files for both problems are stored in a folder named recognition that can be downloaded through this recognition zip link. (The link is also posted on the course schedule page, and the recognition folder is also stored in the /home/cs332/download directory on the CS file server.)

Submission details: For Problem 1, create a shared Google doc with your partners. You can then submit this work by sharing the Google doc with me; please give me Edit privileges so that I can provide feedback directly in the Google doc. You are given a separate Google doc to complete for Problem 2.

Problem 1: Eigenfaces for Recognition

In this problem, you will explore the behavior of the Eigenfaces approach to face recognition proposed by Turk and Pentland, which is based on Principal Component Analysis (PCA). You do not need to write any code for this problem; instead, you will explore the method with a GUI-based program that a previous CS332 student, Isabel D'Alessandro '18, helped to create. To begin, set the Current Folder in MATLAB to the Eigenfaces subfolder inside the Recognition folder. To run the GUI program, enter facesGUI in the MATLAB Command Window. When you are done, click the close button on the GUI display to terminate the program. The figure below shows a snapshot of the program in action:

[Figure: snapshot of the facesGUI program in action]
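If you prefer to type the commands yourself, the following is one way to launch the program from the MATLAB Command Window. The path in the cd command is only a placeholder; use the location where you unzipped the recognition folder on your own machine.

    % Point MATLAB at the Eigenfaces subfolder (replace the path below with
    % your own location for the unzipped recognition folder), then launch the GUI.
    cd('recognition/Eigenfaces')
    facesGUI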

The program uses an early version of the Yale Face Database that consists of 165 grayscale images (in GIF format) of 15 different people (I'm sorry there is only one woman in the database!). There are 11 images per person, taken under different conditions (central light source or lighting from the left or right; neutral, sad, happy, and surprised expressions; with and without glasses; sleepy and winking). I created two additional images of each person by rotating the neutral-expression images by 15 degrees in the clockwise and counterclockwise directions.

A subset of 7 images for each of the 15 people (omitting the left/right lighting conditions, happy/surprised expressions, and rotated images) is used as the training set to compute the eigenfaces (principal components) that capture the variation across this dataset of face images.
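To make this computation concrete, here is a minimal sketch (not the course code) of how eigenfaces can be obtained with PCA. It assumes the training images have been converted to double and reshaped into column vectors collected in a matrix X, one column per training image; the names X, meanFace, k, and eigenfaces are chosen for this sketch only.

    % Compute the average face and the principal components of the training set.
    meanFace = mean(X, 2);            % average of the training images
    A = X - meanFace;                 % subtract the mean from every column
                                      % (implicit expansion, MATLAB R2016b+)
    [U, ~, ~] = svd(A, 'econ');       % columns of U are the eigenfaces,
                                      % ordered by the variance they capture
    k = 20;                           % number of principal components to keep
    eigenfaces = U(:, 1:k);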

In class, we showed how a face image can be expressed as the sum of an average face and a weighted combination of a subset of the eigenfaces. If you click the Generate Faces button, two randomly selected images from the training set will be displayed in the upper left corner of the GUI. The average face computed from the full training set is shown in the two display areas at the bottom of the GUI window. In the center, you will see the first eigenface, along with the weight associated with this eigenface for each of the two face images shown. The Add Eigenface button will then be enabled, allowing you to incrementally add each eigenface to the average faces at the bottom, using the individual weights obtained for each of the two face images. As you continue to click the Add Eigenface button, you will see the two individual identities emerge.
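In code, the reconstruction that the Add Eigenface button steps through looks roughly like the sketch below; the GUI's actual implementation may differ in its details. It assumes the meanFace and eigenfaces variables from the sketch above, plus a single face image x stored as a column vector.

    % Project the face onto the eigenfaces to get its weights, then rebuild it
    % by adding one weighted eigenface at a time to the average face.
    w = eigenfaces' * (x - meanFace);              % one weight per eigenface
    recon = meanFace;                              % start from the average face
    for i = 1:size(eigenfaces, 2)
        recon = recon + w(i) * eigenfaces(:, i);   % add the next weighted eigenface
    end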

Once the eigenfaces, or principal components, are computed, we can try to recognize the person depicted in a novel image and examine how well the representation generalizes to handle, for example, different lighting, expressions, and orientations of a face. Using the popup menu to the right of the Test Set label, you can select one of three different test sets. View the face images that make up each test set by making a selection and then clicking the View Test button (you will see a narrow window with two columns of face images). If you click the Run Test button, the percentage of the test images that are correctly identified will be printed in the text box below this button. You can also modify the number of eigenfaces used to represent each of the training and test images. Based on the accuracy results obtained for different choices of test set and number of eigenfaces, answer the final questions below.
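For reference, one common way to identify a test image (which may or may not match the GUI's exact rule) is nearest-neighbor matching of weight vectors in eigenface space. The sketch below assumes Wtrain holds the weight vectors of the training images (one column per image), labels(i) gives the identity of training image i, and xTest is a test image stored as a column vector; all of these names are placeholders introduced for this sketch.

    % Project the test image onto the eigenfaces and find the closest training image.
    wTest = eigenfaces' * (xTest - meanFace);   % weights for the test image
    dists = sum((Wtrain - wTest).^2, 1);        % squared distance to each training image
    [~, nearest] = min(dists);                  % index of the best match
    predictedIdentity = labels(nearest);        % identity of that training image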

Problem 2: Recognizing Handwritten Digits with Neural Nets

This problem is described in the Google doc named Assignment 4, Problem 2: Neural Networks in our shared Google folder.