In this project, I applied image processing techniques to light fields. I used the Tarot (coarse) dataset and the Jelly Beans dataset from The (New) Stanford Light Field Archive. Each dataset contains 289 images of a single subject, taken from 289 camera positions arranged in a 17x17 rectangular grid. The images are aligned and cropped so that the center of each image is fixed.
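As a minimal sketch of how such a dataset can be organized in memory, the code below stacks a flat list of 289 sub-aperture photos into a 17x17 grid array. The row-major ordering of the list is an assumption here, not something guaranteed by the archive's file naming.

```python
import numpy as np

def to_grid(images, rows=17, cols=17):
    # images: list of rows*cols aligned sub-aperture photos, each (H, W, C).
    # Assumes the list is ordered row-major by camera position.
    assert len(images) == rows * cols
    return np.stack(images).reshape(rows, cols, *images[0].shape)
```

With the grid in this shape, indexing `grid[u, v]` selects the photo taken at grid position (u, v), which makes the averaging experiments below easy to express.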
First, I averaged all of the photos together, which produces an image with a very shallow depth of field. Then, I averaged only part of the 17x17 grid of images, taking a circular disk of positions centered on the center of the grid. As the disk's size varies from large to small, the image becomes sharper. The aperture value is computed by treating the width of the grid (17) as the focal length and the disk of averaged images as the aperture.
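The disk-averaging step above can be sketched as follows, assuming the light field is stored as a `(17, 17, H, W, C)` NumPy array (the function name and interface are illustrative, not the original code):

```python
import numpy as np

def disk_average(grid, radius):
    # grid: (17, 17, H, W, C) array of sub-aperture images.
    # Average only the images whose grid position lies within `radius`
    # of the grid center; a smaller disk mimics a smaller aperture,
    # so the result has a deeper depth of field.
    rows, cols = grid.shape[:2]
    cr, cc = (rows - 1) / 2.0, (cols - 1) / 2.0
    selected = [grid[u, v]
                for u in range(rows) for v in range(cols)
                if (u - cr) ** 2 + (v - cc) ** 2 <= radius ** 2]
    return np.mean(selected, axis=0)
```

With `radius` large enough to cover the whole grid this reduces to the full average (shallowest depth of field); with `radius = 0` it returns the single central image (sharpest everywhere).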
Next, I tried adjusting the focus of the images. Each image in the dataset comes with a camera offset (x, y) in units of millimeters. I shifted each image by its camera offset (x, y), multiplied by a constant factor, before averaging. By varying the constant factor from -0.7 to +0.7, I could move the focal plane of the resulting image.
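The shift-and-average refocusing can be sketched as below. This is a simplified version under two assumptions I am making for illustration: offsets are taken relative to the central camera, and shifts are rounded to whole pixels via `np.roll` (a real implementation would interpolate for subpixel shifts).

```python
import numpy as np

def refocus(grid, offsets, alpha):
    # grid:    (U, V, H, W, C) sub-aperture images.
    # offsets: (U, V, 2) camera (x, y) positions in mm.
    # Shift each image by alpha * (its offset relative to the central
    # camera), then average; varying alpha moves the focal plane.
    U, V = grid.shape[:2]
    center = offsets[U // 2, V // 2]
    out = np.zeros(grid.shape[2:], dtype=np.float64)
    for u in range(U):
        for v in range(V):
            dx, dy = alpha * (offsets[u, v] - center)
            # Integer shift: x moves columns (axis 1), y moves rows (axis 0).
            out += np.roll(grid[u, v],
                           (int(round(dy)), int(round(dx))), axis=(0, 1))
    return out / (U * V)
```

With `alpha = 0` no image is shifted, so the result is just the plain average from the first experiment; sweeping `alpha` through a range like -0.7 to +0.7 produces the refocusing animation.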
Here are two videos that demonstrate these effects.
I also ran the algorithms on another dataset.
I wasn’t sure how to implement the refocusing until I tried moving my own head around in a rectangular grid pattern. Sometimes it is much easier to visualize an algorithm when you act it out yourself.