A light field camera, also known as a plenoptic camera, captures information about the light field emanating from a scene; that is, the intensity of light in the scene, and also the direction in which the light rays are traveling through space. This contrasts with a conventional camera, which records only light intensity.
One type of light field camera uses an array of microlenses placed in front of an otherwise conventional image sensor to sense intensity, color, and directional information. Multi-camera arrays are another type of light field camera. Holograms are a type of film-based light field image.
The first light field camera was proposed by Gabriel Lippmann in 1908. He called his concept "integral photography". Lippmann's experimental results included crude integral photographs made by using a plastic sheet embossed with a regular array of microlenses, or by partially embedding very small glass beads, closely packed in a random pattern, into the surface of the photographic emulsion.
In 1992, Adelson and Wang proposed the design of a plenoptic camera that can be used to significantly reduce the correspondence problem in stereo matching. To achieve this, an array of microlenses is placed at the focal plane of the camera's main lens. The image sensor is positioned slightly behind the microlenses. Using such images, the displacement of image parts that are not in focus can be analyzed, and depth information can be extracted.
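To make the depth-extraction idea concrete, the sketch below (not Adelson and Wang's actual algorithm) shows how a disparity measured between two viewpoints maps to depth by simple pinhole triangulation; the focal length and baseline values are purely illustrative:

```python
def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Pinhole triangulation: depth = focal length * baseline / disparity."""
    return focal_px * baseline_m / disparity_px

# Illustrative numbers only: in a plenoptic camera the baseline between
# viewpoints is limited by the main lens's entrance pupil, so it is much
# smaller than in a conventional stereo rig.
depth = depth_from_disparity(disparity_px=2.0, focal_px=4000.0, baseline_m=0.001)
print(depth)  # 2.0 (meters)
```

The small baseline is why disparities between plenoptic viewpoints are tiny, and why the correspondence problem is easier than in wide-baseline stereo.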
The "standard plenoptic camera" is a standardized mathematical model used by researchers to compare different types of plenoptic (or light field) cameras. By definition, the "standard plenoptic camera" has microlenses placed one focal length away from the image plane of the sensor. Research has shown that its maximum baseline is confined to the main-lens entrance pupil size, which is small compared with stereoscopic setups. This implies that the "standard plenoptic camera" may be best suited to close-range applications, as it exhibits increased depth resolution at very close distances that can be metrically predicted from the camera's parameters.
In 2004, a team at Stanford University Computer Graphics Laboratory used a 16-megapixel camera with a 90,000-microlens array (meaning that each microlens covers about 175 pixels, and the final resolution is 90 kilopixels) to demonstrate that pictures can be refocused after they are taken.
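The resolution figures above work out as a simple budget (a back-of-the-envelope check, not code from the Stanford project):

```python
sensor_pixels = 16_000_000   # 16-megapixel sensor
microlenses = 90_000         # number of microlenses in the array

# Each microlens covers a patch of sensor pixels; those pixels record the
# directional (angular) samples behind that microlens.
pixels_per_microlens = sensor_pixels / microlenses

# The refocused output image has one pixel per microlens.
output_pixels = microlenses

print(round(pixels_per_microlens))  # 178, i.e. "about 175 pixels"
print(output_pixels)                # 90000, i.e. 90 kilopixels
```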
Lumsdaine and Georgiev described the design of a type of plenoptic camera in which the microlens array can be positioned before or behind the focal plane of the main lens. This modification samples the light field in a way that trades angular resolution for higher spatial resolution. With this design, images can be refocused after capture with much higher spatial resolution than images from the standard plenoptic camera. However, the lower angular resolution can introduce unwanted aliasing artifacts.
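The trade-off can be illustrated with rough numbers (the sensor size, microlens count, and patch size here are hypothetical choices for illustration, not figures from Lumsdaine and Georgiev's design):

```python
sensor_px = 16_000_000  # hypothetical sensor resolution
microlenses = 90_000    # hypothetical microlens count

# Standard plenoptic camera: one rendered pixel per microlens; all the
# pixels behind a microlens become angular (directional) samples.
standard_spatial = microlenses                  # 90,000 rendered pixels
standard_angular = sensor_px // microlenses     # ~178 directions per pixel

# Focused design: each microlens contributes a small patch of sensor
# pixels directly to the rendered image, leaving fewer angular samples.
patch = 7 * 7                                   # hypothetical 7x7-pixel patch
focused_spatial = microlenses * patch           # 4,410,000 rendered pixels
focused_angular = sensor_px // focused_spatial  # ~3 directions per pixel

print(standard_spatial, standard_angular)
print(focused_spatial, focused_angular)
```

The product of spatial and angular samples is bounded by the sensor's pixel count, so raising one necessarily lowers the other; the sparse angular sampling is what makes the aliasing artifacts mentioned above possible.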
A type of plenoptic camera using a low-cost printed film mask instead of a microlens array was proposed by researchers at MERL in 2007. This design overcomes several limitations of microlens arrays, such as chromatic aberrations and loss of boundary pixels, and allows higher-spatial-resolution photos to be captured. However, the mask-based design reduces the amount of light that reaches the image sensor compared with cameras based on microlens arrays.
Plenoptic cameras are well suited to imaging fast-moving objects, where autofocus may not work well, and to imaging objects where autofocus is not affordable or usable, such as with security cameras. A recording from a security camera based on plenoptic technology could be used to produce an accurate 3D model of a subject.
Lytro was founded by Stanford University Computer Graphics Laboratory alumnus Ren Ng to commercialize the light field camera he developed as a graduate student there. Lytro has developed consumer light field digital cameras capable of capturing images using a plenoptic technique.
Pelican Imaging has thin multi-camera array systems intended for consumer electronics. Pelican's systems use from 4 to 16 closely spaced micro-cameras instead of a microlens array over a single image sensor. Nokia invested in Pelican Imaging to produce a plenoptic camera system with a 16-lens array, expected to be implemented in Nokia smartphones in 2014. More recently, Pelican has moved to designing supplementary cameras that add depth-sensing capabilities to a device's main camera, rather than stand-alone array cameras.
The Adobe light field camera is a prototype 100-megapixel camera that takes a three-dimensional photo of the scene in focus using 19 uniquely configured lenses. Each lens takes a 5.2-megapixel photo of the entire scene around the camera, and each image can be refocused later in any way.
The CAFADIS camera is a plenoptic camera developed by the University of La Laguna (Spain). CAFADIS stands (in Spanish) for phase-distance camera, since it can be used for distance and optical wavefront estimation. From a single shot it can produce several images refocused at different distances, depth maps, all-in-focus images, and stereo pairs. A similar optical design can also be used in adaptive optics in astrophysics, to correct the aberrations caused by atmospheric turbulence in telescope images. To perform these tasks, different algorithms, running on GPUs and FPGAs, operate on the raw image captured by the camera.
Mitsubishi Electric Research Laboratories' (MERL) light field camera is based on the principle of optical heterodyning and uses a printed film (mask) placed close to the sensor. Any hand-held camera can be converted into a light field camera with this technology by simply inserting a low-cost film mask on top of the sensor. A mask-based design avoids the problem of loss of resolution, since a high-resolution photo can be generated for the focused parts of the scene.
The modification of standard digital cameras requires little more than the capacity to produce suitable sheets of microlens material; hence, a number of hobbyists have produced cameras whose images can be processed to give either selective depth of field or directional information.
Stanford University Computer Graphics Laboratory has developed a light field microscope using a microlens array similar to the one used in the light field camera developed by the lab. The prototype is built around a Nikon Eclipse transmitted-light/wide-field fluorescence microscope and standard CCD cameras. Light field capturing ability is obtained by a module containing a microlens array and other optical components placed in the light path between the objective lens and camera, with the final multifocused image rendered using deconvolution. A later version of the prototype added a light field illumination system consisting of a video projector (allowing computational control of illumination) and a second microlens array in the illumination light path of the microscope. The addition of a light field illumination system allowed both additional types of illumination (such as oblique illumination and quasi-dark-field) and correction of optical aberrations.