The depth of field (DoF) effect is a useful tool in photography and cinematography because of its aesthetic value. However, capturing and displaying dynamic DoF effects was until recently a capability unique to expensive and bulky movie cameras. We propose a computational approach to generating realistic DoF effects on mobile devices such as tablets. We first calibrate the rear-facing stereo cameras and rectify the stereo image pairs through the FCam API, then generate a low-resolution disparity map using graph-cuts stereo matching and upsample it via joint bilateral upsampling. Next, we generate a synthetic light field by warping the raw color image to nearby viewpoints, according to the corresponding values in the upsampled high-resolution disparity map. Finally, we render dynamic DoF effects on the tablet screen with light field rendering. The user can easily capture and generate the desired DoF effects with arbitrary aperture sizes and focal depths using the tablet only, with no additional hardware or software required. The system has been tested in a variety of environments and produced satisfactory results in subjective evaluation tests.
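The joint-bilateral-upsampling step described above can be sketched as follows. This is a minimal, unoptimized illustration of the general technique (after Kopf et al.), not the authors' implementation: spatial weights are taken on the low-resolution disparity grid while range weights come from the high-resolution color guide, so disparity edges snap to image edges. All parameter names and defaults here are illustrative assumptions.

```python
import numpy as np

def joint_bilateral_upsample(depth_lo, guide_hi, sigma_s=1.0, sigma_r=0.1, radius=2):
    """Upsample a low-res disparity map using a high-res (grayscale) guide.

    depth_lo: HxW low-res disparity; guide_hi: (sH)x(sW) guide image with
    intensities in [0, 1]. Spatial weights are computed on the low-res grid,
    range weights on the high-res guide. A simplified sketch, not optimized.
    """
    h_lo, w_lo = depth_lo.shape
    h_hi, w_hi = guide_hi.shape
    out = np.zeros_like(guide_hi, dtype=np.float64)
    for y in range(h_hi):
        for x in range(w_hi):
            # corresponding (fractional) position on the low-res grid
            yl = y * (h_lo - 1) / max(h_hi - 1, 1)
            xl = x * (w_lo - 1) / max(w_hi - 1, 1)
            num = den = 0.0
            for qy in range(max(0, int(yl) - radius), min(h_lo, int(yl) + radius + 1)):
                for qx in range(max(0, int(xl) - radius), min(w_lo, int(xl) + radius + 1)):
                    # spatial Gaussian weight on the low-res grid
                    ws = np.exp(-((qy - yl) ** 2 + (qx - xl) ** 2) / (2 * sigma_s ** 2))
                    # range weight: compare the guide at p with the guide at
                    # q's corresponding high-res position
                    gy = min(h_hi - 1, round(qy * (h_hi - 1) / max(h_lo - 1, 1)))
                    gx = min(w_hi - 1, round(qx * (w_hi - 1) / max(w_lo - 1, 1)))
                    wr = np.exp(-(guide_hi[y, x] - guide_hi[gy, gx]) ** 2 / (2 * sigma_r ** 2))
                    num += ws * wr * depth_lo[qy, qx]
                    den += ws * wr
            out[y, x] = num / den
    return out
```

Because the range weight suppresses contributions from low-res samples whose guide color differs, the upsampled disparity stays sharp across object boundaries instead of blurring them as plain bilinear interpolation would.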
Multi-flash (MF) photography offers a number of advantages over regular photography including removing the
effects of illumination, color and texture as well as highlighting occlusion contours. Implementing MF photography
on mobile devices, however, is challenging due to their restricted form factors, limited synchronization
capabilities, low computational power and limited interface connectivity. In this paper, we present a novel mobile
MF technique that overcomes these limitations and achieves performance comparable to that of conventional MF. We
first construct a mobile flash ring using four LED lights and design a special mobile flash-camera synchronization
unit. The mobile device’s own flash first triggers the flash ring via an auxiliary photocell. The mobile flashes are
then triggered consecutively in sync with the mobile camera’s frame rate, to guarantee that each image is captured
with only one LED flash on. To process the acquired MF images, we further develop a class of fast mobile
image processing techniques for image registration, depth edge extraction, and edge-preserving smoothing. We
demonstrate our mobile MF on a number of mobile imaging applications, including occlusion detection, image
thumbnailing, image abstraction and object category classification.
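The depth-edge extraction mentioned above exploits the fact that each flash casts a thin shadow on the far side of every depth discontinuity. A minimal sketch of the general multi-flash idea (after Raskar et al., not the authors' exact pipeline): form a shadow-free composite as the per-pixel maximum over flash images, divide each flash image by it, and look for lit-to-shadow transitions along each flash's epipolar direction. The one-pixel step detector and threshold below are simplifying assumptions.

```python
import numpy as np

def depth_edges(flash_images, flash_dirs, ratio_thresh=0.7):
    """Sketch of multi-flash depth-edge extraction.

    flash_images: list of HxW grayscale images, one per flash.
    flash_dirs:   list of (dy, dx) integer steps pointing from each flash
                  into the scene (the direction shadows are cast).
    Returns a boolean HxW depth-edge map.
    """
    stack = np.stack(flash_images).astype(np.float64)
    i_max = stack.max(axis=0) + 1e-8           # shadow-free composite
    edges = np.zeros(stack.shape[1:], dtype=bool)
    for img, (dy, dx) in zip(stack, flash_dirs):
        ratio = img / i_max                    # shadowed pixels -> ratio near 0
        # a depth edge shows up as a step from lit (ratio ~ 1) to shadowed
        # (ratio low) along the direction away from this flash
        shifted = np.roll(ratio, (-dy, -dx), axis=(0, 1))
        edges |= (ratio > ratio_thresh) & (shifted < ratio_thresh)
    return edges
```

Combining the per-flash edge maps with OR recovers contours shadowed by at least one flash, which is why a ring of four LEDs around the lens covers edges of all orientations.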
The Lytro camera is the first implementation of a plenoptic camera for the consumer market. We consider it a successful
example of the miniaturization aided by the increase in computational power characterizing mobile computational
photography. The plenoptic camera approach to radiance capture uses a microlens array as an imaging system focused
on the focal plane of the main camera lens. This paper analyzes the performance of the Lytro camera from a system-level
perspective, treating the Lytro camera as a black box, and relies on our interpretation of the Lytro image data saved by the
camera. We present our findings on the Lytro camera's file structure, image calibration, and image
rendering; in this context, artifacts and final image resolution are discussed.
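With the microlens array treated as a relay imaging system focused on the main lens's focal plane, rendering reduces to assembling patches cropped from the individual microimages. The sketch below is an assumed, minimal illustration of that style of rendering on a synthetic raw mosaic; the pitch and patch values are hypothetical and do not reflect Lytro's actual sensor geometry or calibration data.

```python
import numpy as np

def render_focused_plenoptic(raw, micro=10, patch=4):
    """Minimal focused-plenoptic rendering sketch.

    raw:   grayscale sensor image whose height/width are multiples of
           `micro`, the microimage pitch in pixels.
    patch: size of the central crop taken from each microimage; varying
           it shifts the plane rendered in focus.
    """
    h, w = raw.shape
    ny, nx = h // micro, w // micro
    out = np.zeros((ny * patch, nx * patch), dtype=raw.dtype)
    off = (micro - patch) // 2                 # center-crop offset
    for j in range(ny):
        for i in range(nx):
            mi = raw[j * micro:(j + 1) * micro, i * micro:(i + 1) * micro]
            out[j * patch:(j + 1) * patch, i * patch:(i + 1) * patch] = \
                mi[off:off + patch, off:off + patch]
    return out
```

Because each microimage contributes a patch rather than a single pixel, the trade-off between patch size and overlap directly determines the final image resolution and the tiling artifacts discussed above.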
Recent realizations of hand-held plenoptic cameras have given rise to previously unexplored effects in photography.
Designing a mobile phone plenoptic camera is becoming feasible with the significant increase of computing
power of mobile devices and the introduction of System on a Chip. However, capturing high numbers of views is
still impractical due to special requirements such as an ultra-thin form factor and low cost. In this paper, we analyze a
mobile plenoptic camera solution with a small number of views. Such a camera can produce a refocusable high
resolution final image if a depth map is generated for every pixel in the sparse set of views. With the captured
multi-view images, the main obstacle to recovering high-resolution depth is occlusion. To resolve these occlusions robustly, we
first analyze the behavior of pixels in such situations. We show that even under severe occlusion, one can still
distinguish different depth layers based on statistics. We estimate the depth of each pixel by discretizing the
space in the scene and conducting plane sweeping. Specifically, for each given depth, we gather all corresponding
pixels from other views and model the in-focus pixels as a Gaussian distribution. We show how it is possible to
distinguish occluded pixels from in-focus pixels in order to find the correct depths. Final depth maps are
demonstrated on real scenes captured by a mobile plenoptic camera.
Conference Committee Involvement (2)
Digital Photography and Mobile Imaging XI
9 February 2015 | San Francisco, California, United States
Digital Photography X
3 February 2014 | San Francisco, California, United States