Many factors lead to spatially varying blur kernels in a blurred image, such as camera shake, moving objects, and scene depth variation. Traditional camera shake removal methods ignore either the influence of varying depth or object motion in dynamic scenes, while methods not limited to camera shake typically make simplistic assumptions about the camera motion trajectory. We consider these factors in a unified framework, aided by an alternate-exposure capture strategy and simultaneously recorded inertial sensor readings. The inertial measurements relate the long-exposure blurred image to the preceding and succeeding short-exposure noisy images. This special exposure arrangement effectively addresses the problem inherent in reconstructing camera motion from inertial measurements. In addition, the noisy image pair bracketing the blurred image is used for motion detection and initial depth map estimation, freeing the proposed method from user interaction and expensive additional devices. In contrast to previous methods that parametrize the motion blur of the moving foreground layer and the static background layer independently, we exploit the fact that camera shake has a global influence to decompose the motion of the foreground layer, establishing a tighter constraint between the motions of the two layers. Given the motion and image data, we propose a single energy model and minimize it by alternating optimization to estimate the spatially varying motion blur and the latent sharp image. Comparative experiments demonstrate that our method outperforms conventional camera motion deblurring and object deblurring methods on both synthetic and real scenes.
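The alternating-optimization idea of minimizing one energy over both the blur and the latent image can be sketched with a toy 1-D blind deconvolution. Everything here is an illustrative assumption of ours (signal, kernel size, ridge weight `eps`), not the paper's actual model; the point is only that each step solves its least-squares subproblem exactly, so the energy never increases.

```python
import numpy as np

# Toy 1-D blind deconvolution: alternately minimize
#   E(x, k) = ||y - k*x||^2 + eps*||x||^2
# over the kernel k and the latent signal x. Each update is an exact
# least-squares minimizer of its subproblem, so E is non-increasing.
def conv_mat(v, m):
    # Toeplitz matrix A with A @ u == np.convolve(v, u) for len(u) == m
    n = len(v)
    A = np.zeros((n + m - 1, m))
    for j in range(m):
        A[j:j + n, j] = v
    return A

x_true = np.zeros(20)
x_true[[4, 9, 15]] = [1.0, -0.5, 0.8]        # sparse "sharp" signal (assumed)
k_true = np.array([0.25, 0.5, 0.25])         # ground-truth blur kernel (assumed)
y = np.convolve(x_true, k_true)              # blurred observation

eps = 1e-3
x = y[:20].copy()                            # initialize latent with the observation
k = np.array([0.0, 1.0, 0.0])                # initialize kernel with the identity

def energy(x, k):
    return np.sum((y - np.convolve(x, k)) ** 2) + eps * np.sum(x ** 2)

e0 = energy(x, k)
for _ in range(10):
    # kernel step: exact least squares with x fixed
    k = np.linalg.lstsq(conv_mat(x, 3), y, rcond=None)[0]
    # latent step: ridge-regularized least squares with k fixed
    K = conv_mat(k, 20)
    x = np.linalg.solve(K.T @ K + eps * np.eye(20), K.T @ y)
e1 = energy(x, k)                            # e1 <= e0 by construction
```

The monotone decrease of the energy is what makes such alternating schemes well behaved, even though the joint problem is nonconvex.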
This paper addresses the problem of removing spatially varying blur caused by camera motion, with the help of inertial measurements recorded during the exposure time. Using a projective motion blur model, the camera motion is viewed as a sequence of projective transformations on the image plane, each of which can be estimated from the corresponding inertial data sample. Unfortunately, measurement noise leads to temporally increasing drift in the estimated motion trajectory and can significantly degrade the quality of recovered images. To address this issue, this paper captures a short sequence of images with different exposure settings, along with the recorded inertial data. A special arrangement of exposure settings anchors the correct position of the camera trajectory, followed by a drift correction step that makes use of the sharp image structures preserved in one of the captured images. The effectiveness of our approach is demonstrated through comparison experiments on both synthetic and real images.
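The temporally increasing drift can be illustrated by dead-reckoning a camera position from noisy accelerometer samples; the sampling rate, exposure length, and noise level below are our own assumptions, not measured values from the paper.

```python
import numpy as np

# Illustrative sketch: double-integrating noisy accelerometer samples
# makes the position estimate drift, with error growing over time.
dt = 0.001                      # assumed 1 kHz IMU sampling interval
n = 200                         # samples covering an assumed 0.2 s exposure
trials = 200

early_drift, end_drift = [], []
for seed in range(trials):
    rng = np.random.default_rng(seed)
    acc = rng.normal(0.0, 0.05, n)        # camera is actually still; pure noise
    vel = np.cumsum(acc) * dt             # integrate acceleration -> velocity
    pos = np.cumsum(vel) * dt             # integrate velocity -> position
    early_drift.append(abs(pos[n // 10])) # position error early in the exposure
    end_drift.append(abs(pos[-1]))        # position error at the end

mean_early = float(np.mean(early_drift))
mean_end = float(np.mean(end_drift))      # drift std grows roughly like t**1.5
```

Because the error is far larger at the end of the exposure than near its start, anchoring the trajectory with short-exposure frames and correcting the drift afterward, as the abstract describes, directly targets the dominant error source.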
The electronic rolling shutter mechanism found in many digital cameras may result in spatially varying blur kernels if camera motion occurs during an imaging exposure. However, existing deblurring algorithms cannot remove the blur in this case, since the blurred image typically does not meet the assumptions embedded in these algorithms. This paper attempts to address the problem of modeling and correcting non-uniform image blur caused by the rolling shutter effect. We introduce a new operator and a mask matrix into the projective motion blur model to describe the blurring process of each row in the image. Based on this modified geometric model, an objective function is formulated and optimized in an alternating scheme. In addition, noisy accelerometer data along the x and y directions is incorporated as a regularization term to constrain the solution. The effectiveness of this approach is demonstrated by experimental results on synthetic and real images.
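The row-wise blur model can be visualized with a toy simulation in which each row's "homography" degenerates to a horizontal translation determined by that row's readout time, and a mask matrix selects the row; the image size and translation-only motion are illustrative assumptions of ours, not the paper's general model.

```python
import numpy as np

# Toy rolling-shutter model: each image row i is exposed in its own time
# window, so it sees its own camera pose. The full image is the sum of
# per-row masked copies of the warped latent image.
H, W = 6, 8
latent = np.zeros((H, W))
latent[:, W // 2] = 1.0             # a vertical edge in the sharp image

row_shift = np.arange(H)            # later readout -> larger shift (toy model)
observed = np.zeros_like(latent)
for i in range(H):
    warped = np.roll(latent, row_shift[i] % W, axis=1)  # translation-only "homography"
    mask = np.zeros((H, 1))
    mask[i] = 1.0                                       # mask matrix: keep only row i
    observed += mask * warped

# The vertical edge becomes a slanted edge: each row's peak sits at a
# different column, the signature of rolling-shutter distortion.
peaks = observed.argmax(axis=1)
```

With real camera motion, each row would carry a full projective transformation rather than a shift, but the row-by-row composition via mask matrices is the same.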
Camera motion blur is a common problem in low-light imaging applications. It is difficult to apply image restoration techniques without an accurate blur kernel. Recently, inertial sensors have been successfully utilized to estimate the blur function. However, the effectiveness of these restoration algorithms has been limited by a lack of access to unprocessed raw image data obtained directly from the Bayer image sensor. In this work, raw CFA image data is acquired in conjunction with 3-axis acceleration data using a custom-built imaging system. The raw image data records the redistribution of light but is affected by camera motion and the rolling shutter mechanism. Using the acceleration data, the spread of light to neighboring pixels can be determined. We propose a new approach that jointly performs deblurring and demosaicking of the raw image, adopting an edge-preserving sparse prior in a MAP framework. The improvements brought by our algorithm are demonstrated by processing data collected from the imaging system.
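The shape of such a MAP objective can be sketched in 1-D: the data term compares the blurred latent signal only at sensor-sampled positions (standing in for the CFA mask), and a hyper-Laplacian term plays the role of the edge-preserving sparse prior. The signal, kernel, mask pattern, and weights are our own assumptions, not the paper's formulation.

```python
import numpy as np

# Schematic 1-D MAP energy for joint deblurring + demosaicking:
#   E(L) = sum(mask * (K*L - B)^2) + lam * sum(|grad L|^alpha)
def energy(latent, observed, mask, kernel, lam=0.1, alpha=0.8):
    blurred = np.convolve(latent, kernel, mode="same")   # motion blur K*L
    data = np.sum(mask * (blurred - observed) ** 2)      # fidelity on sampled pixels only
    grad = np.diff(latent)
    prior = lam * np.sum(np.abs(grad) ** alpha)          # edge-preserving sparse prior
    return data + prior

# Toy usage: a step edge, a box blur, an every-other-pixel "CFA" mask.
latent = np.r_[np.zeros(8), np.ones(8)]
kernel = np.ones(3) / 3.0
mask = np.arange(16) % 2            # observe only alternate pixels
observed = np.convolve(latent, kernel, mode="same")

e_true = energy(latent, observed, mask, kernel)   # correct latent: small energy
e_wrong = energy(np.zeros(16), observed, mask, kernel)  # flat guess: large data term
```

A MAP estimate minimizes this energy over the latent image; the fractional exponent (alpha < 1) keeps the edge sharp instead of smearing it, which is why such priors are called edge-preserving.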