Digital cameras under dark illumination produce artifacts such as motion blur in a long-exposure shot or severe noise corruption in a short-exposure (high-ISO) shot. To suppress such artifacts effectively, multi-frame fusion approaches that use multiple short-exposure images have been studied actively, and they have recently been applied to various consumer digital cameras for practical still-shot stabilization. However, such approaches incur high computational complexity and cost to perform both multi-frame noise filtering and brightness/color appearance restoration well from a set of input images acquired in a harsh low-light situation.
In this paper, we propose a new fusion-based low-light stabilization approach, which takes as input one properly/long-exposed blurry image as well as multiple short-exposure noisy images. First, coarse-to-fine motion-compensated noise filtering is performed to obtain a clean image from the multiple short-exposure images. Then, online low-light image restoration follows to obtain a good visual appearance from the denoised image using the blurry long-exposure input image. More specifically, the noise filtering is conducted by simple block-wise temporal averaging based on between-frame motion information, which provides a denoising result with even better detail preservation. Our simulation and real-scene tests show the potential of the proposed algorithm for fast and effective low-light stabilization on a programmable computing platform.
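The block-wise temporal averaging step can be sketched as follows. This is a minimal illustration under simplifying assumptions (grayscale frames, a brute-force SAD block-matching search standing in for the between-frame motion estimation, and trailing remainder pixels ignored), not the paper's actual implementation:

```python
import numpy as np

def match_block(ref_block, frame, y, x, search=4):
    """Find the best-matching block position in `frame` near (y, x) by SAD."""
    b = ref_block.shape[0]
    H, W = frame.shape
    best, best_pos = None, (y, x)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy and yy + b <= H and 0 <= xx and xx + b <= W:
                sad = np.abs(frame[yy:yy + b, xx:xx + b] - ref_block).sum()
                if best is None or sad < best:
                    best, best_pos = sad, (yy, xx)
    return best_pos

def temporal_average_denoise(frames, block=8, search=4):
    """Block-wise motion-compensated temporal averaging.

    frames: list of 2-D float arrays; frames[0] is the reference.
    Each reference block is averaged with its motion-compensated
    counterparts from the other frames, suppressing noise while
    avoiding the detail loss of spatial smoothing.
    """
    ref = frames[0]
    H, W = ref.shape
    out = np.zeros_like(ref)
    for y in range(0, H - block + 1, block):
        for x in range(0, W - block + 1, block):
            ref_block = ref[y:y + block, x:x + block]
            acc, n = ref_block.copy(), 1
            for f in frames[1:]:
                yy, xx = match_block(ref_block, f, y, x, search)
                acc += f[yy:yy + block, xx:xx + block]
                n += 1
            out[y:y + block, x:x + block] = acc / n
    return out
```

Averaging n motion-aligned blocks reduces the noise variance by roughly a factor of n when the noise is independent across frames, which is why the result preserves detail better than single-frame spatial filtering.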
Motion blur is usually modeled as the convolution of a latent image with a motion blur kernel, and most current deblurring methods restrict motion blurs to be uniform under this convolution model. However, real motion blurs are often non-uniform, and consequently such methods may fail to remove real motion blurs caused by camera shake. To utilize the existing methods in practice, it is necessary to understand how well uniform motions (i.e., translations) can approximate real camera shakes. In this paper, we analyze the displacement that real camera motions induce on image pixels and present the practical coverage of uniform motions in approximating complicated real camera shakes. We first analyze mathematically the difference in motion displacement between the optical axis and the image boundary under real camera shakes, and then derive the practical coverage of uniform motion deblurring methods when they are applied to real blurred images. The coverage can effectively guide how far one can rely on existing uniform motion deblurring methods, and highlights the need to model real camera shakes accurately rather than assuming uniform motions.
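The gap between a uniform (translational) approximation and a real rotational shake can be illustrated numerically. The sketch below, using made-up pinhole intrinsics chosen purely for illustration, computes the pixel displacement induced by a pure camera rotation via the homography H = K R K^{-1}, at the optical axis versus at an image corner:

```python
import numpy as np

def rotation_homography(K, R):
    """Homography induced by a pure camera rotation: H = K R K^{-1}."""
    return K @ R @ np.linalg.inv(K)

def displacement(H, pt):
    """Pixel displacement (du, dv) of point pt under homography H."""
    p = np.array([pt[0], pt[1], 1.0])
    q = H @ p
    return q[:2] / q[2] - p[:2]

# Assumed intrinsics: focal length 800 px, principal point (320, 240).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# A 0.5-degree in-plane (roll) rotation about the optical axis.
t = np.deg2rad(0.5)
R = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0, 0.0, 1.0]])
H = rotation_homography(K, R)

d_center = displacement(H, (320.0, 240.0))  # at the optical axis
d_corner = displacement(H, (0.0, 0.0))      # at an image corner
```

Under this roll the principal point does not move while the corner shifts by a few pixels, so no single global translation matches both; the mismatch grows with distance from the optical axis, which is exactly what limits the coverage of uniform motion deblurring.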
The Lucas-Kanade algorithm and its variants have been successfully used in numerous computer vision tasks that include image registration as a component. In this paper, we propose a Lucas-Kanade based
image registration method using camera parameters. We decompose a homography into camera intrinsic and
extrinsic parameters, and assume that the intrinsic parameters are given, e.g., from the EXIF information of
a photograph. We then estimate only the extrinsic parameters for image registration, considering two types of
camera motion: 3D rotations, and full 3D motions with both translations and rotations. As the known information
about the camera is fully utilized, the proposed method can perform image registration more reliably. In addition,
as the number of extrinsic parameters is smaller than the number of homography elements, our method runs
faster than the Lucas-Kanade based registration method that estimates a homography directly.
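A minimal sketch of the rotation-only case follows, assuming known intrinsics K and a Gauss-Newton update over the three axis-angle rotation parameters. A numerical Jacobian and brute-force bilinear warping are used here for brevity; the paper's actual formulation (e.g., analytic Jacobians, inverse compositional updates) may differ:

```python
import numpy as np

def so3_exp(w):
    """Rodrigues' formula: axis-angle vector -> 3x3 rotation matrix."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    k = w / th
    Kx = np.array([[0.0, -k[2], k[1]],
                   [k[2], 0.0, -k[0]],
                   [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(th) * Kx + (1.0 - np.cos(th)) * (Kx @ Kx)

def warp_bilinear(img, H):
    """Sample img at the H-mapped coordinates of each output pixel."""
    Hh, Ww = img.shape
    ys, xs = np.mgrid[0:Hh, 0:Ww]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(Hh * Ww)])
    q = H @ pts
    u = np.clip(q[0] / q[2], 0, Ww - 1.001)
    v = np.clip(q[1] / q[2], 0, Hh - 1.001)
    x0, y0 = u.astype(int), v.astype(int)
    a, b = u - x0, v - y0
    out = ((1 - a) * (1 - b) * img[y0, x0] + a * (1 - b) * img[y0, x0 + 1]
           + (1 - a) * b * img[y0 + 1, x0] + a * b * img[y0 + 1, x0 + 1])
    return out.reshape(Hh, Ww)

def lk_rotation(template, image, K, iters=30):
    """Gauss-Newton Lucas-Kanade over the 3 rotation parameters only.

    Estimates w such that warping `image` by H = K exp([w]x) K^{-1}
    matches `template`; only 3 unknowns instead of 8 homography elements.
    """
    Kinv = np.linalg.inv(K)
    w, eps = np.zeros(3), 1e-5
    for _ in range(iters):
        H = K @ so3_exp(w) @ Kinv
        r = (warp_bilinear(image, H) - template).ravel()
        J = np.zeros((r.size, 3))
        for i in range(3):  # numerical Jacobian, one column per parameter
            dw = w.copy()
            dw[i] += eps
            Hd = K @ so3_exp(dw) @ Kinv
            J[:, i] = ((warp_bilinear(image, Hd) - template).ravel() - r) / eps
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        w = w + step
        if np.linalg.norm(step) < 1e-9:
            break
    return w
```

Each Gauss-Newton step solves a least-squares system with only three columns, which is the source of the speed advantage over estimating all eight homography elements.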