Image demosaicing and denoising are two important processes in the ISP pipeline of mobile cameras: almost all mobile cameras in use today rely on a demosaicing algorithm to produce color images, and the small sensor area of mobile cameras leads to a low signal-to-noise ratio. Over the years, a considerable number of sequential demosaicing and denoising methods have been proposed, but they struggle with estimating the noise distribution and adjusting hyper-parameters to balance demosaicing against denoising. Simultaneous demosaicing and denoising methods address these problems, but they lack design guidelines for mobile cameras. We propose a Plug-and-Play (PnP) demosaicing and denoising method for mobile cameras. Our method is built on a PnP demosaicing framework derived from variable splitting theory, into which any color demosaicing algorithm (e.g., bilinear, Malvar) can be plugged. We train an ISO-conditioned denoiser for the framework and apply it iteratively. The ISO-conditioned denoiser removes not only artifacts introduced by the demosaicing procedure itself but also noise from the camera sensor. By conditioning the denoiser on the ISO setting, our method remains adaptive and robust across capturing environments and camera settings. It has only two hyper-parameters to tune, which eases the hyper-parameter adjustment required by sequential demosaicing and denoising methods. Extensive experiments on synthetic datasets show that our method outperforms sequential demosaicing and denoising methods and is practical for mobile cameras.
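As a rough illustration of how such a variable-splitting loop can be organized, the sketch below alternates a plugged-in demosaicing step with an ISO-conditioned denoising step. The `demosaic` and `denoiser` callables, the penalty weight `mu`, and the iteration count are hypothetical placeholders, not the paper's exact update rules.

```python
import numpy as np

def pnp_demosaic_denoise(raw_bayer, demosaic, denoiser, iso,
                         mu=0.5, num_iters=8):
    """Illustrative half-quadratic-splitting-style loop. `demosaic` can be
    any plugged-in algorithm (e.g., bilinear, Malvar); `denoiser(img, iso)`
    is an ISO-conditioned denoiser. `mu` and `num_iters` stand in for the
    two hyper-parameters mentioned in the abstract."""
    x = demosaic(raw_bayer)                      # initial color estimate
    for _ in range(num_iters):
        # Data step: pull the current estimate back toward the
        # demosaiced observation to stay consistent with the Bayer samples.
        z = (demosaic(raw_bayer) + mu * x) / (1.0 + mu)
        # Prior step: the ISO-conditioned denoiser acts as the image prior,
        # suppressing both sensor noise and demosaicing artifacts.
        x = denoiser(z, iso)
    return x
```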
KEYWORDS: Image enhancement, RGB color model, High dynamic range imaging, Visualization, Visual process modeling, Image quality, Visibility, Image analysis
Images captured in the wild often suffer from low contrast and poor visual quality due to bad imaging conditions such as low light or hazy weather. Many methods have been proposed based on traditional image enhancement models, typically the dehazing model and the Retinex model, but their scopes are limited and specific. In this paper, we propose a simple but effective method that enhances image contrast while preserving good visual quality. By examining traditional image enhancement models, including the dehazing model and the Retinex model, we derive a general normalized model. To preserve image details and control brightness, we introduce dual boundaries, a dark boundary and a bright boundary, to handle low-light and bright-light conditions. Once the dark and bright boundaries are obtained, the images are enhanced accordingly. Experiments show that our method can be applied under many bad imaging conditions while maintaining good performance.
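A minimal sketch of a dual-boundary contrast stretch is shown below, assuming an H×W×3 RGB input and approximating the dark and bright boundaries by local min/max filtering; the window size and the boundary estimation itself are assumptions, not the paper's actual procedure.

```python
import numpy as np
from scipy.ndimage import minimum_filter, maximum_filter

def dual_boundary_enhance(img, win=15, eps=1e-3):
    """Stretch each pixel between an estimated dark boundary and bright
    boundary. Local min/max filtering is only a stand-in for the paper's
    boundary estimation."""
    img = img.astype(np.float32)
    dark = minimum_filter(img, size=(win, win, 1)).min(axis=2, keepdims=True)
    bright = maximum_filter(img, size=(win, win, 1)).max(axis=2, keepdims=True)
    # Normalized model: map each pixel between its dark and bright boundary.
    out = (img - dark) / np.maximum(bright - dark, eps)
    return np.clip(out, 0.0, 1.0)
```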
A generative adversarial network denoising algorithm that combines three loss functions is proposed to avoid the loss of image details during denoising. The mean squared error loss makes the denoised results similar to the original images, the perceptual loss captures image semantic information, and the adversarial loss makes the images more realistic. The algorithm uses a deep residual network, a densely connected convolutional network, and a wide-and-shallow network as alternatives for the replaceable module of the network. The results show that all three tested networks preserve more image detail and achieve a higher peak signal-to-noise ratio while removing image noise. Among them, the wide-and-shallow network, which uses fewer layers, larger convolution kernels, and more feature maps, achieves the best result.
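The three-term generator objective can be sketched as follows; the frozen feature extractor `perceptual_net`, the loss weights, and the logit-based adversarial term are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def generator_loss(denoised, clean, disc_fake_logits, perceptual_net,
                   w_mse=1.0, w_perc=0.1, w_adv=1e-3):
    """Combine MSE, perceptual, and adversarial terms for the generator.
    `perceptual_net` is assumed to be a frozen feature extractor (e.g., VGG);
    `disc_fake_logits` are the discriminator's logits for the denoised images."""
    # Pixel-wise fidelity: keep the denoised output close to the clean image.
    mse = F.mse_loss(denoised, clean)
    # Perceptual term: match deep feature responses to preserve semantics.
    perc = F.mse_loss(perceptual_net(denoised), perceptual_net(clean))
    # Adversarial term: push the discriminator to score the output as real.
    adv = F.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.ones_like(disc_fake_logits))
    return w_mse * mse + w_perc * perc + w_adv * adv
```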
When taking pictures in low-light scenes, insufficient light often poses the following problem: with a short exposure, the image tends to be dim and noisy but has sharp outlines, whereas with a longer exposure, the image captures more color and detail but contains partially blurred regions. In this common situation, neither image is good enough on its own: good brightness and color information are retained in the long-exposure image, while sharp outlines are retained in the short-exposure one. In this paper, we propose a fusion method based on wavelet decomposition for such low-light image pairs. We first decompose the original image pair into different frequency subbands. We then compute importance weight maps according to the difference between corresponding subbands; to suppress artifacts and ghosting, the weight maps are computed with a Gaussian model. Finally, the subband coefficients are blended into a high-quality fused image. Experimental results show that the proposed method effectively preserves the sharp edges of the short-exposure image while maintaining the color, brightness, and details of the long-exposure image.
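A minimal sketch of this kind of per-subband fusion is given below, assuming same-sized grayscale float inputs and using a smooth logistic weight on the coefficient difference as a stand-in for the paper's Gauss-model weight maps; the wavelet, decomposition level, and weighting are all assumptions.

```python
import numpy as np
import pywt

def fuse_exposure_pair(short_img, long_img, wavelet="db2", level=3, sigma=0.2):
    """Blend the wavelet subbands of a short/long exposure pair."""
    cs = pywt.wavedec2(short_img, wavelet, level=level)
    cl = pywt.wavedec2(long_img, wavelet, level=level)
    fused = [cl[0]]                       # approximation band: keep long-exposure brightness
    for ds, dl in zip(cs[1:], cl[1:]):    # detail subbands at each level
        bands = []
        for s, l in zip(ds, dl):
            diff = np.abs(s) - np.abs(l)
            # Smooth weighting avoids hard switching between the two images,
            # which helps suppress artifacts and ghosting.
            w = 1.0 / (1.0 + np.exp(-diff / sigma))
            bands.append(w * s + (1.0 - w) * l)
        fused.append(tuple(bands))
    return pywt.waverec2(fused, wavelet)
```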