When taking pictures in low-light scenes, insufficient light often poses the following problem: with a short exposure setting, the image tends to be dim and noisy but has sharp outlines, while with a longer exposure setting, the image captures more color and detail information but contains partially blurred areas. In this very common situation, neither image is good enough on its own: good brightness and color information are retained in the long-exposure image, while sharp outlines are retained in the short-exposure one. In this paper, we propose a fusion method based on wavelet decomposition for such low-light image pairs. We first decompose the original image pair into different frequency subbands. We then compute importance weight maps according to the difference values between corresponding subbands; to suppress artifacts and ghosting, the weight maps are computed with a Gaussian model. Finally, the subband coefficients are blended into a high-quality fused image. Experimental results show that the proposed method effectively preserves the sharp edges of the short-exposure image while maintaining the color, brightness, and details of the long-exposure image.
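The decompose-weight-blend pipeline described above can be sketched in a minimal form. This is not the paper's implementation: it uses a single-level 2D Haar transform, takes the approximation (brightness/color) band from the long exposure, and replaces the paper's Gaussian difference-based weight maps with a simple magnitude-based soft weight on the detail subbands for brevity.

```python
import numpy as np

def haar_decompose(img):
    """Single-level 2D Haar transform: approximation + 3 detail subbands."""
    p00, p01 = img[0::2, 0::2], img[0::2, 1::2]
    p10, p11 = img[1::2, 0::2], img[1::2, 1::2]
    a = (p00 + p01 + p10 + p11) / 4  # approximation (low frequency)
    h = (p00 + p01 - p10 - p11) / 4  # horizontal detail
    v = (p00 - p01 + p10 - p11) / 4  # vertical detail
    d = (p00 - p01 - p10 + p11) / 4  # diagonal detail
    return a, h, v, d

def haar_reconstruct(a, h, v, d):
    """Exact inverse of haar_decompose."""
    out = np.zeros((a.shape[0] * 2, a.shape[1] * 2))
    out[0::2, 0::2] = a + h + v + d
    out[0::2, 1::2] = a + h - v - d
    out[1::2, 0::2] = a - h + v - d
    out[1::2, 1::2] = a - h - v + d
    return out

def fuse_pair(short_exp, long_exp, eps=1e-8):
    """Blend a short/long exposure pair in the wavelet domain."""
    sa, sh, sv, sd = haar_decompose(short_exp)
    la, lh, lv, ld = haar_decompose(long_exp)
    details = []
    for s, l in zip((sh, sv, sd), (lh, lv, ld)):
        # Soft weight favouring the larger-magnitude (sharper) coefficient;
        # the paper instead derives weights from a Gaussian of the
        # subband difference to suppress ghosting.
        w = np.abs(s) / (np.abs(s) + np.abs(l) + eps)
        details.append(w * s + (1 - w) * l)
    # Approximation band (brightness/color) comes from the long exposure.
    return haar_reconstruct(la, *details)
```

Because the Haar pair is exactly invertible, fusing an image with itself returns the image unchanged, which is a useful sanity check when experimenting with weight schemes.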
A generative adversarial network denoising algorithm using a combination of three loss functions was proposed to avoid the loss of image detail during denoising. The mean squared error loss makes the denoising results similar to the original images, the perceptual loss captures image semantic information, and the adversarial loss makes the images more realistic. The algorithm used a deep residual network, a densely connected convolutional network, and a wide, shallow network as components of the network's replaceable module. The results show that all three tested networks preserve more image detail and achieve a better peak signal-to-noise ratio while removing image noise. Among them, the wide, shallow network, which uses fewer layers, larger convolution kernels, and more feature maps, achieves the best result.
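The three-term objective described above can be written as a weighted sum. The sketch below is framework-agnostic NumPy rather than a full GAN training loop; `feature_fn` stands in for a fixed feature extractor (e.g. a pretrained network) and the weights `w_mse`, `w_perc`, `w_adv` are illustrative assumptions, not values from the paper.

```python
import numpy as np

def mse_loss(denoised, clean):
    """Pixel-wise mean squared error: pulls output toward the clean image."""
    return np.mean((denoised - clean) ** 2)

def perceptual_loss(denoised, clean, feature_fn):
    """MSE in the feature space of a fixed extractor (semantic similarity)."""
    return np.mean((feature_fn(denoised) - feature_fn(clean)) ** 2)

def adversarial_loss(disc_score):
    """Non-saturating generator loss -log D(G(x)); disc_score in (0, 1)."""
    return -np.log(disc_score + 1e-12)

def total_loss(denoised, clean, feature_fn, disc_score,
               w_mse=1.0, w_perc=0.1, w_adv=1e-3):
    """Weighted combination of the three loss terms (weights illustrative)."""
    return (w_mse * mse_loss(denoised, clean)
            + w_perc * perceptual_loss(denoised, clean, feature_fn)
            + w_adv * adversarial_loss(disc_score))
```

When the denoised output equals the clean target, the first two terms vanish and only the adversarial term remains, which is why the adversarial weight is typically kept small to avoid hallucinated texture dominating fidelity.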
Proc. SPIE. 10832, Fifth Conference on Frontiers in Optical Imaging Technology and Applications
KEYWORDS: Visual process modeling, Visualization, Image analysis, Image quality, High dynamic range imaging, Image enhancement, Visibility, RGB color model
Images captured in the wild often suffer from low contrast and poor visual quality due to bad imaging conditions such as low light or hazy weather. Many methods have been proposed based on traditional image enhancement models, typically the dehazing model and the Retinex model. However, their scope is limited to specific conditions. In this paper, we propose a simple but effective method to enhance image contrast while keeping good visual quality. By examining the traditional enhancement models, including the dehazing and Retinex models, we propose a general normalized model. To preserve image details and control brightness, we introduce dual boundaries, called the dark and bright boundaries, to handle low-light and high-light conditions. After the dark and bright boundaries are obtained, the images are enhanced accordingly. Experiments show that our method can be applied under many bad imaging conditions and maintains good performance.
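The dual-boundary normalization idea can be illustrated with a minimal sketch. This is an assumption about the model's simplest form, not the paper's method: here the dark and bright boundaries are estimated globally from percentiles (the paper's estimation procedure is not specified in the abstract), and the image is linearly stretched between them.

```python
import numpy as np

def dual_boundary_enhance(img, dark=None, bright=None, eps=1e-6):
    """Stretch intensities between a dark and a bright boundary.

    img: float array with values in [0, 1].
    dark/bright: boundary intensities; if None, estimated from
    percentiles (a hypothetical stand-in for the paper's estimator).
    """
    if dark is None:
        dark = np.percentile(img, 1)    # near-darkest intensity
    if bright is None:
        bright = np.percentile(img, 99) # near-brightest intensity
    # Normalize so [dark, bright] maps to [0, 1], then clip the tails.
    out = (img - dark) / max(bright - dark, eps)
    return np.clip(out, 0.0, 1.0)
```

Setting the dark boundary handles the low-light case (lifting the stretch of dim regions), while the bright boundary prevents over-amplification and saturation in already-bright regions.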