In real life, the number of light sources is limited and various objects can block the light, so light is usually accompanied by shadows. Therefore, the illumination is not uniform in the field of vision. A scene under nonuniform illumination tends to appear extremely bright in some regions, while other regions are plunged into darkness. Moreover, a color image produced by electronic equipment carries not only the inherent reflectance characteristics of the scene but also the light irradiating on it. Consequently, the image of a scene under nonuniform illumination usually suffers from underexposure and overexposure. To recover the details degenerated by nonuniform illumination, many image-enhancement algorithms have been published. These algorithms can be roughly categorized as algorithms based on the retinex theory,1 methods that apply nonlinear modification to image luminance (note that, in this paper, “illumination” refers to a triplet image of the same size as the image to be processed, representing the brightness and chrominance characteristics of the light condition of the scene; “luminance” is a gray image that denotes the brightness of the illumination; “intensity” means the pixel value in a single channel of RGB space), and algorithms that are devised in some transformed space.
In the first category, retinex theory was proposed to model the visual perception mechanism of the human visual system (HVS). In general, the basic idea is that the perceived brightness of an object in every RGB channel is determined by the relative brightness between it and its neighbors. Accordingly, there are two critical factors in the implementation of retinex theory: how the relative brightness is computed using spatial comparisons, and how the neighbors are selected and combined. In the literature, many variants of retinex have been published. The pathwise retinex1 was proposed first, where every pixel value is reset based on a set of random piecewise linear paths. To mimic the visual surround of the HVS, a large number of paths are needed around every pixel, thus leading to high computational cost.2 To cope with this, Marini and Rizzi3 replaced the random paths with Brownian paths, and Funt et al.4 implemented the random pathwise retinex on a set of subsampled images. Further, motivated by the two-dimensional (2-D) characteristic of the visual surround, Provenzi et al.5 used a set of 2-D random sprays to replace one-dimensional paths. For a target pixel and a random spray around it, the ratio of the target pixel to the maximum value in the spray is computed; finally, the target pixel is renewed by the average of these ratios. However, this random-spray retinex (RSR) algorithm5 is prone to induce white noise in large uniform regions. In addition, the enhancing effect of RSR is not obvious when the image contains highly bright pixels. These pathwise retinex1,3,4 and RSR5 algorithms select a set of neighbors and use the local maximum value as the referenced white point. The length of the paths and the diameter of the sprays can significantly influence the results, and the optimal settings of these crucial parameters vary among different images.
Recently, to reduce the white noise induced by RSR, Gianini et al.6 proposed a quantile-based implementation of retinex, where a weighted histogram is built at every target pixel and a quantile is chosen as the referenced white point. However, small quantiles cause the resulting image to be whitish, while large quantiles produce inadequate enhancement of dark areas. In addition, Land7 proposed the center/surround retinex, where a target pixel (center) is renewed by its ratio to the weighted average of some neighbors (surround). Later, Jobson et al.8 refreshed the center/surround retinex with a Gaussian filter. The spatial spread scale of the Gaussian filter should be set carefully: a small scale produces good details at the cost of tonal information, and a large scale is not effective for contrast enhancement. To resolve this problem, Jobson et al.9,10 proposed the multiscale retinex (MSR) algorithm by averaging the outputs of three single-scale retinex algorithms. However, because of the isotropic characteristic of the Gaussian kernel, center/surround retinex algorithms tend to induce halo artifacts at high-contrast edges.
In retinex algorithms, spatial comparisons between a target pixel and its neighbors are implemented via division. Different from retinex, in their work on automatic color equalization (ACE), Rizzi et al.11 employed the weighted average of the differences between the target pixel and its neighbors to renew every target pixel. Similar to the threshold mechanism in pathwise retinex,1 the “difference” is transformed nonlinearly into a fixed range. In every RGB channel, these differences can be positive or negative, and thus local contrast is enhanced adaptively. ACE performs well on underexposure and overexposure, but is prone to corrupt large uniform areas and wash out colors. Further, to combine the advantages of ACE and RSR, Provenzi et al.12 proposed a local fusion of the two, called the RACE algorithm, via the 2-D random spray first utilized in RSR.
In the second category, to enhance details in dark and highly bright regions, image luminance is estimated first and then modified nonlinearly. In general, image luminance can be roughly estimated based on a color space that separates brightness/lightness from chromatic components, such as the HSV and YCbCr spaces. Therefore, algorithms in this category focus on the nonlinear technique used to modify luminance. Some authors tried to adjust global image luminance via histogram modification.13,14 Although effective in handling the global dynamic range, histogram-based algorithms ignore the spatial relations of pixels and are prone to be affected by spikes in image histograms.
To enhance local contrast, Tao et al.15 proposed to increase the luminance of dark pixels and decrease the luminance of highly bright pixels via an inverse sigmoid function. Note that the demarcation between underexposure and overexposure was not defined precisely in Ref. 15. After dynamic range compression, Tao et al.15 utilized the comparison mechanism in MSR to enhance local contrast in the luminance channel. Finally, the enhanced color image was reconstructed by preserving the hue and saturation information of the original image. However, this color-restoration method is apt to produce excessive colors in originally dark regions. Meylan and Susstrunk16 proposed to implement global luminance adaptation through a power function according to the original average luminance; local contrast is then enhanced based on the comparison mechanism of MSR. However, due to the global luminance-adaptation procedure, originally highly bright areas are prone to be compressed excessively. Based on the assumption that the expected value of the enhanced luminance is half of the maximum value in the full dynamic range, Schettini et al.17 modified traditional gamma correction via an automatic tuning technique for the gamma value. To avoid smoothing across edges, Schettini et al.17 utilized the bilateral filter18 instead of a Gaussian filter, thus reducing halo artifacts. Choudhury and Medioni19 utilized the V channel in HSV space as image luminance, and designed a nonlinear transformation based on the logarithmic function. Underexposure and overexposure were divided according to the proportion of pixels smaller than 0.5 (the full dynamic range being [0, 1]). Note that this division is fixed for the whole image and is thus not suitable for images that have a very small area of underexposure or overexposure. In addition, the color-restoration procedure in Ref. 19 is similar to that of Ref. 15 in preserving the original chromaticity, and is prone to produce exaggerated colors in dark regions.
In addition, Meylan et al.20 proposed to apply two Naka–Rushton21 transformations consecutively for nonlinear adaptation. Since the Naka–Rushton function can increase input values, it performs well on underexposed images. Inspired by Meylan’s work,20 Wang and Luo22 intensively explored the adaptation mechanism of the Naka–Rushton formula, designed adaptive parameter settings for it, and obtained effective enhancement for underexposed images. Furthermore, to enhance partially overexposed images, the symmetric version of the Naka–Rushton formula (SNRF) was proposed in Ref. 23 to pull up small intensities and pull down large intensities in the RGB channels.
Recently, using the frequency of local and nonlocal neighboring pixels, Wang et al.24 proposed a bright-pass filter to estimate image luminance. Further, to preserve the lightness order while modifying luminance, they used a histogram specification technique. However, due to the preservation of the lightness order, highly bright regions cannot be enhanced adequately. For image luminance estimation, Shin et al.25 implemented Gaussian smoothing on the V channel and then combined the smoothed luminance with the original one according to gradient information to eliminate smoothing at edges. Thereafter, luminance was adjusted through gamma correction, and global contrast was enhanced through histogram modification. Since local contrast is not boosted after gamma correction, the results of Ref. 25 present moderate luminance but deficient contrast.
In the third category, image enhancement is implemented in some transformed space. That is, a 2-D orthogonal transform is first performed on the input image, and the transform coefficients are then modified accordingly. These types of algorithms include alpha rooting,26,27 logarithmic enhancement,28,29 and the heap transform.30 In Ref. 26, Grigoryan et al. proposed a multi-frequency-band alpha-rooting method and combined it with a 2-D discrete quaternion Fourier transform for image enhancement. In Ref. 29, Panetta et al. summarized the logarithmic-image-processing framework for image enhancement.
In this work, to enhance images with nonuniform illumination, a local adaptation approach is developed. First, to estimate image luminance, a just-noticeable-difference (JND)-based low-pass filter is devised and implemented on the Y channel of YCbCr space. Then, to separate underexposure from overexposure in the luminance image, a locally adaptive demarcation principle is proposed. Different from the globally fixed demarcation value used by Tao et al.15 and Choudhury and Medioni,19 the proposed demarcation changes as local luminance varies, thus adapting to complicated luminance and helping to control the enhancement degree for various regions in an image. According to this adaptive demarcation, local luminance is modified by SNRF to depress highly bright areas and light up dark areas. Next, a color image is reconstructed based on the enhanced luminance and the original chromatic information. With regard to color reconstruction, traditional methods15,19 that preserve the original chromaticity are prone to exaggerate colors in dark areas and depress colors in highly bright areas. To cope with this, the proposed color-reconstruction technique utilizes a power function of the ratio between the enhanced luminance and the original one. Finally, a compensation technique for local contrast is designed to improve the visual quality of the output color image. Experimental results show that the proposed approach produces moderate details and vivid colors.
The rest of the paper is organized as follows: the proposed algorithm is detailed step by step in Sec. 2. Experimental results and comparisons are presented in Sec. 3. Finally, conclusions are given in Sec. 4.
Image Enhancement via Nonlinear Mapping
A flowchart of the proposed approach is given in Fig. 1, and a sample image with intermediate results is illustrated as well. Precisely, our method consists of four steps: (1) in the Y channel, image luminance is estimated using a JND-based filter, which preserves high-contrast edges while implementing local smoothing; (2) to adjust local luminance adaptively, the estimated luminance is modified by SNRF to pull up underexposure and pull down overexposure; (3) for the sake of natural colors, image color is reconstructed via an exponential function based on the enhanced luminance, the original luminance, and the original chromatic information; (4) to compensate for the contrast degenerated by the dynamic range compression of SNRF in step 2, a local contrast compensation technique is applied to the RGB channels of the produced color image.
Luminance Estimation Using Just-Noticeable-Difference-Based Filter
Accurate estimation of luminance from a color image is complicated and difficult. Based on the assumption that illumination is spatially smooth, Gaussian filters8–10,15 were used for illumination estimation. However, these methods are prone to induce halo artifacts at high-contrast edges due to smoothing across edges. To cope with this, many researchers turned to devising locally adaptive filters, such as the bilateral filter,17,22,23 gradient-driven smoothing operators,19 and image-content-dependent smoothing techniques.16,24,25 The common starting point among these adaptive filters is to decrease the smoothing degree at high-contrast edges. In this section, we propose an adaptive smoothing filter based on the JND of pixel values. This filter does not depend on the identification of high-contrast edges; instead, it truncates the intensities of neighbors according to the JND value of the center pixel.
This work focuses on the nonuniform illumination problem under the assumption that the illumination is neutral and uncolored throughout the whole image. In addition, luminance is used here to denote the brightness of the illumination. Among existing color spaces, the Y channel of YCbCr space is used as a coarse luminance. Mathematically, the Y channel is a linear combination of the RGB channels.
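As a concrete sketch, the Y channel can be computed with the standard ITU-R BT.601 luma weights; these particular coefficients are an assumption here, since the paper's exact transform matrix is not reproduced in this section.

```python
import numpy as np

def rgb_to_luma(rgb):
    """Coarse luminance: the Y channel of YCbCr.

    `rgb` is an H x W x 3 float array in [0, 1].  The weights below are the
    standard ITU-R BT.601 definition, assumed here as an illustration.
    """
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
```

The weights sum to one, so a neutral gray pixel keeps its value in the Y channel.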
JND refers to the value below which any change cannot be visually perceived by the HVS.31 Lin32 surveyed computational models of JND, including models in the pixel domain. In the luminance component, luminance adaptation and texture masking are the two major factors to be considered for pixel-wise JND estimation.33,34 Accordingly, the pixel-wise JND is roughly estimated from these factors in this paper.
To preserve high-contrast edges while smoothing an image via a weighted average of neighbors, an intuitive idea is that a neighbor with a large numerical difference from the center pixel should be treated carefully. For ease of presentation, we call the neighbors that need such careful treatment odd neighbors. Motivated by this idea, the bilateral filter18 decreases the weights of odd neighbors by multiplying the Gaussian kernel in the spatial domain with a Gaussian kernel in the range domain. In this work, we devise an alternative method that pulls the values of odd neighbors toward the center pixel. Mathematically, the values of neighbors are truncated according to the JND value of the center pixel. With regard to a center pixel, its neighbors can be divided into three categories, and their values are truncated through Eq. (5).
Based on the truncating mechanism in Eq. (5), image luminance can be estimated as the local weighted average of the JND-truncated neighbors. Consequently, this averaging is implemented by assigning spatial Gaussian weights to the JND-truncated neighbors, which defines the JND-based filter. Figure 2 exhibits the luminance estimated by the Gaussian filter, the bilateral filter, and the proposed JND-based filter, respectively. As shown in Figs. 2(c)–2(f), both the bilateral filter and the JND-based filter preserve high-contrast edges effectively. Figures 2(e) and 2(f) are results of the JND-based filter with different spatial spreads, and these two images show little difference. Apart from the spatial spread, the smoothing degree of the bilateral filter is also determined by the range spread. Figures 2(c) and 2(d) are results of the bilateral filter with different range spreads. In Fig. 2(c), the range spread is larger than that of Fig. 2(d), resulting in a smoother image.
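A minimal sketch of this truncate-then-average filter is given below. The per-pixel JND map is taken as an input, since the JND estimation model itself is only referenced above; the window radius and Gaussian spread are illustrative choices, not the paper's settings.

```python
import numpy as np

def jnd_filter(luma, jnd, sigma=3.0, radius=6):
    """JND-based smoothing sketch.

    For each center pixel, every neighbor value is clipped into
    [center - JND, center + JND] before Gaussian averaging, so odd neighbors
    (differing by more than the JND) cannot drag the average across a
    high-contrast edge.  `luma` and `jnd` are equally sized 2-D arrays.
    """
    h, w = luma.shape
    pad = np.pad(luma, radius, mode='edge')
    num = np.zeros_like(luma)
    den = 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            wgt = np.exp(-(dy * dy + dx * dx) / (2.0 * sigma * sigma))
            nb = pad[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
            nb = np.clip(nb, luma - jnd, luma + jnd)  # Eq. (5)-style truncation
            num += wgt * nb
            den += wgt
    return num / den
```

Because every truncated neighbor lies within one JND of the center, the output never deviates from the input by more than the JND, which is what suppresses halos at strong edges.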
Adaptive Modification to Luminance
Within a luminance image, underexposure can be enhanced by lighting up dark regions, and overexposure can be resolved by dimming extremely bright pixels. Inspired by this idea, we adopt the symmetric Naka–Rushton formula (SNRF) proposed in our previous publication23 to implement luminance modification. Precisely, SNRF is derived from the original Naka–Rushton equation21 through a symmetric transformation. The original Naka–Rushton equation compresses intensities by pulling up small inputs (see the two curves labeled “Naka–Rushton” in Fig. 3). In contrast, the formula that is symmetric with the Naka–Rushton formula about the point (0.5, 0.5) can be used for dimming large intensities. Further, SNRF is formulated by integrating these two formulas at a point used as the demarcation between underexposure and overexposure. Applied to a luminance image, SNRF is formulated as in Eq. (10).
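The symmetric construction can be sketched as follows. This is not the paper's exact Eq. (10); it illustrates a Naka–Rushton branch below an assumed demarcation `d` and its point-symmetric counterpart above it, joined continuously at `d`, with illustrative adaptation factors `s_lo` and `s_hi`.

```python
import numpy as np

def naka_rushton(x, s):
    """Naka-Rushton response rescaled so that f(0) = 0 and f(1) = 1.
    Smaller s gives a stronger boost to small inputs."""
    return x * (1.0 + s) / (x + s)

def snrf(x, d=0.5, s_lo=0.2, s_hi=0.2):
    """Sketch of a symmetric Naka-Rushton formula (SNRF).

    Inputs below the demarcation d are pulled up by a Naka-Rushton branch;
    inputs above d are pulled down by its point-symmetric counterpart.
    d may be a scalar or a per-pixel array (broadcast against x).
    """
    lo = d * naka_rushton(np.minimum(x, d) / d, s_lo)           # lifts small inputs
    hi = 1.0 - (1.0 - d) * naka_rushton(
        (1.0 - np.maximum(x, d)) / (1.0 - d), s_hi)             # dims large inputs
    return np.where(x <= d, lo, hi)
```

Both branches meet at (d, d), so the mapping is continuous, fixes the endpoints 0 and 1, brightens values below the demarcation, and darkens values above it.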
Figure 3 also illustrates several curves of SNRF with different parameters. For concise expression, a simplified parameter setting is assumed in Fig. 3. Comparing the two curves drawn with plus and dot markers shows how the adaptation factor shifts the global output. In addition, with a fixed demarcation, the curve of SNRF approaches the line “output = input” as the adaptation factor increases. In other words, below the demarcation the output increases as the adaptation factor decreases, and the opposite holds above the demarcation.
To manipulate complicated luminance adaptively, the demarcation between underexposure and overexposure is set pixel-wise. In SNRF, a smaller demarcation leads to a larger output. Therefore, the demarcation is set to vary inversely with local luminance, assigning more increment to darker regions. In addition, it also changes contrarily to global luminance so as to light up globally dark images. Mathematically, the demarcation is devised as a transformed version of the sigmoid function, and the luminance estimated in Sec. 2.1 is utilized to measure local luminance. Curves of Eq. (11) with different parameter values are illustrated in Fig. 4, where the input is the local luminance. In addition, Fig. 5 exhibits a color image and its demarcation map. It is shown that originally darker areas correspond to larger demarcation values, and this compels dark regions to be categorized as underexposure. Conversely, originally brighter regions correspond to smaller demarcation values.
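A hedged sketch of such a pixel-wise demarcation map follows; the steepness `k` and the way local and global luminance are combined are illustrative assumptions, not the paper's Eq. (11).

```python
import numpy as np

def demarcation(local_luma, global_mean, k=8.0):
    """Sketch of a pixel-wise demarcation map.

    A transformed sigmoid that grows as local (and global) luminance falls,
    so dark regions are classified as underexposure and receive more lift.
    `local_luma` is the filtered luminance; `global_mean` its average.
    """
    drive = 0.5 * (local_luma + global_mean)        # combine local and global cues
    return 1.0 / (1.0 + np.exp(k * (drive - 0.5)))  # decreasing in luminance
```

The output stays strictly inside (0, 1), so it can serve directly as the per-pixel demarcation of the modified SNRF.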
In the SNRF defined by Eq. (10), the first adaptation factor works in the first case to control the adaptation degree for underexposure. Moreover, it has been shown in Fig. 3 that the output of SNRF varies inversely with this factor. To revive underexposed regions, larger increments need to be assigned to locally darker pixels. In addition, severe underexposure also needs obvious promotion. Based on the above analysis, this adaptation factor is formulated as in Eq. (12).
Different from the first adaptation factor, the second one serves as the adaptation factor in the symmetric version of the Naka–Rushton formula, i.e., the second case in Eq. (10), for enhancing overexposed regions. Figure 3 has shown that the output of SNRF changes directly with this factor. Consequently, it is set as in Eq. (13).
Substituting the parameter settings in Eqs. (11)–(13) into the SNRF of Eq. (10), an adaptive luminance-modification technique is obtained for enhancing images that suffer from exposure problems. Further, to improve the global contrast of the output of SNRF, the histogram of the modified luminance is linearly stretched into the range [0, 1].
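The linear stretch itself can be sketched as:

```python
import numpy as np

def stretch(L):
    """Linearly stretch the modified luminance into [0, 1]."""
    lo, hi = L.min(), L.max()
    if hi - lo < 1e-12:           # guard against a constant image
        return np.zeros_like(L)
    return (L - lo) / (hi - lo)
```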
Color Image Reconstruction
With the modified luminance image, a color image is constructed in this section based on the chromatic information of the original color image. At present, in many works15,19,24,25 that modify the luminance channel, the final color image is restored by strictly preserving the original chromaticity through the mechanism of Eq. (14). Figures 6(a) and 6(b) illustrate an input image and the color image reconstructed via Eq. (14). Without loss of generality, the modified luminance used in Fig. 6 is processed by a general gamma correction method rather than SNRF. The gamma value is set so that pixels smaller than 0.7 are promoted and the others are dimmed. In Fig. 6(b), the blue car and green trees in the originally dark regions suffer from color distortion.
In addition to Eq. (14), Ref. 17 utilized a linear combination of the ratio and the difference between the enhanced luminance and the original one for color reconstruction. The corresponding output is given in Fig. 6(c). It can be seen that the excessive color of the originally dark regions is alleviated, but the global contrast gets worse.
Inspired by the color-restoration mechanism in Eq. (14), to alleviate the magnification of the differences between RGB triplets in underexposed regions while maintaining the RGB differences in overexposed regions, we propose to reconstruct the color image through a power function of the ratio between the enhanced luminance and the original one. The result is shown in Fig. 6(d), where the color and global contrast are visually appropriate.
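The difference between the chromaticity-preserving restoration of Eq. (14) and a power-of-ratio variant can be sketched as follows; the exponent `lam` and the exact forms are illustrative assumptions rather than the paper's formulas.

```python
import numpy as np

def restore_ratio(rgb, L, L_enh, eps=1e-6):
    """Eq. (14)-style restoration sketch: each RGB channel is scaled by the
    full luminance ratio, strictly preserving chromaticity."""
    r = L_enh / np.maximum(L, eps)
    return np.clip(rgb * r[..., None], 0.0, 1.0)

def restore_power(rgb, L, L_enh, lam=0.6, eps=1e-6):
    """Power-of-ratio variant sketch: raising the luminance ratio to
    lam < 1 tempers the scaling, so RGB differences in dark regions are
    magnified less.  The exponent lam is an assumed illustrative value."""
    r = (L_enh / np.maximum(L, eps)) ** lam
    return np.clip(rgb * r[..., None], 0.0, 1.0)
```

For a dark pixel whose luminance is quadrupled, the plain ratio multiplies every channel by 4, while the tempered version multiplies by roughly 4^0.6, which is what keeps dark-region colors from being exaggerated.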
Local Contrast Compensation
During the luminance-modification process in Sec. 2.2, SNRF lights up small intensities and pulls down large intensities in the luminance image, and therefore local contrast is prone to be degenerated. In this section, local contrast is improved to produce vivid images. The basic idea of contrast enhancement is that a pixel should be increased if it is larger than its neighbors and decreased if it is smaller than its neighbors. Based on this idea, pixel-wise differences and ratios between the center pixel and its neighbors were employed by ACE11 and retinex algorithms,12 respectively. In addition, Tao et al.15 utilized the comparison mechanism of the center/surround retinex8–10 to improve local contrast via an exponential function,15 and the bilateral filter18 was later used in Ref. 23 to eliminate halo artifacts. A parameter controls the enhancement degree and is set within the range [0.5, 2] according to the global standard deviation. Mathematically, if a center pixel is smaller than the weighted average of its neighbors, the exponent exceeds 1 and the input intensity is decreased; otherwise, the exponent is smaller than 1, and the input intensity is increased to some extent.
However, when Eq. (16) is used for contrast enhancement, dim areas are prone to be exaggerated and bright areas cannot be enhanced sufficiently. Figure 7(c) shows the output of implementing Eq. (16) on Fig. 7(b), which is the processed result after Sec. 2.3. In Fig. 7(c), details of the vines on the wall are excessively enhanced and almost grayed out, especially in the lower-right region of the image. On the other hand, the contrast between the sky and the clouds is improved only slightly compared to Fig. 7(b). To solve these problems, we propose to modify Eq. (16) with its symmetric formula, obtaining Eq. (18). Figure 7(d) shows the output of implementing Eq. (18) on Fig. 7(b). Compared to Fig. 7(c), details of the vines are revived without extreme exaggeration. Moreover, the contrast of originally bright areas, e.g., the sky and clouds, is also improved properly.
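The behavior described above can be sketched as follows; the exponent form follows the text's description of Eq. (16) but is an assumed reconstruction, as is the symmetric modification in the spirit of Eq. (18).

```python
import numpy as np

def contrast_boost(I, surround, alpha=1.0, eps=1e-6):
    """Eq. (16)-style compensation sketch: each pixel is raised to an
    exponent built from the ratio of its filtered surround to itself, so
    pixels below their surround are darkened and pixels above it are
    brightened (all values in [0, 1])."""
    E = (np.maximum(surround, eps) / np.maximum(I, eps)) ** alpha
    return np.clip(I, eps, 1.0) ** E

def contrast_boost_sym(I, surround, alpha=1.0):
    """Symmetric modification sketch: dark pixels use the original form,
    bright pixels use its point-symmetric counterpart about (0.5, 0.5),
    tempering the exaggeration of dim areas and strengthening the
    enhancement of bright ones."""
    dark = contrast_boost(I, surround, alpha)
    bright = 1.0 - contrast_boost(1.0 - I, 1.0 - surround, alpha)
    return np.where(I <= 0.5, dark, bright)
```

With this form, a bright pixel slightly above its surround is pushed further up by the symmetric branch, whereas the plain branch would barely move it, matching the sky-and-cloud behavior described for Fig. 7(d).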
The proposed approach has been tested on images that suffer from underexposure, overexposure, or both. Comparisons have been made with some classic algorithms, including the multiscale retinex algorithm MSRCR,10 RACE,12 which is a locally fused version of the RSR5 and ACE11 algorithms, and an algorithm20 that used the Naka–Rushton formula for tone mapping. Moreover, the proposed approach has also been compared with several recently proposed algorithms: algorithms that emphasize naturalness preservation,24,25,35 the original SNRF algorithm,23 and the algorithm using alpha rooting.26 The performance of image-enhancement algorithms can be evaluated via objective measures26,36,37 that take colorfulness, contrast, or sharpness into account. In this paper, the color-image-enhancement (EMEC) measure26 and the color-quality-enhancement (CQE) measure37 are used to facilitate comparisons between different enhancement results.
Enhancing Results for Underexposed Images
Figure 8(a) was provided by Gehler et al.,39 and it shows a scene where the foreground is underexposed and the background is exposed properly. In Fig. 8, the CQE and EMEC values are listed below every image. As shown in Fig. 8(b), MSRCR brings good contrast, thus having large CQE and EMEC values. However, it suffers from halo artifacts, such as at the edge between the trunk and the background meadowland. In Fig. 8(c), the output color of RACE is grayed out slightly because the RGB channels were processed separately. In Fig. 8(d), Meylan’s algorithm20 sacrifices details in the bright background because the global luminance has been increased via the Naka–Rushton formula. Fortunately, because details in the originally dark regions have been revived, Fig. 8(d) has relatively large CQE and EMEC values. Figure 8(e) gives the output of Ref. 24, where the originally dark foreground is revealed clearly. However, observing the MacBeth-color-checker board in the lower-right corner, the color blocks are corrupted at their edges. Due to this corruption of small edges, Fig. 8(e) does not possess large CQE and EMEC values. Figure 8(f) shows the result of Ref. 25. However, details of the trunk are still buried in underexposure, and the contrast of the background is degenerated. This is mainly because the original luminance order is strictly preserved. As illustrated in Fig. 8(g), the algorithm in Ref. 35 produces a natural background but a dark foreground, because the saliency map of the input image affects the output seriously. Note that Fig. 8(g) shows good global contrast, thus having large CQE and EMEC values. As the output of Ref. 23, Fig. 8(h) shows clear details. However, similar to Fig. 8(b), color shift occurs at the stones and soil at the bottom of the image. Figure 8(i) exhibits the result of the proposed algorithm, where details of the foreground are revived clearly and the background is protected from being washed out. Although the CQE and EMEC values of Fig. 8(i) are slightly smaller than those of Fig. 8(g), the details in Fig. 8(i), such as the tree trunks, are better than those in Fig. 8(g).
For further comparison, Fig. 9 shows zoomed-in partial regions cropped from Fig. 8, and the CQE and EMEC values of these partial regions are also listed below the images in Fig. 9. Comparing Fig. 9(g) with Fig. 9(i), their CQE and EMEC values are comparable, but image details are clearer in Fig. 9(i).
Enhancing Results for Overexposed Images
Figure 10(a) shows an image where the color is slightly washed out and the contrast is depressed due to overexposure. As demonstrated in Figs. 10(b) and 10(c), the MSRCR and RACE algorithms enhance image contrast slightly, but the global luminance is still quite bright. Correspondingly, the CQE and EMEC values of Figs. 10(b) and 10(c) are slightly larger than those of Fig. 10(a). The output of Ref. 20 is shown in Fig. 10(d), and it is rather overexposed because the Naka–Rushton formula has been used to pull up pixel intensities globally. Figure 10(e) gives the output of Ref. 24; overexposure has not been solved effectively, which is also reflected in the CQE and EMEC values. Shin et al.25 applied a histogram-equalization procedure to the modified probability density function of an image after gamma correction, thus obtaining proper luminance in Fig. 10(f). However, they used the method of Eq. (14) for color reconstruction, and therefore the image color is depressed. Consequently, the CQE and EMEC values of Fig. 10(f) are still close to those of Fig. 10(a). As shown in Fig. 10(g), the algorithm in Ref. 35 modifies luminance effectively and produces good contrast, but the image color is dim for the same reason as Fig. 10(f). Reference 23 applied SNRF separately to the RGB channels, and the result is given in Fig. 10(h). It can be seen that the global luminance has been modified properly, but the image contrast still needs improvement. Finally, the output of the proposed algorithm is given in Fig. 10(i), which shows proper luminance and promising contrast. Compared with Figs. 10(b)–10(e), the luminance in Fig. 10(i) is more suitable. Moreover, compared with Figs. 10(f) and 10(g), which also have proper luminance, Fig. 10(i) shows more vivid colors. Therefore, the CQE and EMEC values of Fig. 10(i) are larger than those of the other images in Fig. 10.
Figure 11(a) demonstrates an image where the contrast is degenerated due to overexposure. As shown in Figs. 11(b) and 11(c), local contrast is improved slightly by MSRCR and RACE. The results of Refs. 20 and 24 are illustrated in Figs. 11(d) and 11(e), where the enhancing effects are not visually obvious. Consequently, the CQE and EMEC values of Figs. 11(b)–11(e) are not much larger than those of Fig. 11(a). Compared with Figs. 11(d) and 11(e), the luminance of the last four images in Fig. 11 is proper. However, the colors of Figs. 11(f) and 11(g) are depressed due to their color-reconstruction mechanism and tend to be dim. In Fig. 11(h), local contrast still needs to be improved, such as the contrast between the sky and the clouds. Figure 11(i) shows the result of the proposed approach; the image shows clear details and good contrast without color distortion. The CQE values of Figs. 11(g) and 11(i) are comparable, and Fig. 11(g) has a larger EMEC value than Fig. 11(i). This is because Ref. 35 dims the image noticeably and obtains good local contrast in Fig. 11(g). However, the image color is also dimmed excessively in Fig. 11(g), and the global contrast is sacrificed. For instance, Figs. 12(a) and 12(b) are cropped from Figs. 11(g) and 11(i), respectively; we can see that the contrast in Fig. 12(b) is better.
Enhancing Results for Images with Both Underexposure and Overexposure
Figure 13(a) shows an image of “a car in the sunset,” used by courtesy of Greenspun.40 The car is underexposed and the sunset area is overexposed. The output of MSRCR is given in Fig. 13(b), where details of the car are revealed clearly. However, the front of the car suffers from halo artifacts. As shown in Fig. 13(c), RACE promotes image details in the dark regions, whereas the image color is corrupted because the RGB channels are treated separately. The output of Ref. 20 is illustrated in Fig. 13(d), where the sunset area is degenerated because the Naka–Rushton formula always increases input intensities. Figure 13(e) shows the result of Ref. 24, where global details have been improved effectively. However, the luminance of the originally dark areas has been increased excessively, such as at the bottom of the car. The result of Ref. 25 is given in Fig. 13(f), where the image luminance has been adapted successfully. However, the image contrast still needs to be enhanced; for example, the contrast between the sky and the clouds is rather inferior to that of the original image in Fig. 13(a). In Fig. 13(g), the image luminance is moderate, but the originally dark regions under the car are exaggerated. In addition, the global image color is distorted and tends slightly to red, as in the sky. Figure 13(h) shows the output of Ref. 23, with clear details and good contrast. However, the image color is distorted because Ref. 23 applied the SNRF formula to the RGB channels in parallel; for instance, the originally red car turns slightly blue. The output of the proposed approach is illustrated in Fig. 13(i). Compared with the previous results in Fig. 13, Fig. 13(i) shows clear details and good contrast without halo artifacts or color distortion. Moreover, the CQE value of Fig. 13(i) is larger than those of the other images in Fig. 13. In addition, Fig. 13(i) has the second-largest EMEC value among the images in Fig. 13; its EMEC value is slightly smaller than that of Fig. 13(d), but the clouds around the sunset area have largely been washed out in Fig. 13(d).
Figure 14(a) shows a scene that contains a spacecraft. The bottom of the spacecraft is dark, and details of other areas are slightly washed out due to overexposure. The output of MSRCR is given in Fig. 14(b), and it shows good global contrast. However, the originally dark regions suffer from halo artifacts; for instance, the edges of the originally dark bottom of the spacecraft are still dark while other regions of the bottom are revealed. Figure 14(h) shows the result of an alpha-rooting algorithm,26 and it has a relatively large CQE value. Among the images in Fig. 14, the outputs of Ref. 35 and the proposed algorithm are the more satisfying. In detail, Fig. 14(g) has the largest CQE value, and Fig. 14(i) has the largest EMEC value. For detailed comparison, partial regions of Figs. 14(g) and 14(i) are illustrated in Fig. 15. It is shown in Fig. 15(b) that the bottom of the spacecraft is revealed more clearly by the proposed algorithm than by Ref. 35.
Contrast Enhancement for Images with Normal Exposure
The performance of the proposed approach and several algorithms on underexposed or overexposed images has been compared in the previous sections. In practice, however, an image-processing algorithm does not know in advance which exposure problem the input image has. Consequently, even an input image with proper luminance will be modified as well. Therefore, in this section, images with proper luminance are utilized as input images to show the performance of the different image-enhancement algorithms.
A colorful scene taken under natural light is shown in Fig. 16(a), where image luminance is proper and local contrast needs to be enhanced. As shown in Figs. 16(b) and 16(c), MSRCR and RACE improve local contrast slightly and obtain natural color. Correspondingly, the EMEC values of Figs. 16(b) and 16(c) are larger than that of Fig. 16(a). The results of Refs. 20 and 24 are shown in Figs. 16(d) and 16(e), respectively. However, the local and global contrast of Figs. 16(d) and 16(e) is even inferior to that of the original image in Fig. 16(a); accordingly, the CQE and EMEC values of Figs. 16(d) and 16(e) are small. The algorithm in Ref. 25 improves image contrast effectively, and the output is given in Fig. 16(f). However, image color in Fig. 16(f) is slightly dim, such as the distant forest in the top-left part. The result of Ref. 35 is given in Fig. 16(g), which has contrast comparable with Fig. 16(c). The output of Ref. 23 is shown in Fig. 16(h), where image contrast is unsatisfactory and image color turns slightly gray. This is because Ref. 23 applies SNRF to the RGB channels separately, and the dynamic range is severely compressed by SNRF for images with normal luminance. Figure 16(i) gives the result of the proposed algorithm. Visually, it shows better contrast and more vivid color than Figs. 16(b)–16(h). Moreover, the CQE and EMEC values of Fig. 16(i) are much larger than those of the other images in Fig. 16, which confirms the effectiveness of the proposed approach.
Figure 17(a) shows a blond woman wearing a red hat and scarf, and the dominant color of the image is red. The result of MSRCR is given in Fig. 17(b), where color distortion occurs. For instance, the hair and face, which are surrounded by the red hat and scarf, appear slightly green. As illustrated in Fig. 17(c), RACE improves image contrast only slightly. Consequently, the CQE and EMEC values of Figs. 17(b) and 17(c) are not much larger than those of Fig. 17(a). The results of Refs. 20 and 24 are given in Figs. 17(d) and 17(e), and the contrast in both images needs improvement. As shown in Fig. 17(f), the method in Ref. 25 enhances contrast effectively. However, Fig. 17(f) is slightly darkened, as in the eyes and the red scarf. Figure 17(g) exhibits the output of Ref. 35, and it is rather dark globally. Figure 17(h) shows the result of Ref. 23, where image color is washed out globally due to the separate treatment of the RGB channels. The output of the proposed approach is given in Fig. 17(i). According to the CQE and EMEC measures, the proposed algorithm produces better results than the other methods involved in Fig. 17. Specifically, compared with the previous results in Figs. 17(b)–17(f), the proposed output shows better contrast and color. For example, the face in Fig. 17(i) is much clearer, and the textures on the hat and scarf are enhanced more effectively than by the other methods.
Conclusion
In this work, we presented a local enhancement approach for nonuniform illumination images in which details are corrupted by underexposure or overexposure. The proposed approach modifies the luminance component of an image to brighten underexposed regions and darken overexposed ones. First, to estimate the luminance component, pixel-wise JND values are integrated with a Gaussian filter to preserve edges while smoothing the luminance channel in YCbCr space. Then, to discriminate between underexposure and overexposure, a pixel-wise demarcation is devised based on the local and global luminance levels. For luminance modification, SNRF is employed to increase underexposed pixels and decrease overexposed pixels. Next, to reconstruct a natural color image, an exponential technique is formulated to combine the modified luminance with the original RGB components. Finally, to improve local contrast, which is prone to degradation during luminance modification, a local-image-dependent exponential method is designed and applied to the reconstructed color image.
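The overall pipeline can be illustrated with a short, heavily simplified sketch. This is not the paper's implementation: a plain Gaussian blur stands in for the JND-weighted edge-preserving filter, a normalized Naka–Rushton-style curve stands in for SNRF, simple per-pixel ratio scaling replaces the exponential color reconstruction, the final local-contrast step is omitted, and all parameter values are assumptions chosen for illustration.

```python
import numpy as np

def gaussian_kernel(radius, sigma):
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def blur(img, radius=5, sigma=2.0):
    # Separable Gaussian blur; a plain Gaussian stands in here for the
    # JND-weighted, edge-preserving filter used in the paper.
    k = gaussian_kernel(radius, sigma)
    pad = np.pad(img, radius, mode="edge")
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, "valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, "valid"), 0, tmp)

def naka_rushton(y, s=0.3):
    # Naka-Rushton-style compressive response, rescaled so [0,1] -> [0,1];
    # a stand-in for SNRF, not its actual formula.
    return y * (1.0 + s) / (y + s)

def enhance(rgb):
    """rgb: uint8 H x W x 3 array; returns a float image in [0, 1]."""
    rgb = np.clip(rgb.astype(np.float64) / 255.0, 1e-6, 1.0)
    # Luminance via BT.601 luma weights (the Y channel of YCbCr).
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    y_local = blur(y)           # smoothed local luminance level
    y_global = y.mean()         # global luminance level
    under = y_local < y_global  # pixel-wise under/overexposure demarcation
    # Lift underexposed pixels; mirror the curve to pull down overexposed ones.
    y_new = np.where(under, naka_rushton(y), 1.0 - naka_rushton(1.0 - y))
    # Color reconstruction: per-pixel ratio scaling, a crude stand-in for
    # the paper's exponential recombination that roughly preserves hue.
    return np.clip(rgb * (y_new / y)[..., None], 0.0, 1.0)
```

On a half-dark, half-bright test image, the dark half is brightened and the bright half darkened, matching the intended behavior of the demarcation step; the real method additionally restores local contrast afterward.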
To validate the effectiveness of the proposed approach, experiments were conducted on four types of images: underexposed images, overexposed images, images with both underexposure and overexposure, and images with normal exposure. Moreover, comparisons were made between the proposed method and other solutions, including retinex-based algorithms (MSRCR,10 RACE12), some recent tone-mapping algorithms,24,25,35 an alpha-rooting method,26 and algorithms20,23 related to SNRF. The experimental comparisons show that the proposed algorithm yields good contrast and vivid color when enhancing nonuniform illumination images. In addition, they demonstrate that the proposed algorithm also enhances contrast more effectively for images with normal exposure.
This project was supported by the National Natural Science Foundation of China (Grant Nos. 61502145, 61300122, and 61602065). This work was also supported in part by the Fundamental Research Funds for the Central Universities under Grant No. 2017B42214. We are very grateful to E. Provenzi and M. Fierro for sending us the code of RACE. We would also like to thank Laurence Meylan, Shuhang Wang, Sangkeun Lee, and Yuecheng Li for making their codes available online.
Yanfang Wang is a lecturer at the College of Computer and Information, Hohai University, China. She received her BE degree from Northwestern Polytechnical University in 2008 and her PhD in control science and engineering from Tsinghua University in 2014. Her current research interests include color image enhancement, color constancy, image filtering, and color-image quality assessment.
Qian Huang is an associate professor at the College of Computer and Information, Hohai University, China. He received his BS degree from Nanjing University in 2003 and his PhD in computer science and technology from Graduate University of Chinese Academy of Sciences in 2010. His current research interests include video processing, cloud computing, and machine learning.
Jing Hu is a lecturer at the Department of Computer Science, Chengdu University of Information Technology. She received her BE degree from the University of Electronic Science and Technology of China in 2009 and her PhD in control science and engineering from Tsinghua University in 2015. Her current research interests include image superresolution, image denoising, image enhancement, and magnetic resonance imaging.