Adaptive enhancement for nonuniform illumination images via nonlinear mapping

Abstract. Nonuniform illumination images suffer from degenerated details because of underexposure, overexposure, or a combination of both. To improve the visual quality of color images, underexposed regions should be lightened, whereas overexposed areas need to be dimmed properly. However, discriminating between underexposure and overexposure is troublesome. Compared with traditional methods that produce a fixed demarcation value throughout an image, the proposed demarcation changes as the local luminance varies, and is thus suitable for manipulating complicated illumination. Based on this locally adaptive demarcation, a nonlinear modification is applied to the image luminance. Further, with the modified luminance, we propose a nonlinear process to reconstruct a luminance-enhanced color image. For every pixel, this nonlinear process takes the luminance change and the original chromaticity into account, thereby avoiding exaggerated colors in dark areas and depressed colors in highly bright regions. Finally, to improve image contrast, a local and image-dependent exponential technique is designed and applied to the RGB channels of the obtained color image. Experimental results demonstrate that our method produces good contrast and vivid colors for both nonuniform illumination images and images with normal illumination.


Introduction
In real life, the number of light sources is limited and various objects can block out light, so light is usually accompanied by shadows. Therefore, the illumination in the field of vision is not uniform. A scene under nonuniform illumination tends to appear extremely bright in some regions, while other regions succumb to the dark. Moreover, a color image produced by electronic equipment carries not only the inherent reflectance characteristics of the scene but also the light irradiating on it. Consequently, the image of a scene under nonuniform illumination usually suffers from underexposure and overexposure. To improve the degenerated details caused by nonuniform illumination, many image-enhancement algorithms have been published. These algorithms can be roughly categorized as algorithms based on the retinex theory, 1 methods that apply nonlinear modification to image luminance (note that, in this paper, "illumination" refers to a triplet image with the same size as the image to be processed, representing the brightness and chrominance characteristics of the light condition of the scene; "luminance" is a gray image that denotes the brightness of the illumination; "intensity" means the pixel value in a single channel of RGB space), and algorithms that are devised in some transformed space.
In the first category, retinex theory was proposed to model the visual perception mechanism of the human visual system (HVS). In general, the basic idea is that the perceived brightness of an object in every RGB channel is determined by the relative brightness between it and its neighbors. Accordingly, there are two critical factors in the implementation of retinex theory: how the relative brightness is computed using spatial comparisons, and how the neighbors are selected and combined. In the literature, many variants of retinex have been published. The pathwise retinex 1 was proposed first, where every pixel value is reset based on a set of random piecewise linear paths. To mimic the visual surround of the HVS, a large number of paths is needed around every pixel, leading to high computational cost. 2 To cope with this, Marini and Rizzi 3 replaced the random paths with Brownian paths, and Funt et al. 4 implemented the random pathwise retinex on a set of subsampled images. Further, motivated by the two-dimensional (2-D) characteristic of the visual surround, Provenzi et al. 5 used a set of 2-D random sprays to replace one-dimensional paths. For a target pixel and a random spray around it, the ratio of the target pixel to the maximum value in the spray is computed, and finally the target pixel is renewed by the average of these ratios. However, this random-spray retinex (RSR) algorithm 5 is prone to induce white noise in large uniform regions. In addition, the enhancing effect of RSR is not obvious when the image contains highly bright pixels. These pathwise retinex 1-4 and RSR 5 algorithms select a set of neighbors and use the local maximum value as the referenced white point. The length of the paths and the diameter of the sprays can significantly influence the results, and the optimal settings of these crucial parameters vary among images. Recently, to reduce the white noise induced by RSR, Gianini et al. 6 proposed a quantile-based implementation of retinex, where a weighted histogram is built at every target pixel and a quantile is determined to be the referenced white point. However, a small quantile causes the result image to be whitish, while a large quantile produces inadequate enhancement for dark areas. In addition, Land 7 proposed the center/surround retinex, where a target pixel (center) is renewed by its ratio to the weighted average of some neighbors (surround). Later, Jobson et al. 8 refreshed the center/surround retinex with a Gaussian filter. The spatial spread scale of the Gaussian filter should be set carefully: a small scale produces good details at the cost of tonal information, and a large scale is not effective for contrast enhancement. To resolve this problem, Jobson et al. 9,10 proposed the multiscale retinex (MSR) algorithm by averaging the outputs of three single-scale retinex algorithms. However, because of the isotropic characteristic of the Gaussian kernel, center/surround retinex algorithms tend to induce halo artifacts at high-contrast edges.
In retinex algorithms, spatial comparisons between a target pixel and its neighbors are implemented via division. Different from retinex, in their work on automatic color equalization (ACE), Rizzi et al. 11 employed the weighted average of the differences between the target pixel and its neighbors to renew every target pixel. Similar to the threshold mechanism in pathwise retinex, 1 the "difference" is transformed nonlinearly into a fixed range. In every RGB channel, these differences can be positive or negative, and thus local contrast is enhanced adaptively. ACE performs well on underexposure and overexposure, but is prone to corrupt large uniform areas and wash out colors. Further, to combine the advantages of ACE and RSR, Provenzi et al. 12 proposed a local fusion of them, called the RACE algorithm, via the 2-D random spray that was first utilized in RSR.
In the second category, to enhance details in dark and highly bright regions, image luminance is estimated first and then modified nonlinearly. In general, image luminance can be roughly estimated based on some color space that separates the brightness/lightness from the chromatic components, such as the HSV and YCbCr spaces. Therefore, algorithms in this category focus on the nonlinear modification technique that is used for luminance modification. Some authors tried to adjust global image luminance via histogram modification. 13,14 Although effective in handling the global dynamic range, histogram-based algorithms ignore the spatial relations of pixels and are prone to be affected by spikes in image histograms.
To enhance local contrast, Tao et al. 15 proposed to increase the luminance of dark pixels and decrease the luminance of highly bright pixels via an inverse sigmoid function. Note that the demarcation between underexposure and overexposure was not defined precisely in Ref. 15. After dynamic range compression, Tao et al. 15 utilized the comparison mechanism of MSR to enhance local contrast in the luminance channel. Finally, the enhanced color image was reconstructed by preserving the hue and saturation information of the original image. However, this color-restoration method is apt to produce excessive colors in originally dark regions. Meylan and Susstrunk 16 proposed to implement global luminance adaptation through a power function according to the original average luminance. Then, the local contrast is also enhanced based on the comparison mechanism of MSR. However, due to the global luminance-adaptation procedure, originally highly bright areas are prone to be compressed excessively. Based on the assumption that the expected value of the enhanced luminance is half of the maximum value of the full dynamic range, Schettini et al. 17 modified the traditional gamma correction via an automatic parameter-tuning technique for the gamma value. To avoid smoothing across edges, Schettini et al. 17 utilized the bilateral filter 18 instead of a Gaussian filter, thus reducing halo artifacts. Choudhury and Medioni 19 utilized the V channel of HSV space as the image luminance, and designed a nonlinear transformation based on the logarithmic function. Underexposure and overexposure were divided according to the proportion of pixels that are smaller than 0.5 (the full dynamic range is [0, 1]). Note that this division is fixed for the whole image, and thus is not suitable for images that have a very small area of underexposure or overexposure. In addition, the color-restoration procedure in Ref. 19 is similar to that of Ref. 15 in preserving the original chromaticity, and this is prone to lead to exaggerated colors in dark regions.
In addition, Meylan et al. 20 proposed to consecutively apply two Naka-Rushton 21 transformations for nonlinear adaptation. Since the Naka-Rushton function can increase input values, it performs well on underexposed images. Inspired by Meylan's work, 20 Wang and Luo 22 intensively explored the adaptation mechanism of the Naka-Rushton formula, designed adaptive parameter settings for it, and obtained effective enhancement for underexposed images. Furthermore, to enhance images with partial overexposure, the symmetric version of the Naka-Rushton formula (SNRF) was proposed in Ref. 23 to pull up small intensities and pull down large intensities in the RGB channels.
Recently, using the frequency of local and nonlocal neighboring pixels, Wang et al. 24 proposed a bright-pass filter to estimate image luminance. Further, to preserve the lightness order while modifying luminance, they used a histogram-specification technique. However, due to the preservation of the lightness order, highly bright regions cannot be enhanced adequately. For image-luminance estimation, Shin et al. 25 implemented Gaussian smoothing on the V channel, and then combined the smoothed luminance with the original one according to gradient information to eliminate smoothing at edges. Thereafter, luminance was adjusted through gamma correction, and global contrast was enhanced through histogram modification. Since local contrast is not boosted after gamma correction, the results processed by Ref. 25 present moderate luminance but deficient contrast.
In the third category, image enhancement is implemented in some transformed space. That is to say, a certain 2-D orthogonal transform is first performed on the input image, and then the transform coefficients are modified accordingly. These types of algorithms include alpha rooting, 26,27 logarithmic enhancement, 28,29 and the heap transform. 30

In this work, to enhance images with nonuniform illumination, a local adaptation approach is developed. First, to estimate image luminance, a just-noticeable-difference (JND)-based low-pass filter is devised and applied to the Y channel of YCbCr space. Then, to separate underexposure from overexposure in the luminance image, a locally adaptive demarcation principle is proposed. Different from the globally fixed demarcation value used by Tao et al. 15 and Choudhury and Medioni, 19 the proposed demarcation changes as the local luminance varies, thus adapting to complicated luminance and helping to control the enhancement degree for various regions of an image. According to this adaptive demarcation, local luminance is modified by SNRF to depress highly bright areas and light up dark areas. Next, a color image is reconstructed based on the enhanced luminance and the original chromatic information. With regard to color reconstruction, traditional methods 15,19 that preserve the original chromaticity are prone to exaggerate colors in dark areas and depress colors in highly bright areas. To cope with this, the proposed color-reconstruction technique utilizes a power function of the ratio between the enhanced luminance and the original one. Finally, a compensation technique for local contrast is designed to improve the visual quality of the output color image. Experimental results show that the proposed approach produces moderate details and vivid colors.
The rest of the paper is organized as follows: the proposed algorithm is detailed step by step in Sec. 2. Experimental results and comparisons are presented in Sec. 3. Finally, conclusions are given in Sec. 4.

Image Enhancement via Nonlinear Mapping
A flowchart of the proposed approach is given in Fig. 1, and a sample image with intermediate results is illustrated as well. Precisely, our method consists of four steps: (1) in the Y channel, image luminance is estimated using a JND-based filter, which preserves high-contrast edges while implementing local smoothing; (2) to adjust local luminance adaptively, the estimated luminance is modified by SNRF to pull up underexposure and pull down overexposure; (3) for the sake of natural colors, image color is reconstructed via an exponential function based on the enhanced luminance, the original luminance, and the original chromatic information; (4) to compensate for the contrast degenerated by the dynamic range compression of SNRF in step 2, a local contrast-compensation technique is applied to the RGB channels of the produced color image.

Luminance Estimation Using Just-Noticeable-Difference-Based Filter

Accurate estimation of luminance from a color image is complicated and difficult. Based on the assumption that illumination is spatially smooth, Gaussian filters 8-10,15 were used for illumination estimation. However, these methods are prone to induce halo artifacts at high-contrast edges due to smoothing across edges. To cope with this, many researchers turned to devising locally adaptive filters, such as the bilateral filter, 17,22,23 gradient-driven smoothing operators, 19 and image-content-dependent smoothing techniques. 16,24,25 The common starting point among these adaptive filters is to decrease the smoothing degree at high-contrast edges. In this section, we propose an adaptive smoothing filter based on the JND of pixel values. Rather than depending on the identification of high-contrast edges, this filter truncates the intensities of neighbors according to the JND value of the center pixel.
This work focuses on the nonuniform illumination problem under the assumption that the illumination is neutral and uncolored throughout the whole image. In addition, luminance is used here to denote the brightness of the illumination. Among existing color spaces, the Y channel of YCbCr space is used as a coarse luminance. Mathematically, the Y channel can be linearly transformed from RGB space by

Y(i, j) = 0.2989 R(i, j) + 0.587 G(i, j) + 0.114 B(i, j),  (1)

where R, G, and B are the intensities of the RGB channels, respectively, and (i, j) represents the pixel location in the 2-D spatial space of an image.
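As a quick illustration of the luma transform of Eq. (1), the following sketch computes the Y channel from a synthetic RGB array (the pixel values here are illustrative, not from the paper):

```python
import numpy as np

# Sketch of Eq. (1): the Y (luma) channel of YCbCr computed from RGB.
# The coefficients are the standard BT.601 values; `rgb` is a synthetic
# example image, not one of the paper's test images.
def luminance_y(rgb):
    """rgb: H x W x 3 array of channel intensities. Returns the Y channel."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.2989 * r + 0.587 * g + 0.114 * b

rgb = np.zeros((2, 2, 3))
rgb[..., :] = [100.0, 100.0, 100.0]  # a neutral gray patch
y = luminance_y(rgb)                 # gray in, (almost) the same value out
```

Since the three coefficients sum to (nearly) one, a neutral gray pixel keeps its value under the transform, which is why Y serves as a reasonable brightness proxy.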
JND refers to the value below which any change cannot be visually perceived by the HVS. 31 Lin 32 surveyed the computational models for JND, including models in the pixel domain. In the luminance component, luminance adaptation and texture masking are the two major factors to be considered for pixel-wise JND estimation: 33

P_JND(i, j) = max{T_L(i, j), T_t(i, j)},  (2)

T_L(i, j) = 17 (1 - sqrt(L(i, j)/127)) + 3,  if L(i, j) <= 127,
T_L(i, j) = (3/128) (L(i, j) - 127) + 3,     otherwise,  (3)

where L(i, j) denotes the background luminance of the pixel located at (i, j), and can be computed by a local averaging operation within a small (e.g., 3 × 3) neighborhood. Note that Eq. (3) implies that the pixel intensities of a gray image are in the range of [0, 255]. In Eq. (2), apart from T_L(i, j), the texture-masking factor T_t(i, j) can be estimated after smooth, edge, and texture regions have been discriminated. For simplification, luminance adaptation can be considered the dominant factor in JND estimation. 32 Therefore, the pixel-wise JND is roughly estimated by T_L(i, j) in this paper, which means

P_JND(i, j) ≈ T_L(i, j).  (4)

To preserve high-contrast edges while smoothing an image via a weighted average of neighbors, an intuitive idea is that a neighbor that has a large numerical difference from the center pixel should be treated carefully. For ease of presentation, we call the neighbors that need to be treated carefully odd neighbors. Motivated by this idea, the bilateral filter 18 decreases the weights of odd neighbors by multiplying the Gaussian kernel in the spatial domain with a Gaussian kernel in the range domain. In this work, we devise an alternative method that manipulates odd neighbors to take values similar to the center pixel. Mathematically, the values of neighbors are truncated according to the corresponding JND value of the center pixel. With regard to a center pixel, its neighbors can be divided into three categories, and their values are truncated through

Q(B, A) = A - P_JND(A),  if B < A - P_JND(A),
Q(B, A) = B,             if |B - A| <= P_JND(A),
Q(B, A) = A + P_JND(A),  if B > A + P_JND(A),  (5)

where A denotes the luminance value of a center pixel, and B represents the luminance value of a neighbor around pixel A. The notation Q(B, A) is the truncated output for a pixel with the value B when it acts as a neighbor of pixel A. In addition, P_JND(A) is the corresponding JND value of pixel A, and is roughly computed using Eq. (4).
Based on the truncating mechanism in Eq. (5), image luminance can be estimated as the local weighted average of the JND-truncated neighbors. Consequently, this averaging process can be implemented by assigning spatial Gaussian weights to the JND-truncated neighbors, and the JND-based filter is defined as

Y_JND(i, j) = Σ_{(m,n)∈Ω} w(m, n) Q(Y(i + m, j + n), Y(i, j)),  (6)

w(m, n) = exp(-(m^2 + n^2)/(2 σ_s^2)) / Σ_{(m,n)∈Ω} exp(-(m^2 + n^2)/(2 σ_s^2)),  (7)

where Ω is the local neighborhood and σ_s denotes the spatial spread of the Gaussian kernel. According to Eqs. (6) and (7), the estimated luminance of pixel (i, j) equals the Gaussian weighted average of its JND-truncated neighbors. Note that the JND-truncated neighbors have values similar to the center pixel based on Eq. (5). Therefore, the proposed JND-based filter can preserve edges while smoothing an image. Figure 2 illustrates the filtering results: in Fig. 2(c), the range spread is larger than that of Fig. 2(d), resulting in a smoother image.
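The truncate-then-average scheme can be sketched directly. In this minimal sketch, T_L follows the classic Chou-Li luminance-adaptation model, which we take to be the form intended by Eq. (3) (an assumption to be checked against Ref. 33); the window radius and σ_s values are illustrative choices:

```python
import numpy as np

# Sketch of the JND-based filter: each neighbor is clamped to within one
# JND of the center pixel (the truncation of Eq. (5)), then averaged with
# spatial Gaussian weights. Intensities are assumed to lie in [0, 255].
def t_luminance(bg):
    """Luminance-adaptation threshold T_L (assumed Chou-Li form)."""
    return np.where(bg <= 127,
                    17.0 * (1.0 - np.sqrt(bg / 127.0)) + 3.0,
                    3.0 / 128.0 * (bg - 127.0) + 3.0)

def jnd_filter(Y, radius=2, sigma_s=1.5):
    """Gaussian average of JND-truncated neighbors."""
    H, W = Y.shape
    pad = np.pad(Y, radius, mode='edge')
    # background luminance: 3x3 local mean, as used by the JND model
    bg = np.zeros_like(Y)
    p1 = np.pad(Y, 1, mode='edge')
    for di in range(3):
        for dj in range(3):
            bg += p1[di:di + H, dj:dj + W]
    bg /= 9.0
    jnd = t_luminance(bg)                  # P_JND ~= T_L, as in Eq. (4)
    num = np.zeros_like(Y)
    den = 0.0
    for di in range(-radius, radius + 1):
        for dj in range(-radius, radius + 1):
            w = np.exp(-(di * di + dj * dj) / (2.0 * sigma_s ** 2))
            B = pad[radius + di:radius + di + H, radius + dj:radius + dj + W]
            # truncate each neighbor into [A - P_JND(A), A + P_JND(A)]
            Q = np.clip(B, Y - jnd, Y + jnd)
            num += w * Q
            den += w
    return num / den

# A step edge: the filter smooths mildly but cannot drag a pixel more
# than one JND toward the other side, so the edge survives.
Y = np.full((8, 8), 40.0)
Y[:, 4:] = 200.0
Y_jnd = jnd_filter(Y)
```

Pixels far from the step are unchanged, while pixels next to it move by at most their own JND, which is the edge-preserving behavior the text describes.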

Adaptive Modification to Luminance
Within a luminance image, underexposure can be enhanced by lighting up dark regions, and overexposure can be addressed by dimming pixels that are extremely bright. Inspired by this idea, we adopt the symmetric Naka-Rushton formula (SNRF) proposed in our previous publication 23 to implement luminance modification. Precisely, SNRF is derived from the original Naka-Rushton equation 21 through a symmetric transformation. The original Naka-Rushton equation is defined as

V = L / (L + H),  (8)

where L and V denote the input and output signals, respectively, and the parameter H controls the degree of adaptation.
For an image, the normalized Naka-Rushton equation can be expressed as

L_N(i, j) = [L(i, j) / (L(i, j) + H)] · [(L_Max + H) / L_Max],  (9)

where (i, j) denotes the pixel location, and L_Max is the maximum intensity in the image L. Note that for a standard 24-bit color image, pixel intensity is in the range of [0, 255]. When using Eq. (9), pixel intensity needs to be rescaled into the range of [0, 1] by the division L/255. Applied to an input in the range of [0, 1], the normalized Naka-Rushton formula has an upper convex curve, which implies that the formula can be utilized for lighting up underexposed regions or pixels with small intensities. An illustrative example is shown in Fig. 3 (see the two curves with the legend "Naka-Rushton"). In contrast, the formula that is symmetric to the Naka-Rushton formula about the point (0.5, 0.5) can be used for dimming large intensities. Further, SNRF is formulated by integrating these two formulas at a point that serves as the demarcation between underexposure and overexposure. Applied to a luminance image, SNRF is formulated as

Y_sym = Y (T + H_low) / (Y + H_low),                       if Y <= T,
Y_sym = 1 - (1 - Y)(1 - T + H_high) / (1 - Y + H_high),    if Y > T,  (10)

where Y is the original luminance image and Y_sym is the luminance modified by SNRF. The parameter T represents the pixel-wise demarcation between underexposure and overexposure. The other two parameters, H_low and H_high, control the degree of adaptation produced by SNRF.
To save space, the same location index (i, j) is omitted from Y, Y_sym, T, H_low, and H_high in Eq. (10). A pixel Y(i, j) that is smaller than the corresponding demarcation T(i, j) is treated as underexposed; otherwise, it is deemed overexposed. Figure 3 also illustrates several curves of SNRF with different parameters. For concise expression, it is assumed that H_low = H_high = H in Fig. 3. Comparing the two curves composed of plus and dot markers, we can see that a larger value of T leads to larger global output. In addition, with a fixed T = 0.6, the curve of SNRF approaches the line "output = input" as H increases. In other words, the output Y_sym increases as H decreases when Y < T, and the opposite occurs when Y > T.
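The display form of Eq. (10) did not survive extraction cleanly in this copy, so the sketch below uses the piecewise form we reconstruct from the stated properties (Naka-Rushton below T, its point-symmetric counterpart above T, continuity at Y = T, and the described monotone behavior in H); it should be checked against Ref. 23:

```python
import numpy as np

# Sketch of SNRF as reconstructed from the text: for Y <= T a rescaled
# Naka-Rushton curve lifts dark pixels; for Y > T its point-symmetric
# counterpart dims bright pixels. Inputs and outputs lie in [0, 1], and
# both branches meet at (T, T). The exact published form is in Ref. 23.
def snrf(Y, T, H_low, H_high):
    low = Y * (T + H_low) / (Y + H_low)                          # lifts Y <= T
    high = 1.0 - (1.0 - Y) * (1.0 - T + H_high) / (1.0 - Y + H_high)
    return np.where(Y <= T, low, high)

Y = np.linspace(0.0, 1.0, 101)
out = snrf(Y, T=0.6, H_low=0.5, H_high=0.5)  # dark lifted, bright dimmed
```

With these illustrative parameters the curve fixes 0 and 1, raises inputs below the demarcation, and lowers inputs above it, matching the behavior of the "SNRF" curves described for Fig. 3.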
To manipulate complicated luminance adaptively, the demarcation T between underexposure and overexposure is set pixel-wise. As for SNRF, a smaller demarcation T leads to larger output. Therefore, T is set to vary inversely with the local luminance, assigning more increment to darker regions. In addition, T also changes contrarily to the global luminance so as to light up globally dark images. Mathematically, the demarcation T(i, j) is devised to be a transformed version of the sigmoid function (Eq. (11)), decreasing as either the local or the global luminance increases, where Y_median is the median intensity value of the input luminance image Y and is used to represent the global luminance. The notation Y_JND is the luminance estimated by the JND-based filter in Sec. 2.1, and is utilized to measure the local luminance. Curves of Eq. (11) with different values of Y_median are illustrated in Fig. 4, where the input is Y_JND(i, j). In addition, Fig. 5 exhibits a color image and its T image. It is shown that originally darker areas correspond to larger T values, which compels dark regions to be categorized as underexposure. On the contrary, originally brighter regions correspond to smaller T values.

Fig. 3 Curves with different parameters. The curves with the legend "Naka-Rushton" are those for Eq. (9); the curves with the legend "SNRF" are those for Eq. (10). Fig. 4 Curves of Eq. (11) with different Y_median.
In the SNRF defined by Eq. (10), H_low works in the first case to control the adaptation degree for underexposure. Moreover, it has been shown in Fig. 3 that the output of SNRF varies inversely with H_low. To revive underexposed regions, larger increments need to be assigned to locally darker pixels. In addition, severe underexposure also needs obvious promotion. Based on the above analysis, H_low is formulated as

H_low(i, j) = Y_JND(i, j) + 0.5 Y_mlow,  (12)

where Y_JND represents the local luminance, and Y_mlow denotes the mean value of the pixels that are categorized as underexposed by the pixel-wise demarcation T(i, j).
Different from H_low, H_high serves as the adaptation factor in the symmetric version of the Naka-Rushton formula, i.e., the second case in Eq. (10), for enhancing overexposed regions. Figure 3 has shown that the output of SNRF changes directly with H_high. Consequently, H_high is set as

H_high(i, j) = [1 - Y_JND(i, j)] + 0.5 (1 - Y_mhigh),  (13)

where Y_mhigh represents the mean value of the pixels that are categorized as overexposed by the pixel-wise demarcation T(i, j). The value of Y_mhigh gets larger if the overexposed regions are brighter, and then an obvious decrement is assigned to the overexposed regions due to the smaller H_high values.
Substituting the parameter settings in Eqs. (11)-(13) into the SNRF in Eq. (10), an adaptive luminance-modification technique is obtained for enhancing images that suffer from exposure problems. Further, to improve the global contrast of the output of SNRF, the histogram of Y_sym is linearly stretched into the range of [0, 1].
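The parameter pipeline can be sketched end to end. Because the explicit sigmoid of Eq. (11) is not recoverable from this copy, `demarcation` below is a HYPOTHETICAL logistic form chosen only to reproduce the stated monotonicity (T falls as either the local luminance or the global median rises); `H_low` follows Eq. (12), while the `H_high` expression mirrors it and is likewise an assumption to be checked against Eq. (13):

```python
import numpy as np

# Sketch of the adaptive parameter settings feeding SNRF (Sec. 2.2).
# `demarcation` is a hypothetical stand-in for Eq. (11): a logistic curve
# that decreases in both the local luminance Y_jnd and the global median.
def demarcation(Y_jnd, y_median, k=6.0):
    # hypothetical form: brighter locally or globally -> smaller T
    return 1.0 / (1.0 + np.exp(k * (Y_jnd + y_median - 1.0)))

def snrf_parameters(Y_jnd):
    y_median = float(np.median(Y_jnd))
    T = demarcation(Y_jnd, y_median)
    under = Y_jnd < T                      # pixel-wise under/over split
    y_mlow = float(Y_jnd[under].mean()) if under.any() else 0.0
    y_mhigh = float(Y_jnd[~under].mean()) if (~under).any() else 1.0
    H_low = Y_jnd + 0.5 * y_mlow                      # Eq. (12)
    H_high = (1.0 - Y_jnd) + 0.5 * (1.0 - y_mhigh)    # assumed mirror of Eq. (12)
    return T, H_low, H_high

def stretch(Y):
    """Linear histogram stretch into [0, 1], as applied to Y_sym."""
    return (Y - Y.min()) / (Y.max() - Y.min())

Y_jnd = np.array([[0.1, 0.2], [0.8, 0.9]])   # two dark, two bright pixels
T, H_low, H_high = snrf_parameters(Y_jnd)
```

On this toy input the dark pixels receive a large demarcation (so they count as underexposed) and the bright pixels a small one, which is the pixel-wise behavior Fig. 5 illustrates.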

Color Image Reconstruction
With the modified luminance image, a color image is constructed in this section based on the chromatic information of the original color image. At present, in many works 15,19,24,25 that modify the luminance channel, the final color image is restored by strictly preserving the original chromaticity using the following mechanism:

R'(i, j) = R(i, j) Y'(i, j) / Y(i, j),
G'(i, j) = G(i, j) Y'(i, j) / Y(i, j),
B'(i, j) = B(i, j) Y'(i, j) / Y(i, j),  (14)

where Y' denotes the modified luminance and Y is the original luminance. The notations R, G, and B represent the RGB channels of the original color image, and R', G', and B' are the reconstructed RGB channels. However, the color-reconstruction process in Eq. (14) is prone to exaggerate the color of underexposed regions and depress the color of overexposed regions. Consider an underexposed pixel (i, j) in the original color image: its luminance has been increased, so Y'(i, j) > Y(i, j). Mathematically, the differences among the RGB triplet [R(i, j), G(i, j), B(i, j)] are enlarged by Eq. (14). Thus, compared to the original color image, the chroma at (i, j) is magnified and the color appearance is exaggerated. Figures 6(a) and 6(b) illustrate an input image and the color image reconstructed via Eq. (14). Without loss of generality, the modified luminance Y' used in Fig. 6 is produced by a general gamma-correction method rather than SNRF. The gamma value is set to 1 + Y(i, j) - 0.7, and therefore pixels that are smaller than 0.7 are promoted and others are dimmed. It is shown in Fig. 6(b) that the blue car and green trees in the originally dark regions suffer from color distortion. In addition to Eq. (14), Ref. 17 utilized a linear combination of the ratio and the difference between Y' and Y for color reconstruction. The corresponding color-reconstruction output is given in Fig. 6(c). It can be seen that the excessive color of the originally dark regions is alleviated, but the global contrast gets worse.
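The chroma magnification of the ratio-preserving scheme can be seen numerically on a synthetic pixel (the values below are illustrative, assumed normalized to [0, 1]):

```python
import numpy as np

# Sketch of the conventional restoration of Eq. (14): every RGB channel
# is scaled by the same luminance ratio Y'/Y. Chromaticity is preserved
# exactly, but the absolute R-G-B differences grow by that same ratio
# whenever Y' > Y, which is what exaggerates color in lightened regions.
def restore_naive(rgb, Y, Y_new):
    ratio = (Y_new / Y)[..., None]      # broadcast over the 3 channels
    return rgb * ratio

# A dark bluish pixel, lightened 3x: the channel gaps triple as well.
rgb = np.array([[[0.05, 0.10, 0.20]]])  # synthetic 1x1 image
Y = np.array([[0.10]])
out = restore_naive(rgb, Y, 3.0 * Y)
```

The blue-red gap goes from 0.15 to 0.45: the hue is unchanged, but the chroma is three times larger, matching the distortion visible in Fig. 6(b).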
Inspired by the color-restoration mechanism in Eq. (14), to alleviate the magnification of the RGB differences in underexposed regions and maintain the RGB differences in overexposed regions, we propose to reconstruct the color image by

R'(i, j) = R(i, j) [Y'(i, j)/Y(i, j)]^(1 - sqrt(R(i, j))),
G'(i, j) = G(i, j) [Y'(i, j)/Y(i, j)]^(1 - sqrt(G(i, j))),
B'(i, j) = B(i, j) [Y'(i, j)/Y(i, j)]^(1 - sqrt(B(i, j))),  (15)

where the operator sqrt(·) denotes the extraction of the square root. Compared with Eq. (14), the coefficients for the RGB components are revised by an adaptive exponent in Eq. (15). For underexposed regions, we have Y' > Y, and the corresponding coefficients in Eq. (15) change inversely with the R, G, and B values. In this case, for a pixel with the triplet (r, g, b), if r > g > b, we obtain the inequality 1 < (Y'/Y)^(1 - sqrt(r)) < (Y'/Y)^(1 - sqrt(g)) < (Y'/Y)^(1 - sqrt(b)), and thus the output color is not excessively exaggerated.
On the contrary, for overexposed regions, we have Y' < Y, and the corresponding coefficients in Eq. (15) change synchronously with the R, G, and B values. Therefore, according to the above derivation, the reconstructed color of overexposed regions is revived. The result of the color-reconstruction method in Eq. (15) is illustrated in Fig. 6(d), where the color and global contrast are visually appropriate.
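Repeating the synthetic-pixel experiment with the exponent-adjusted coefficients shows the intended effect (values illustrative, normalized to [0, 1]):

```python
import numpy as np

# Sketch of the proposed reconstruction: the luminance ratio Y'/Y is
# raised to 1 - sqrt(C) for each channel value C, so larger channels get
# a smaller multiplier and the R-G-B gaps are not blown up when a dark
# region is lightened.
def restore_proposed(rgb, Y, Y_new):
    ratio = (Y_new / Y)[..., None]
    return rgb * ratio ** (1.0 - np.sqrt(rgb))

rgb = np.array([[[0.05, 0.10, 0.20]]])   # synthetic dark bluish pixel
Y = np.array([[0.10]])
naive = rgb * (3.0 * Y / Y)[..., None]   # Eq. (14)-style scaling, for comparison
prop = restore_proposed(rgb, Y, 3.0 * Y)
```

Every channel is still lightened and the channel ordering (hence the hue tendency) is preserved, but the blue-red gap grows far less than under uniform scaling, which is why the reconstructed colors in Fig. 6(d) look less exaggerated.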

Local Contrast Compensation
During the luminance-modification process in Sec. 2.2, SNRF lights up small intensities and pulls down large intensities in the luminance image, and therefore local contrast is prone to be degenerated. In this section, local contrast is improved to produce vivid images. The basic idea of contrast enhancement is that a pixel should be increased if it is larger than its neighbors, and decreased if it is smaller than its neighbors. Based on this idea, pixel-wise differences and ratios between the center pixel and its neighbors were employed by ACE 11 and retinex algorithms, 12 respectively. In addition, Tao et al. 15 utilized the comparison mechanism of the center/surround retinex 8-10 to improve local contrast via an exponential function

O(i, j) = [I(i, j)]^E(i, j),  (16)

where I(i, j) is a gray image, e.g., a luminance image or a channel of RGB space, with pixel intensity in the range of [0, 1]. The exponent factor E(i, j) is defined as

E(i, j) = [F * I(i, j) / I(i, j)]^p,  (17)

where the notation F represents a low-pass filter, and the operator * denotes convolution. With regard to the filter F, a Gaussian filter was used in Ref. 15, and the bilateral filter 18 was used later in Ref. 23 to eliminate halo artifacts. The parameter p controls the enhancing degree and is set within the range [0.5, 2] according to the global standard deviation. Mathematically, if a center pixel I(i, j) is smaller than the weighted average of its neighbors, i.e., I(i, j) < F * I(i, j), the input intensity is decreased. Otherwise, E(i, j) is smaller than 1, and thus the input intensity is increased to some extent. However, when Eq. (16) is used for contrast enhancement, dim areas are prone to be exaggerated and bright areas cannot be enhanced sufficiently. Figure 7(c) shows the output of implementing Eq. (16) on Fig. 7(b), which is the processed result after Sec. 2.3. In Fig. 7(c), details of the vines on the wall are excessively enhanced and almost grayed out, especially in the lower right region of the image. On the other hand, the contrast between the sky and the clouds is improved only slightly compared to Fig. 7(b). To solve these problems, we propose to modify Eq. (16) by its symmetric formula and obtain

O(i, j) = [I(i, j)]^E(i, j),                 if I(i, j) <= BF * I(i, j),
O(i, j) = 1 - [1 - I(i, j)]^E_sym(i, j),     otherwise,  (18)

with E(i, j) = [BF * I(i, j) / I(i, j)]^p and E_sym(i, j) = {[1 - BF * I(i, j)] / [1 - I(i, j)]}^p, where BF denotes the bilateral filter, and BF * I(i, j) is used to approximate the intensity level of the neighbors. In Eq. (18), if a center pixel is smaller than its neighbors, it is decreased by the same technique as in Eq. (16); otherwise, the center pixel is pulled up by the symmetric version of Eq. (16). Figure 7(d) shows the output of implementing Eq. (18) on Fig. 7(b). Compared to Fig. 7(c), details of the vines are revived without extreme exaggeration. Moreover, the contrast of the originally bright areas, e.g., the sky and clouds, is also improved properly.
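The two-branch compensation can be sketched as follows. To keep the example short, a plain Gaussian low-pass stands in for the paper's bilateral filter BF, and the symmetric-branch exponent is written in the form consistent with the text (a labeled reading, since the display equation did not survive extraction here):

```python
import numpy as np

# Sketch of the local contrast compensation: pixels below their surround
# are darkened by the exponential rule of Eqs. (16)-(17); pixels above it
# are lifted by the point-symmetric rule. Intensities lie in [0, 1].
# NOTE: a Gaussian blur replaces the paper's bilateral filter here.
def gaussian_blur(I, radius=2, sigma=1.5):
    H, W = I.shape
    pad = np.pad(I, radius, mode='edge')
    num = np.zeros_like(I)
    den = 0.0
    for di in range(-radius, radius + 1):
        for dj in range(-radius, radius + 1):
            w = np.exp(-(di * di + dj * dj) / (2.0 * sigma ** 2))
            num += w * pad[radius + di:radius + di + H,
                           radius + dj:radius + dj + W]
            den += w
    return num / den

def compensate_contrast(I, p=1.0, eps=1e-6):
    S = gaussian_blur(I)                        # surround (stand-in for BF * I)
    low = I ** ((S / (I + eps)) ** p)           # darkens pixels below surround
    high = 1.0 - (1.0 - I) ** (((1.0 - S) / (1.0 - I + eps)) ** p)  # lifts pixels above
    return np.where(I <= S, low, high)

# A small bright patch on a gray background: the patch should get
# brighter and its surroundings slightly darker, widening local contrast.
I = np.full((8, 8), 0.5)
I[3:5, 3:5] = 0.8
out = compensate_contrast(I)
```

A pixel whose surround equals its own value is left untouched, so flat regions are stable; only genuine local structure is amplified, in both directions.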

Experimental Results
The proposed approach has been tested on images that suffer from underexposure, overexposure, or both problems.
Comparisons have been made with some classic algorithms, including the multiscale retinex algorithm MSRCR, 10 RACE, 12 which is a locally fused version of the RSR 5 and ACE 11 algorithms, and an algorithm 20 that uses the Naka-Rushton formula for tone mapping. Moreover, the proposed approach has also been compared with several recently proposed algorithms: algorithms that emphasize naturalness preservation, 24,25,35 the original SNRF algorithm, 23 and an algorithm using alpha rooting. 26 The performance of image-enhancement algorithms can be evaluated via objective measures 26,36,37 that take colorfulness, contrast, or sharpness into account. In this paper, the color-image-enhancement (EMEC) measure 26 and the color-quality-enhancement (CQE) measure 37 are used to facilitate comparisons between different enhancement results.
Enhancing Results for Underexposure Images

Figure 8(a) was provided by Gehler et al., 39 and shows a scene where the foreground is underexposed and the background is exposed properly. In Fig. 8, the CQE and EMEC values are listed below every image. As shown in Fig. 8(b), MSRCR brings good contrast, thus having large CQE and EMEC values. However, it suffers from halo artifacts, such as at the edge between the trunk and the background meadowland. In Fig. 8(c), the output color of RACE has been grayed out slightly because the RGB channels were processed separately. For further comparisons, Fig. 9 shows zoomed-in partial regions cropped from Fig. 8, and the CQE and EMEC values of these partial regions are also listed below the images in Fig. 9. Comparing Fig. 9(g) with Fig. 9(i), their CQE and EMEC values are comparable, but image details are clearer in Fig. 9(i).

Enhancing Results for Images with Both Underexposure and Overexposure
Figure 13(a) shows an image with "a car in the sunset," used by courtesy of Greenspun.40 The car is underexposed and the sunset area is overexposed. The output of MSRCR is given in Fig. 13(b), where details of the car are revealed clearly. However, the front of the car suffers from halo artifacts. In Fig. 13(f), image luminance has been adapted successfully. However, image contrast still needs to be enhanced; for example, the contrast between the sky and the cloud is rather inferior to that of the original image in Fig. 13(a). In Fig. 13(g), image luminance is moderate, but the originally dark regions under the car are exaggerated. In addition, the global image color is distorted and slightly tends to red, such as in the sky. In Fig. 13(h), the original red car slightly turns blue. The output of the proposed approach is illustrated in Fig. 13(i). Compared with the previous results in Fig. 13, Fig. 13(i) shows clear details and good contrast without halo artifacts or color distortion. Moreover, the CQE value of Fig. 13(i) is larger than those of the other images in Fig. 13. In addition, Fig. 13(i) has the second-largest EMEC value among the images in Fig. 13, its EMEC value being a little smaller than that of Fig. 13(d). We can notice that the clouds around the sunset area have been largely washed out in Fig. 13(d).
Figure 14(a) shows a scene that contains a spacecraft. The bottom of the spacecraft is dark, and details of other areas are slightly washed out due to overexposure. The output of MSRCR is given in Fig. 14(b), and it shows good global contrast. However, the originally dark regions suffer from halo artifacts. For instance, the edges of the originally dark bottom of the spacecraft are still dark while other regions of the bottom are revealed. Figure 14(h) shows the result of the alpha-rooting algorithm,26 and it has a relatively large CQE value.

Contrast Enhancement for Images with Normal Exposure
The performance of the proposed approach and several algorithms on underexposure or overexposure images has been compared in the previous sections. In fact, when dealing with an image, image-processing algorithms do not know which exposure problem the input image has. Consequently, even if the input image has proper luminance, it will be modified as well. Therefore, in this section, images with proper luminance are utilized as the input images to show the performances of different image-enhancement algorithms.
A colorful scenery taken under natural light is shown in Fig. 16. The result of MSRCR is given in Fig. 17(b), where color distortion occurs. For instance, the hair and face, which are surrounded by the red hat and scarf, are slightly green. According to the CQE and EMEC measures, the proposed algorithm produces a better result than the other methods involved in Fig. 17. In detail, compared with the previous results in Figs. 17(b)-17(f), the proposed output shows better contrast and color. For example, the face in Fig. 17(i) is much clearer, and the textures on the hat and scarf are enhanced more effectively.

Conclusion
In this work, we presented a local enhancement approach for nonuniform illumination images whose details are corrupted by underexposure or overexposure. The proposed approach modifies the luminance component of an image to light up underexposure and darken overexposure. First, to estimate the luminance component, pixel-wise JND values are integrated with the Gaussian filter to preserve edges while smoothing the Y channel in YCbCr space. Then, to discriminate between underexposure and overexposure, a pixel-wise demarcation is devised based on local and global luminance levels. For luminance modification, SNRF is employed to increase underexposure pixels and decrease overexposure pixels. Next, to reconstruct a natural color image, an exponential technique is formulated to combine the modified luminance with the original RGB components. Finally, to improve local contrast, which is prone to be degenerated through luminance modification, a local and image-dependent exponential method is designed and applied to the reconstructed color image.
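The steps above can be summarized in a structural sketch. The JND-weighted smoothing, the demarcation formula, SNRF, and the exponential reconstruction are each replaced by simplified stand-ins here (the adaptive-gamma mapping and the luminance-ratio scaling are assumptions for illustration, not the paper's equations).

```python
import numpy as np

def enhance_sketch(rgb):
    """Structural sketch of the proposed pipeline. Each numbered step
    corresponds to a stage of the method, but the formulas are simplified
    stand-ins, not the published equations."""
    rgb = rgb.astype(float)
    # 1. Luminance estimation: the paper smooths Y with a JND-weighted
    #    Gaussian filter; the plain Y channel is used here instead.
    Y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    # 2. Pixel-wise demarcation between under- and overexposure: here a
    #    crude blend of the local value and the global mean luminance.
    d = np.clip(0.5 * (Y + Y.mean()), 1.0, 254.0)
    # 3. Luminance modification (SNRF stand-in): an adaptive gamma that
    #    lifts pixels below the demarcation and dims pixels above it.
    gamma = np.log(0.5) / np.log(d / 255.0)
    Ym = 255.0 * (Y / 255.0) ** gamma
    # 4. Color reconstruction: scale RGB by the luminance-change ratio
    #    (the paper uses a chromaticity-aware exponential combination).
    ratio = (Ym + 1e-6) / (Y + 1e-6)
    out = rgb * ratio[..., None]
    # 5. Local contrast compensation (Sec. 2.4) is omitted in this sketch.
    return np.clip(out, 0.0, 255.0)
```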
To validate the effectiveness of the proposed approach, experimental tests were made on four types of images: underexposure images, overexposure images, images with both underexposure and overexposure, and images with normal exposure. Moreover, comparisons were made between the proposed method and other solutions, including retinex-based algorithms (MSRCR,10 RACE12), some recent algorithms24,25,35 for tone mapping, an alpha-rooting method,26 and algorithms20,23 related to SNRF. Comparisons between experimental results show that the proposed algorithm has the merit of good contrast and vivid color for enhancing nonuniform illumination images. In addition, the comparisons also demonstrate that the proposed algorithm enhances contrast more effectively for images with normal exposure.

Fig. 1 Flowchart of the proposed approach and intermediate results after the corresponding steps.

Fig. 2 Comparison between several smoothing filters. (a) Original image, (b) result of the Gaussian filter with a spatial spread of 4, (c) output of the bilateral filter where the spatial and range spreads are 4 and 25 (supposing that pixel values are in the range [0, 255]), (d) output of the bilateral filter where the spatial and range spreads are 4 and 12, (e) result of the JND-based filter with a spatial spread of 4, and (f) result of the JND-based filter with a spatial spread of 8.

Fig. 5 A color image and its T image.

Fig. 7 Comparative example of local contrast compensation. (a) Original color image, (b) intermediate result after Secs. 2.2 and 2.3, (c) result of applying Eq. (16) to the RGB channels of image (b), and (d) result of applying Eq. (17) to the RGB channels of image (b).
In Fig. 8(d), Meylan's algorithm20 sacrifices details in the bright background because the global luminance has been increased via the Naka-Rushton formula. Fortunately, because details in the originally dark regions have been revived, Fig. 8(d) still has relatively large CQE and EMEC values.
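The behavior criticized here, that the Naka-Rushton mapping never decreases an intensity, is easy to verify in a small sketch. This is a normalized variant with sigma tied to the mean input level; Meylan's actual adaptation of the formula in Ref. 20 differs in detail.

```python
import numpy as np

def naka_rushton(I, I_max=255.0, sigma=None):
    """Normalized Naka-Rushton mapping: out = I * (I_max + sigma) / (I + sigma).
    Since out - I = I * (I_max - I) / (I + sigma) >= 0 for 0 <= I <= I_max,
    every pixel is raised (or kept), so bright regions can only lose headroom."""
    I = np.asarray(I, dtype=float)
    if sigma is None:
        sigma = float(I.mean())   # adaptation level tied to global luminance
    return I * (I_max + sigma) / (I + sigma)
```

Because the output is always at least the input, dark regions are revived, but already-bright regions are compressed toward the maximum, which explains the washed-out backgrounds noted in the comparisons.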

Figure 8(e) gives the output of Ref. 24, and the originally dark foreground has been revealed clearly. However, observing the MacBeth-color-checker board in the lower right corner, the color blocks are corrupted at their edges. Due to the corruption of small/slight edges, Fig. 8(e) does not possess large CQE and EMEC values.
output seriously. Note that Fig. 8(g) shows good global contrast, thus having large CQE and EMEC values. As the output of Ref. 23, Fig. 8(h) shows clear details. However, similar to Fig. 8(b), color shift occurs at the stones and soil at the bottom of the image. Figure 8(i) exhibits the result of the proposed algorithm, where details of the foreground are revived clearly and the background is protected from being washed out. Although the CQE and EMEC values of Fig. 8(i) are slightly smaller than those of Fig. 8(g), the details in Fig. 8(i) are better, such as the tree trunks.

Figure 10(a) shows an image where the color is slightly washed out and the contrast is depressed due to overexposure. As demonstrated in Figs. 10(b) and 10(c), the MSRCR and RACE algorithms enhance image contrast slightly, but the global luminance is still quite bright. Correspondingly, the CQE and EMEC values of Figs. 10(b) and 10(c) are only slightly larger than those of Fig. 10(a). The output of Ref. 20 is shown in Fig. 10(d), and it is rather overexposed because the Naka-Rushton formula has been used to pull up pixel intensities globally. Figure 10(e) gives the output of Ref. 24, and the overexposure has not been solved effectively, which is also reflected by the CQE and EMEC values. Shin et al.25 applied a histogram-equalization procedure to the modified probability density function of an image after gamma correction, thus obtaining proper luminance in Fig. 10(f). However, they used the method in Eq. (14) for color reconstruction, and therefore the image color is depressed. Consequently, the CQE and EMEC values of Fig. 10(f) are still close to those of Fig. 10(a). As shown in Fig. 10(g), the algorithm in Ref. 35 modifies luminance effectively and produces good contrast, but the image color is dim for the same reason as in Fig. 10(f). Reference 23 applied SNRF separately to the RGB channels, and the result is given in Fig. 10(h). It can be seen that the global luminance has been modified properly, but the image contrast still needs improvement. At last, the output of the proposed algorithm is given in Fig. 10(i), which shows proper luminance and promising contrast. Compared with Figs. 10(b)-10(e), the luminance in Fig. 10(i) is more suitable. Moreover, compared with Figs. 10(f) and 10(g), which also have proper luminance, Fig. 10(i) shows more vivid colors. Therefore, the CQE and EMEC values of Fig. 10(i) are larger than those of the others in Fig. 10.
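Shin et al.'s strategy, as described above, combines gamma correction with histogram equalization of a modified probability density function. A bare-bones version of that recipe (plain equalization, without the published PDF modification; an illustrative stand-in, not the actual method of Ref. 25) looks like:

```python
import numpy as np

def gamma_then_equalize(gray, gamma=0.8):
    """Gamma correction followed by histogram equalization of the result.
    Shin et al. (Ref. 25) equalize a *modified* PDF; the plain histogram
    is used here as a simplified stand-in."""
    g = 255.0 * (gray.astype(float) / 255.0) ** gamma
    levels = g.astype(np.uint8)
    hist = np.bincount(levels.ravel(), minlength=256)
    cdf = hist.cumsum() / levels.size     # cumulative distribution of levels
    return 255.0 * cdf[levels]            # map each level to its CDF value
```

Equalization stretches the occupied gray levels across the full range, which is why such methods obtain proper luminance and contrast even though, as noted above, a separate color-reconstruction step still decides how vivid the final colors are.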
Figure 11(a) demonstrates an image where the contrast is degenerated due to overexposure. (CQE and EMEC values for Fig. 11: (a) CQE=0.67, EMEC=11.60; (b) CQE=1.16, EMEC=15.62; (c) CQE=0.74, EMEC=10.43; (d) CQE=1.14, EMEC=17.46; (e) CQE=0.75, EMEC=11.43; (f) CQE=0.72, EMEC=11.50; (g) CQE=0.84, EMEC=13.70; (h) CQE=0.99, EMEC=14.69; (i) CQE=1.26, EMEC=17.26.) As shown in Figs. 11(b)

and 11(c), local contrast is improved only slightly by MSRCR and RACE. The results of Refs. 20 and 24 are illustrated in Figs. 11(d) and 11(e), where the enhancing effects are not visually obvious. Consequently, the CQE and EMEC values of Figs. 11(b)-11(e) are not much larger than those of Fig. 11(a). Compared with Figs. 11(d) and 11(e), the luminance of the last four images in Fig. 11 is proper. However, the colors of Figs. 11(f) and 11(g) are depressed due to their color-reconstruction mechanism and tend to be dim. In Fig. 11(h), local contrast still needs to be improved, such as the contrast between the sky and the clouds. Figure 11(i) shows the result of the proposed approach, and the image shows clear details and good contrast without color distortion. The CQE values of Figs. 11(g) and 11(i) are comparable, and Fig. 11(g) has a larger EMEC value than Fig. 11(i). This is because Ref. 35 dims the image obviously and obtains good local contrast in Fig. 11(g). However, the image color is also dimmed excessively in Fig. 11(g), and the global contrast is sacrificed. For instance, Figs. 12(a) and 12(b) are cropped from Figs. 11(g) and 11(i), respectively. We can see that the contrast in Fig. 12(b) is better.
In Fig. 13(c), RACE promotes image details in the dark regions, whereas the image color is corrupted because the RGB channels are treated separately. The output of Ref. 20 is illustrated in Fig. 13(d), where the sunset area is degenerated because the Naka-Rushton formula always increases input intensities. Figure 13(e) shows the result of Ref. 24, and global details have been improved effectively. However, the luminance of the originally dark areas has been increased excessively, such as at the bottom of the car. The result of Ref. 25 is given in Fig. 13(f).
Figure 13(h) shows the output of Ref. 23, and it shows clear details and good contrast. However, the image color is distorted because Ref. 23 applied the SNRF formula to the RGB channels in parallel.
Among the images in Fig. 14, the outputs of Ref. 35 and the proposed algorithm are the most satisfying. In detail, Fig. 14(g) has the largest CQE value, and Fig. 14(i) has the largest EMEC value. For a detailed comparison, partial regions of Figs. 14(g) and 14(i) are illustrated in Fig. 15. It is shown in Fig. 15(b) that the bottom of the spacecraft has been revealed more clearly by the proposed algorithm than by Ref. 35.
As illustrated in Fig. 17(c), RACE improves image contrast slightly. Consequently, the CQE and EMEC values of Figs. 17(b) and 17(c) are not much larger than those of Fig. 17(a). The results of Refs. 20 and 24 are given in Figs. 17(d) and 17(e), and the contrast in both of these two images needs to be improved. As shown in Fig. 17(f), the method in Ref. 25 enhances contrast effectively.

However, Fig. 17(f) tends to be slightly dark, such as around the eyes and the red scarf.

Figure 17(g) exhibits the output of Ref. 35, and it is rather dark globally. Figure 17(h) shows the result of Ref. 23, and the image color is washed out globally due to the separate treatment of the RGB channels. The output of the proposed approach is given in Fig. 17(i).