Adaptive enhancement for nonuniform illumination images via nonlinear mapping

Yanfang Wang, Qian Huang, Jing Hu
Abstract
Nonuniform illumination images suffer from degenerated details because of underexposure, overexposure, or a combination of both. To improve the visual quality of color images, underexposure regions should be lightened, whereas overexposure areas need to be dimmed properly. However, discriminating between underexposure and overexposure is troublesome. Compared with traditional methods that produce a fixed demarcation value throughout an image, the proposed demarcation changes as local luminance varies and is thus suitable for manipulating complicated illumination. Based on this locally adaptive demarcation, a nonlinear modification is applied to image luminance. Further, with the modified luminance, we propose a nonlinear process to reconstruct a luminance-enhanced color image. For every pixel, this nonlinear process takes the luminance change and the original chromaticity into account, thereby trying to avoid exaggerated colors in dark areas and depressed colors in highly bright regions. Finally, to improve image contrast, a local and image-dependent exponential technique is designed and applied to the RGB channels of the obtained color image. Experimental results demonstrate that our method produces good contrast and vivid color for both nonuniform illumination images and images with normal illumination.

1. Introduction

In real life, the number of light sources is limited and various objects can block out light, so light is usually accompanied by shadows. Therefore, the illumination is not uniform in the field of vision. A scene under nonuniform illumination tends to appear extremely bright in some regions, while other regions succumb to the dark. However, a color image produced by electronic equipment carries not only the inherent reflectance characteristics of the scene but also the light irradiating on it. Consequently, the image of a scene under nonuniform illumination usually suffers from underexposure and overexposure. To improve the degenerated details caused by nonuniform illumination, many image-enhancement algorithms have been published. These algorithms can be roughly categorized as algorithms based on the retinex theory,1 methods that apply nonlinear modification to image luminance (note that, in this paper, "illumination" refers to a triplet image with the same size as the image to be processed, representing the brightness and chrominance characteristics of the light condition of the scene; "luminance" is a gray image that denotes the brightness of the illumination; "intensity" means the pixel value in a single channel of RGB space), and algorithms that are devised in some transformed space.

In the first category, retinex theory was proposed to model the visual perception mechanism of the HVS (human visual system). In general, the basic idea is that the perceived brightness of an object in every RGB channel is determined by the relative brightness between it and its neighbors. Accordingly, there are two critical factors in the implementation of retinex theory: how the relative brightness is computed using spatial comparisons, and how the neighbors are selected and combined. In the literature, many variants of retinex have been published. The pathwise retinex1 was first proposed, where every pixel value is reset based on a set of random piecewise linear paths. To mimic the visual surround of the HVS, a large number of paths are needed around every pixel, thus leading to high computational cost.2 To cope with this, Marini and Rizzi3 replaced the random paths by Brownian paths, and Funt et al.4 implemented the random pathwise retinex on a set of subsampled images. Further, motivated by the two-dimensional (2-D) characteristic of the visual surround, Provenzi et al.5 used a set of 2-D random sprays to replace one-dimensional paths. For a target pixel and a random spray around it, the ratio of the target pixel to the maximum value in the spray is computed. Finally, the target pixel is renewed by the average of these ratios. However, this random-spray retinex algorithm5 (RSR) is prone to induce white noise in large uniform regions. In addition, the enhancing effect of RSR is not obvious when the image contains highly bright pixels. These pathwise retinex1,3,4 and RSR5 algorithms select a set of neighbors and use the local maximum value as the reference white point. The length of the paths and the diameter of the sprays can significantly influence the results, and the optimal settings of these crucial parameters vary among different images. Recently, to reduce the white noise induced by RSR, Gianini et al.6 proposed a quantile-based implementation of retinex, where a weighted version of the histogram is built at every target pixel and a quantile is determined to be the reference white point. However, small quantiles cause the resulting image to be whitish, whereas large quantiles produce inadequate enhancement for dark areas. In addition, Land7 proposed the center/surround retinex, where a target pixel (center) is renewed by its ratio to the weighted average of some neighbors (surround). Later, Jobson et al.8 refreshed the center/surround retinex with a Gaussian filter. The spatial spread scale of the Gaussian filter should be set carefully: a small scale produces good details at the cost of tonal information, and a large scale is not effective for contrast enhancement. To resolve this problem, Jobson et al.9,10 proposed the multiscale retinex (MSR) algorithm by averaging the outputs of three single-scale retinex algorithms. However, because of the isotropic characteristic of the Gaussian kernel, center/surround retinex algorithms tend to induce halo artifacts at high-contrast edges.

In retinex algorithms, spatial comparisons between a target pixel and its neighbors are implemented via division. Different from retinex, in their work on automatic color equalization (ACE), Rizzi et al.11 employed the weighted average of the differences between the target pixel and its neighbors to renew every target pixel. Similar to the threshold mechanism in pathwise retinex,1 the "difference" is transformed nonlinearly into a fixed range. In every RGB channel, these differences can be positive or negative, and thus local contrast is enhanced adaptively. ACE performs well on underexposure and overexposure, but is prone to corrupt large uniform areas and wash out colors. Further, to combine the advantages of ACE and RSR, Provenzi et al.12 proposed a local fusion of the two, called the RACE algorithm, via the 2-D random spray first utilized in RSR.

In the second category, to enhance details for dark and highly-bright regions, image luminance is estimated first and then modified nonlinearly. In general, image luminance can be roughly estimated based on some color space that separates the brightness/lightness from chromatic components, such as HSV and YCbCr spaces. Therefore, algorithms in this category focus on the nonlinear modification technique that is used for luminance modification. Some authors tried to adjust global image luminance via histogram modification.13,14 Although it is effective in handling global dynamic range, histogram-based algorithms ignore the spatial relations of pixels, and are prone to be affected by the spikes in image histograms.

To enhance local contrast, Tao et al.15 proposed to increase the luminance of dark pixels and decrease the luminance of highly bright pixels via an inverse sigmoid function. Note that the demarcation between underexposure and overexposure was not defined precisely in Ref. 15. After dynamic range compression, Tao et al.15 utilized the comparison mechanism in MSR to enhance local contrast in the luminance channel. Finally, the enhanced color image was reconstructed by preserving the hue and saturation information of the original image. However, this color-restoration method is apt to produce excessive colors in originally dark regions. Meylan and Susstrunk16 proposed to implement global luminance adaptation through a power function according to the original average luminance. Then, local contrast is also enhanced based on the comparison mechanism of MSR. However, due to the global luminance-adaptation procedure, originally highly bright areas are prone to be compressed excessively. Based on the assumption that the expected value of the enhanced luminance is half of the maximum value in the full dynamic range, Schettini et al.17 modified the traditional gamma correction via an automatic parameter-tuning technique for the gamma value. To avoid smoothing across edges, Schettini et al.17 utilized the bilateral filter18 instead of the Gaussian filter, thus reducing halo artifacts. Choudhury and Medioni19 utilized the V channel in HSV space as image luminance, and designed a nonlinear transformation based on the logarithmic function. Underexposure and overexposure were divided according to the proportion of pixels that are smaller than 0.5 (the full dynamic range being [0, 1]). Note that this division is fixed for the whole image and thus is not suitable for images that have only a very small area of underexposure or overexposure. In addition, the color-restoration procedure in Ref. 19 is similar to that of Ref. 15 in preserving the original chromaticity, and this is prone to produce exaggerated colors in dark regions.

In addition, Meylan et al.20 proposed to apply two Naka–Rushton21 transformations consecutively for nonlinear adaptation. Since the Naka–Rushton function can increase input values, it performs well on underexposed images. Inspired by Meylan's work,20 Wang and Luo22 intensively explored the adaptation mechanism of the Naka–Rushton formula, designed adaptive parameter settings for it, and obtained effective enhancement for underexposed images. Furthermore, to enhance partially overexposed images, the symmetric version of the Naka–Rushton formula (SNRF) was proposed in Ref. 23 to pull up small intensities and pull down large intensities in the RGB channels.

Recently, using the frequency of local and nonlocal neighboring pixels, Wang et al.24 proposed a bright-pass filter to estimate image luminance. Further, to preserve the lightness order while modifying luminance, they used a histogram specification technique. However, due to the preservation of lightness order, highly bright regions cannot be enhanced adequately. For image luminance estimation, Shin et al.25 implemented Gaussian smoothing on the V channel, and then combined the smoothed luminance with the original one according to gradient information to eliminate smoothing at edges. Thereafter, luminance was adjusted through gamma correction, and global contrast was enhanced through histogram modification. Since local contrast was not boosted after gamma correction, the processed results by Ref. 25 present moderate luminance but deficient contrast.

In the third category, image enhancement was implemented in some transformed space. That is to say, a certain 2-D orthogonal transform is first performed on the input image and then the transform coefficients will be modified accordingly. These types of algorithms include alpha rooting,26,27 logarithmic enhancement,28,29 and heap transform.30 In Ref. 26, Grigoryan et al. proposed a multi-frequency band alpha-rooting method, and combined it with a two-dimensional discrete quaternion Fourier transform for image enhancement. In Ref. 29, Panetta et al. summarized the logarithmic-image-processing framework for image enhancement.

In this work, to enhance images with nonuniform illumination, a local adaptation approach is developed. First, to estimate image luminance, a just-noticeable-difference (JND)-based low-pass filter is devised and applied to the Y channel of YCbCr space. Then, to separate underexposure from overexposure in the luminance image, a locally adaptive demarcation principle is proposed. Different from the globally fixed demarcation value used by Tao et al.15 and Choudhury and Medioni,19 the proposed demarcation changes as local luminance varies, thus adapting to complicated luminance and helping to control the enhancement degree for various regions in an image. According to this adaptive demarcation, local luminance is modified by SNRF to depress highly bright areas and light up dark areas. Next, a color image is reconstructed based on the enhanced luminance and the original chromatic information. With regard to color reconstruction, traditional methods15,19 that preserve the original chromaticity are prone to exaggerate colors in dark areas and depress colors in highly bright areas. To cope with this, the proposed color-reconstruction technique utilizes a power function of the ratio between the enhanced luminance and the original one. Finally, a compensation technique for local contrast is designed to improve the visual quality of the output color image. Experimental results show that the proposed approach produces moderate details and vivid colors.

The rest of the paper is organized as follows: the proposed algorithm is detailed step by step in Sec. 2. Experimental results and comparisons are presented in Sec. 3. Finally, conclusions are given in Sec. 4.

2. Image Enhancement via Nonlinear Mapping

Flowchart of the proposed approach is given in Fig. 1, and a sample image with intermediate results is illustrated as well. Precisely, our method consists of four steps: (1) in Y channel, image luminance is estimated using a JND-based filter, which preserves high-contrast edges while implementing local smoothing; (2) to adjust local luminance adaptively, the estimated luminance is modified by SNRF to pull up underexposure and pull down overexposure; (3) for the sake of natural colors, image color is reconstructed via an exponential function based on the enhanced luminance, original luminance, and original chromatic information; (4) to compensate for the degenerated contrast after dynamic range compression by SNRF in step 2, a local contrast compensation technique is applied to the RGB channels of the produced color image.
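For orientation, the following minimal Python/NumPy skeleton (ours, not from the paper) shows how the four steps compose; it forward-references the helper functions sketched in Secs. 2.1–2.4 below, whose names are our own invention.

```python
import numpy as np

def enhance(rgb):
    """Sketch of the four-step pipeline; rgb is a float array in [0, 1],
    shape (H, W, 3). Helper names are ours, sketched in Secs. 2.1-2.4."""
    # Step 1: coarse luminance (Y of YCbCr, Eq. (1)) smoothed by the JND-based filter
    y = 0.2989 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    y_jnd = jnd_filter(y, pjnd(255.0 * y) / 255.0)       # Sec. 2.1
    # Step 2: pull up underexposure / pull down overexposure with SNRF
    y_sym = modify_luminance(y, y_jnd)                   # Sec. 2.2
    # Step 3: rebuild RGB from the luminance ratio and original chromaticity
    out = reconstruct_color(rgb, y, y_sym)               # Sec. 2.3, Eq. (15)
    # Step 4: symmetric local-contrast compensation per RGB channel
    for c in range(3):                                   # Sec. 2.4, Eq. (18)
        out[..., c] = compensate_contrast_channel(out[..., c])
    return out
```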

Fig. 1

Flowchart of the proposed approach and intermediate results after the corresponding steps.


2.1. Luminance Estimation Using Just-Noticeable-Difference-Based Filter

Accurate estimation of luminance from a color image is complicated and difficult. Based on the assumption that illumination is spatially smooth, Gaussian filters8–10,15 were used for illumination estimation. However, these methods are prone to induce halo artifacts at high-contrast edges because they smooth across edges. To cope with this, many researchers turned to devising locally adaptive filters, such as the bilateral filter,17,22,23 gradient-driven smoothing operators,19 and image-content-dependent smoothing techniques.16,24,25 The common starting point among these adaptive filters is to decrease the smoothing degree at high-contrast edges. In this section, we propose an adaptive smoothing filter based on the JND of pixel values. This filter does not depend on explicit identification of high-contrast edges; instead, it truncates the intensities of neighbors according to the JND value of the center pixel.

This work focuses on the nonuniform illumination problem, under the assumption that the illumination is neutral and uncolored throughout the whole image. In addition, luminance is used here to denote the brightness of the illumination. Among existing color spaces, the Y channel of YCbCr space is used as a coarse luminance. Mathematically, the Y channel can be linearly transformed from RGB space by

Eq. (1)

$$Y(i,j) = 0.2989\,R(i,j) + 0.587\,G(i,j) + 0.114\,B(i,j),$$
where R, G, and B are the intensities of the RGB channels, respectively, and (i,j) represents the pixel location in the 2-D spatial domain of the image.
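As a one-line illustration, a NumPy version of Eq. (1) might read as follows (the function name is ours):

```python
import numpy as np

def y_channel(rgb):
    """Y of YCbCr from an RGB image in [0, 1] -- Eq. (1)."""
    return 0.2989 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
```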

JND refers to the value below which any change cannot be visually perceived by the HVS.31 Lin32 surveyed computational models for JND, including models in the pixel domain. For the luminance component, luminance adaptation and texture masking are the two major factors considered in pixel-wise JND estimation33

Eq. (2)

$$P_{JND}(i,j) = T_L(i,j) + T_t(i,j) - C_{Lt}(i,j)\cdot\min\{T_L(i,j),\,T_t(i,j)\},$$
where TL(i,j) and Tt(i,j) are the thresholds for luminance adaptation and texture masking, respectively; CLt(i,j) represents the overlapping effect, and 0 < CLt(i,j) ≤ 1. In detail, the luminance-adaptation threshold TL(i,j) is formulated as34

Eq. (3)

$$T_L(i,j)=\begin{cases}17\left[1-\sqrt{\dfrac{L(i,j)}{127}}\right]+3, & \text{if } L(i,j)\le 127\\[2mm] \dfrac{3}{128}\left[L(i,j)-127\right]+3, & \text{otherwise,}\end{cases}$$
where L(i,j) denotes the background luminance of the pixel located at (i,j), and can be computed by a local averaging operation within a small (e.g., 3×3) neighborhood. Note that Eq. (3) implies that pixel intensities of a gray image are in the range of [0, 255]. In Eq. (2), apart from TL(i,j), the texture-masking factor Tt(i,j) can be estimated after smooth, edge, and texture regions have been discriminated. For simplification, luminance adaptation can be considered the dominant factor in JND estimation.32 Therefore, the pixel-wise JND is roughly estimated by TL(i,j) in this paper, which means

Eq. (4)

$$P_{JND}(i,j) \approx T_L(i,j).$$
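A minimal sketch of this rough JND estimate, assuming a 3×3 mean for the background luminance and a gray image scaled to [0, 255], could look like this (the function name is ours):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def pjnd(y):
    """Pixel-wise JND approximated by the luminance-adaptation
    threshold, Eqs. (3)-(4); y is a gray image in [0, 255]."""
    bg = uniform_filter(y.astype(np.float64), size=3)  # 3x3 background luminance L(i,j)
    dark = 17.0 * (1.0 - np.sqrt(bg / 127.0)) + 3.0    # branch for L <= 127
    bright = 3.0 / 128.0 * (bg - 127.0) + 3.0          # branch for L > 127
    return np.where(bg <= 127.0, dark, bright)
```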

To preserve high-contrast edges while smoothing an image via a weighted average of neighbors, an intuitive idea is that a neighbor with a large numerical difference from the center pixel should be treated carefully. For ease of presentation, we call neighbors that need to be treated carefully odd neighbors. Motivated by this intuitive idea, the bilateral filter18 decreases the weights for odd neighbors by multiplying the Gaussian kernel in the spatial domain with a Gaussian kernel in the range domain. In this work, we devise an alternative method that manipulates odd neighbors to have values similar to the center pixel. Mathematically, the values of neighbors are truncated according to the corresponding JND value of the center pixel. With regard to a center pixel, its neighbors can be divided into three categories, and their values are truncated through

Eq. (5)

$$Q(B,A)=\begin{cases}A-P_{JND}(A), & \text{if } (B-A)<-P_{JND}(A)\\ B, & \text{if } |B-A|\le P_{JND}(A)\\ A+P_{JND}(A), & \text{if } (B-A)>P_{JND}(A),\end{cases}$$
where A denotes the luminance value of a center pixel, and B represents the luminance value of a neighbor around pixel A. The notation Q(B,A) is the truncated output for a pixel with the value B when it acts as a neighbor of pixel A. In addition, PJND(A) is the corresponding JND value of pixel A, and is roughly computed using Eq. (4).
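Note that the three cases of Eq. (5) collapse into a single clipping operation, so a vectorized sketch is a one-liner (the function name is ours):

```python
import numpy as np

def truncate(b, a, p_jnd):
    """JND truncation of neighbor values b toward center value a -- Eq. (5).
    Neighbors within +/- p_jnd of the center pass through unchanged;
    all others are clipped to a +/- p_jnd."""
    return np.clip(b, a - p_jnd, a + p_jnd)
```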

Based on the truncating mechanism in Eq. (5), image luminance can be estimated as the local weighted average of the JND-truncated neighbors. Consequently, this averaging can be implemented by assigning spatial Gaussian weights to the JND-truncated neighbors, and the JND-based filter is defined as

Eq. (6)

$$Y_{JND}(i,j)=\frac{1}{w(i,j)}\sum_{(s,t)\in\Omega(i,j)}G[(i,j),(s,t)]\cdot Q[Y(s,t),\,Y(i,j)],$$
where (i,j) is the location of a center pixel, and (s,t) denotes an arbitrary pixel in the neighborhood. The output YJND(i,j) is the adaptively smoothed luminance. The notation w(i,j) is the normalization factor, and Ω(i,j) represents a specified neighborhood of pixel (i,j). The Gaussian weight G[(i,j),(s,t)] is defined by

Eq. (7)

$$G[(i,j),(s,t)]=\exp\left\{-\frac{(i-s)^{2}+(j-t)^{2}}{2\sigma_{s}^{2}}\right\},$$
where σs denotes the spatial spread of the Gaussian kernel. According to Eqs. (6) and (7), the estimated luminance of pixel (i,j) equals the Gaussian-weighted average of its JND-truncated neighbors. Note that the JND-truncated neighbors have values similar to the center pixel, owing to Eq. (5). Therefore, the proposed JND-based filter can preserve edges while smoothing an image. Figure 2 exhibits the luminance estimated by the Gaussian filter, the bilateral filter, and the proposed JND-based filter, respectively. As shown in Figs. 2(c)–2(f), both the bilateral filter and the JND-based filter preserve high-contrast edges effectively. Figures 2(e) and 2(f) are results of the JND-based filter with different spatial spreads, and these two images show little difference. Apart from the spatial spread, the smoothing degree of the bilateral filter is also determined by the range spread. Figures 2(c) and 2(d) are results of the bilateral filter with different range spreads. In Fig. 2(c), the range spread is larger than that of Fig. 2(d), resulting in a smoother image.
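A direct (unoptimized) sketch of the JND-based filter of Eqs. (5)–(7) follows; the window radius and the reflective border handling are our assumptions:

```python
import numpy as np

def jnd_filter(y, p_jnd, sigma_s=4.0, radius=8):
    """Direct sketch of the JND-based filter, Eqs. (6)-(7).

    y:      gray luminance image (float)
    p_jnd:  pixel-wise JND map of the same shape, in the same units as y
    """
    h, w = y.shape
    out = np.empty_like(y, dtype=np.float64)
    # Spatial Gaussian weights on a (2*radius+1)^2 window -- Eq. (7)
    ax = np.arange(-radius, radius + 1)
    gx, gy = np.meshgrid(ax, ax)
    g = np.exp(-(gx ** 2 + gy ** 2) / (2.0 * sigma_s ** 2))
    pad = np.pad(y, radius, mode="reflect")
    for i in range(h):
        for j in range(w):
            window = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Truncate neighbors toward the center value -- Eq. (5)
            q = np.clip(window, y[i, j] - p_jnd[i, j], y[i, j] + p_jnd[i, j])
            out[i, j] = np.sum(g * q) / np.sum(g)
    return out
```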

Fig. 2

Comparison between several smoothing filters. (a) original image, (b) result by Gaussian filter with the spatial spread of 4, (c) output of bilateral filter where the spatial and range spreads are 4 and 25 (suppose that pixel value is in the range of [0, 255]), (d) output of bilateral filter where the spatial and range spreads are 4 and 12, (e) result of JND-based filter with the spatial spread of 4, and (f) result of JND-based filter with the spatial spread of 8.


2.2. Adaptive Modification to Luminance

Within a luminance image, underexposure can be enhanced by lighting up dark regions, and overexposure can be solved by dimming pixels that are extremely bright. Inspired by the above idea, we adopt the symmetric Naka–Rushton formula (SNRF) proposed in our previous publication23 to implement luminance modification. Precisely, SNRF is derived from the original Naka–Rushton equation21 through a symmetric transformation. The original Naka–Rushton equation was defined as

Eq. (8)

$$V=\frac{L}{L+H},$$
where L and V denote the input and output signals, respectively. The parameter H controls the degree of adaptation. For an image, the normalized Naka–Rushton equation can be expressed as

Eq. (9)

$$L_N(i,j)=\frac{L(i,j)}{L(i,j)+H}\cdot\frac{L_{Max}+H}{L_{Max}},$$
where (i,j) denotes the pixel location, and LMax is the maximum intensity in the image L. Note that for a standard 24-bit color image, pixel intensity is in the range of [0, 255]. When using Eq. (9), pixel intensity needs to be rescaled into the range of [0, 1] by the division L/255. Applied to an input in the range of [0, 1], the normalized Naka–Rushton formula has an upward-convex curve, which implies that the formula can be utilized to light up underexposed regions or pixels with small intensities. An illustrative example is shown in Fig. 3 (see the two curves with the legend "Naka–Rushton"). In contrast, the formula that is symmetric with the Naka–Rushton formula about the point (0.5, 0.5) can be used to dim large intensities. Further, SNRF is formulated by joining these two formulas at a point that serves as the demarcation between underexposure and overexposure. Applied to a luminance image, SNRF is formulated as

Eq. (10)

$$Y_{sym}=\begin{cases}\dfrac{Y}{Y+H_{low}}\,(T+H_{low}), & 0<Y\le T\\[2mm] 1-\dfrac{1-Y}{(1-Y)+H_{high}}\,[(1-T)+H_{high}], & T<Y\le 1,\end{cases}$$
where Y is the original luminance image and Ysym is the luminance modified by SNRF. The parameter T represents the pixel-wise demarcation between underexposure and overexposure. The other two parameters, Hlow and Hhigh, control the degree of adaptation produced by SNRF. To save space, the common location index (i,j) is omitted from Y, Ysym, T, Hlow, and Hhigh in Eq. (10). A pixel Y(i,j) that is smaller than the corresponding demarcation T(i,j) is treated as underexposed; otherwise, it is deemed overexposed.
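A vectorized sketch of Eq. (10), as reconstructed above, might be (the function name is ours):

```python
import numpy as np

def snrf(y, t, h_low, h_high):
    """Symmetric Naka-Rushton formula, Eq. (10); y in (0, 1], and
    t, h_low, h_high are pixel-wise arrays (or scalars)."""
    low = y / (y + h_low) * (t + h_low)            # underexposure branch, Y <= T
    high = 1.0 - (1.0 - y) / ((1.0 - y) + h_high) * ((1.0 - t) + h_high)  # Y > T
    return np.where(y <= t, low, high)
```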

Fig. 3

Curves with different parameters. The curves with the legend of “Naka–Rushton” are those for Eq. (9). The curves with the legend of “SNRF” are those for Eq. (10).


Figure 3 also illustrates several curves of SNRF with different parameters. For concise expression, it is assumed that Hlow = Hhigh = H in Fig. 3. Comparing the two curves drawn with plus signs and dots, we can see that a larger value of T leads to a larger output globally. In addition, with a fixed T = 0.6, the curve of SNRF approaches the line "output = input" as H increases. In other words, the output Ysym increases as H decreases when Y < T, and the opposite occurs when Y > T.

To manipulate complicated luminance adaptively, the demarcation T between underexposure and overexposure is set pixel-wise. For SNRF, a larger demarcation T leads to a larger output. Therefore, T is set to vary inversely with local luminance so that more increment is assigned to darker regions. In addition, T also varies inversely with global luminance so as to light up globally dark images. Mathematically, the demarcation T is devised as a transformed version of the sigmoid function

Eq. (11)

$$T(i,j)=\frac{1-Y_{median}}{1+\exp\{10\,[Y_{JND}(i,j)-0.7]\}},$$
where Ymedian is the median intensity value of the input luminance image Y, and is used to represent global luminance. The notation YJND is the luminance estimated by the JND-based filter in Sec. 2.1, and is utilized to measure local luminance. Curves of Eq. (11) with different values of Ymedian are illustrated in Fig. 4, where the input is YJND(i,j). In addition, Fig. 5 exhibits a color image and its T image. Originally darker areas correspond to larger T values, which compels dark regions to be categorized as underexposed. On the contrary, originally brighter regions correspond to smaller T values.
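A sketch of Eq. (11), assuming y and y_jnd are scaled to [0, 1] (the function name is ours):

```python
import numpy as np

def demarcation(y, y_jnd):
    """Pixel-wise demarcation T of Eq. (11)."""
    y_median = np.median(y)  # global luminance level
    return (1.0 - y_median) / (1.0 + np.exp(10.0 * (y_jnd - 0.7)))
```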

Fig. 4

Curves of Eq. (11) with different Ymedian.


Fig. 5

A color image and its T image.


In the SNRF defined by Eq. (10), Hlow appears in the first case to control the adaptation degree for underexposure. Moreover, Fig. 3 shows that the output of SNRF varies inversely with Hlow. To revive underexposed regions, larger increments need to be assigned to locally darker pixels. In addition, severe underexposure also needs obvious promotion. Based on the above analysis, Hlow is formulated as

Eq. (12)

$$H_{low}(i,j)=Y_{JND}(i,j)+0.5\,Y_{m\_low},$$
where YJND represents local luminance. The notation Ym_low denotes the mean value of the pixels that are categorized as underexposed by the pixel-wise demarcation T(i,j).

Different from Hlow, Hhigh serves as the adaptation factor in the symmetric version of the Naka–Rushton formula, i.e., the second case in Eq. (10), for enhancing overexposed regions. Figure 3 shows that the output of SNRF changes directly with Hhigh. Consequently, Hhigh is set as

Eq. (13)

$$H_{high}(i,j)=2\,Y_{JND}(i,j)\cdot[1-Y_{m\_high}],$$
where Ym_high represents the mean value of the pixels that are categorized as overexposed by the pixel-wise demarcation T(i,j). The value of Ym_high becomes larger if the overexposed regions are brighter, and then an obvious decrement is assigned to overexposed regions owing to the smaller Hhigh values.

Substituting the parameter settings of Eqs. (11)–(13) into the SNRF of Eq. (10) yields an adaptive luminance-modification technique for enhancing images that suffer from exposure problems. Further, to improve the global contrast of the output of SNRF, the histogram of Ysym is linearly stretched into the range of [0, 1].
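Putting Eqs. (10)–(13) and the final stretch together, a sketch of the whole luminance-modification step might read as follows; it reuses the demarcation and snrf sketches above, and the guards for empty masks and zero ranges are our additions:

```python
import numpy as np

def modify_luminance(y, y_jnd):
    """Adaptive luminance modification of Sec. 2.2; y, y_jnd in [0, 1]."""
    t = demarcation(y, y_jnd)                          # Eq. (11)
    under = y <= t                                     # demarcation rule of Eq. (10)
    y_m_low = y[under].mean() if under.any() else 0.5  # mean of underexposed pixels
    y_m_high = y[~under].mean() if (~under).any() else 0.5
    h_low = y_jnd + 0.5 * y_m_low                      # Eq. (12), as reconstructed
    h_high = 2.0 * y_jnd * (1.0 - y_m_high)            # Eq. (13)
    y_sym = snrf(y, t, h_low, h_high)                  # Eq. (10)
    # Linear histogram stretch into [0, 1] for global contrast
    return (y_sym - y_sym.min()) / (y_sym.max() - y_sym.min() + 1e-12)
```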

2.3. Color Image Reconstruction

With the modified luminance image, a color image is constructed in this section based on the chromatic information of the original color image. In many works15,19,24,25 that modify the luminance channel, the final color image is restored by strictly preserving the original chromaticity through the following mechanism:

Eq. (14)

$$\begin{cases}R'(i,j)=R(i,j)\,\dfrac{Y'(i,j)}{Y(i,j)}\\[2mm] G'(i,j)=G(i,j)\,\dfrac{Y'(i,j)}{Y(i,j)}\\[2mm] B'(i,j)=B(i,j)\,\dfrac{Y'(i,j)}{Y(i,j)},\end{cases}$$
where Y′ denotes the modified luminance and Y is the original luminance. The notations R, G, and B represent the RGB channels of the original color image, and R′, G′, and B′ are the reconstructed RGB channels. However, the color-reconstruction process in Eq. (14) is prone to exaggerate the color of underexposed regions and depress the color of overexposed regions. Consider an underexposed pixel (i,j) in the original color image; its luminance has been increased, so Y′(i,j) > Y(i,j). Mathematically, the differences among the RGB triplet [R(i,j), G(i,j), B(i,j)] are enlarged by Eq. (14). Thus, compared with the original color image, the chroma of (i,j) is magnified and the color appearance is exaggerated. Figures 6(a) and 6(b) illustrate an input image and the color image reconstructed via Eq. (14). Without loss of generality, the modified luminance Y′ used in Fig. 6 is produced by a general gamma-correction method rather than SNRF. The gamma value is set to 1 + [Y(i,j) − 0.7], so pixels smaller than 0.7 are promoted and the others are dimmed. In Fig. 6(b), the blue car and green trees in the originally dark regions suffer from color distortion.

Fig. 6

(a) original image, (b) color-reconstruction result by Eq. (14), (c) color-reconstruction output by the method in Ref. 17, and (d) color-reconstruction result by the proposed technique in Eq. (15).

JEI_26_5_053012_f006.png

In addition to Eq. (14), Ref. 17 utilized a linear combination of the ratio and the difference between Y′ and Y for color reconstruction. The corresponding color-reconstruction output is given in Fig. 6(c). It can be seen that the excessive color of the originally dark regions is alleviated, but global contrast becomes worse.

Inspired by the color-restoration mechanism in Eq. (14), to alleviate the magnification of differences among RGB triplets in underexposed regions and maintain the RGB differences in overexposed regions, we propose to reconstruct the color image by

Eq. (15)

$$\begin{cases}R'(i,j)=R(i,j)\left[\dfrac{Y'(i,j)}{Y(i,j)}\right]^{1-\sqrt{R(i,j)}}\\[2mm] G'(i,j)=G(i,j)\left[\dfrac{Y'(i,j)}{Y(i,j)}\right]^{1-\sqrt{G(i,j)}}\\[2mm] B'(i,j)=B(i,j)\left[\dfrac{Y'(i,j)}{Y(i,j)}\right]^{1-\sqrt{B(i,j)}},\end{cases}$$
where intensities are in the range of [0, 1]. Compared with Eq. (14), the coefficients for the RGB components are revised by an adaptive exponent in Eq. (15). For underexposed regions, we have Y′ > Y, and the corresponding coefficients in Eq. (15) vary inversely with the R, G, and B values. In this case, for a pixel with the triplet (r,g,b), if r > g > b, we obtain the inequality $1<(Y'/Y)^{1-\sqrt{r}}<(Y'/Y)^{1-\sqrt{g}}<(Y'/Y)^{1-\sqrt{b}}$, and thus the output color is not excessively exaggerated. On the contrary, for overexposed regions, we have Y′ < Y, and the corresponding coefficients in Eq. (15) vary in the same direction as the R, G, and B values. Therefore, the reconstructed color of overexposed regions is revived. The result of the color-reconstruction method in Eq. (15) is illustrated in Fig. 6(d), where the color and global contrast are visually appropriate.
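A sketch of Eq. (15) in NumPy follows; the function name and the small epsilon guard against division by zero are ours:

```python
import numpy as np

def reconstruct_color(rgb, y, y_sym, eps=1e-6):
    """Color reconstruction of Eq. (15); rgb, y, y_sym in [0, 1]."""
    ratio = (y_sym + eps) / (y + eps)  # pixel-wise luminance change Y'/Y
    out = np.empty_like(rgb)
    for c in range(3):
        # Exponent 1 - sqrt(channel) damps the ratio for large intensities
        out[..., c] = rgb[..., c] * ratio ** (1.0 - np.sqrt(rgb[..., c]))
    return np.clip(out, 0.0, 1.0)
```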

2.4. Local Contrast Compensation

During the luminance-modification process in Sec. 2.2, SNRF lights up small intensities and pulls down large intensities in the luminance image; therefore, local contrast is prone to be degenerated. In this section, local contrast is improved to produce vivid images. The basic idea of contrast enhancement is that a pixel should be increased if it is larger than its neighbors, and decreased if it is smaller than its neighbors. Based on this idea, pixel-wise differences and ratios between the center pixel and its neighbors were employed by ACE11 and retinex algorithms,12 respectively. In addition, Tao et al.15 utilized the comparison mechanism of the center/surround retinex8–10 to improve local contrast via an exponential function

Eq. (16)

$$O(i,j)=[I(i,j)]^{E(i,j)},$$
where I(i,j) is a gray image, e.g., the luminance image or a channel of RGB space, with the implicit pixel intensity in the range of [0, 1]. The exponent factor E(i,j) is defined as

Eq. (17)

$$E(i,j)=\left[\frac{F*I(i,j)}{I(i,j)}\right]^{p},$$
where the notation F represents a low-pass filter, and the operator * denotes convolution. With regard to the filter F, a Gaussian filter was used in Ref. 15, and the bilateral filter18 was used later in Ref. 23 to eliminate halo artifacts. The parameter p controls the enhancing degree and is set within the range of [0.5, 2] according to the global standard deviation. Mathematically, if a center pixel I(i,j) is smaller than the weighted average of its neighbors, i.e., I(i,j) < F*I(i,j), then E(i,j) is larger than 1 and the input intensity is decreased. Otherwise, E(i,j) is smaller than 1, and thus the input intensity is increased to some extent.
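For reference, a sketch of the compensation of Eqs. (16)–(17) with a Gaussian surround, as in Ref. 15, might be (the function name and parameter values are illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tao_contrast(i_img, p=1.0, sigma=10.0, eps=1e-6):
    """Exponential local-contrast step of Eqs. (16)-(17); i_img in (0, 1]."""
    surround = gaussian_filter(i_img, sigma)     # F * I with a Gaussian F
    e = ((surround + eps) / (i_img + eps)) ** p  # Eq. (17)
    return i_img ** e                            # Eq. (16)
```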

However, when Eq. (16) is used for contrast enhancement, dim areas are prone to be exaggerated and bright areas cannot be enhanced sufficiently. Figure 7(c) shows the output of implementing Eq. (16) on Fig. 7(b), which is the processed result after Sec. 2.3. In Fig. 7(c), details of the vines on the wall are excessively enhanced and almost grayed out, especially in the lower-right region of the image. On the other hand, the contrast between the sky and clouds is improved only slightly compared with Fig. 7(b). To solve these problems, we modify Eq. (16) with its symmetric counterpart and obtain

Eq. (18)

$$O(i,j)=\begin{cases}[I(i,j)]^{\left[\frac{BF*I(i,j)}{I(i,j)}\right]^{2}}, & \text{if } I(i,j)\le BF*I(i,j)\\[2mm] 1-[1-I(i,j)]^{\left[\frac{1-BF*I(i,j)}{1-I(i,j)}\right]^{2}}, & \text{otherwise,}\end{cases}$$
where BF denotes the bilateral filter, and BF*I(i,j) is used to approximate the intensity level of the neighbors. In Eq. (18), if a center pixel is smaller than its neighbors, it is decreased by the same technique as in Eq. (16). Otherwise, the center pixel is pulled up by the symmetric version of Eq. (16). Figure 7(d) shows the output of implementing Eq. (18) on Fig. 7(b). Compared with Fig. 7(c), details of the vines are revived without extreme exaggeration. Moreover, the contrast of the originally bright areas, e.g., the sky and clouds, is also improved properly.
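A per-channel sketch of Eq. (18) follows, using OpenCV's bilateral filter for BF; the filter parameters and the epsilon guards against division by zero are our choices, not from the paper:

```python
import numpy as np
import cv2

def compensate_contrast_channel(i_img, d=9, sigma_color=0.1, sigma_space=4.0, eps=1e-6):
    """Symmetric local-contrast compensation of Eq. (18) on one channel;
    i_img is a float image in [0, 1]."""
    bf = cv2.bilateralFilter(i_img.astype(np.float32), d, sigma_color, sigma_space)
    i = np.clip(i_img, eps, 1.0 - eps)                         # avoid 0^x and /0
    low = i ** ((bf / i) ** 2)                                 # center darker than neighbors
    high = 1.0 - (1.0 - i) ** (((1.0 - bf) / (1.0 - i)) ** 2)  # center brighter
    return np.where(i_img <= bf, low, high)
```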

Fig. 7

Comparative example for local contrast compensation. (a) Original color image, (b) intermediate result after Secs. 2.2 and 2.3, (c) result by applying Eq. (16) to the RGB channels of image (b), and (d) result by applying Eq. (18) to the RGB channels of image (b).


3. Experimental Results

The proposed approach has been tested on images that suffer from underexposure, overexposure, or both. Comparisons have been made with some classic algorithms, including the multiscale retinex algorithm MSRCR,10 RACE,12 which is a locally fused version of the RSR5 and ACE11 algorithms, and an algorithm20 that uses the Naka–Rushton formula for tone mapping. Moreover, the proposed approach has also been compared with several recently proposed algorithms: algorithms that emphasize naturalness preservation,24,25,35 the original SNRF algorithm,23 and the algorithm using alpha rooting.26 The performance of image-enhancement algorithms can be evaluated via objective measures26,36,37 that take colorfulness, contrast, or sharpness into account. In this paper, the color-image-enhancement measure (EMEC)26 and the color-quality-enhancement measure (CQE)37 are used to facilitate comparisons between different enhancement results.

3.1. Enhancing Results for Underexposure Images

Figure 8(a) was provided by Gehler et al.,39 and it shows a scene where the foreground is underexposed and the background is exposed properly. In Fig. 8, the CQE and EMEC values are listed below every image. As shown in Fig. 8(b), MSRCR brings good contrast, thus having large CQE and EMEC values. However, it suffers from halo artifacts, such as at the edge between the trunk and the background meadowland. In Fig. 8(c), the output color of RACE has been grayed out slightly because the RGB channels were processed separately. In Fig. 8(d), Meylan's algorithm20 sacrifices details in the bright background because global luminance has been increased via the Naka–Rushton formula. Fortunately, because details in the originally dark regions have been revived, Fig. 8(d) has relatively large CQE and EMEC values. Figure 8(e) gives the output of Ref. 24, where the originally dark foreground has been revealed clearly. However, observing the Macbeth color-checker board in the lower right corner, the color blocks are corrupted at their edges. Due to the corruption of such fine edges, Fig. 8(e) does not possess large CQE and EMEC values. Figure 8(f) shows the result of Ref. 25. However, details of the trunk are still buried in underexposure, and the contrast of the background is degenerated. This is mainly because the original luminance order is strictly preserved. As illustrated in Fig. 8(g), the algorithm in Ref. 35 produces a natural background but a dark foreground, because the saliency map of the input image affects the output seriously. Note that Fig. 8(g) shows good global contrast, thus having large CQE and EMEC values. As the output of Ref. 23, Fig. 8(h) shows clear details. However, similar to Fig. 8(b), color shift occurs at the stones and soil at the bottom of the image. Figure 8(i) exhibits the result of the proposed algorithm, where details of the foreground are revived clearly and the background is protected from being washed out. Although the CQE and EMEC values of Fig. 8(i) are slightly smaller than those of Fig. 8(g), the details in Fig. 8(i), such as the tree trunks, are better than those in Fig. 8(g).

Fig. 8

Experimental results on an underexposure image. (a) Original image, and others are the results of (b) MSRCR,10 which was implemented in the software "PhotoFlair" by NASA and TruView Imaging Co.,38 (c) RACE,12 (d) Ref. 20, (e) Ref. 24, (f) Ref. 25, (g) Ref. 35, (h) Ref. 23, and (i) the proposed approach.


For further comparison, Fig. 9 shows zoomed-in partial regions cropped from Fig. 8, and the CQE and EMEC values of these partial regions are also listed below the images in Fig. 9. Comparing Fig. 9(g) with Fig. 9(i), their CQE and EMEC values are comparable, but image details are clearer in Fig. 9(i).

Fig. 9

Partial regions cropped from the corresponding images in Fig. 8.


3.2. Enhancing Results for Overexposure Images

Figure 10(a) shows an image where the color is slightly washed out and contrast is depressed due to overexposure. As demonstrated in Figs. 10(b) and 10(c), the MSRCR and RACE algorithms enhance image contrast slightly, but the global luminance is still quite bright. Correspondingly, the CQE and EMEC values of Figs. 10(b) and 10(c) are slightly larger than those of Fig. 10(a). The output of Ref. 20 is shown in Fig. 10(d); it remains rather overexposed because the Naka–Rushton formula has been used to pull up pixel intensities globally. Figure 10(e) gives the output of Ref. 24, where overexposure has not been solved effectively, which is also reflected by the CQE and EMEC values. Shin et al.25 applied a histogram-equalization procedure to the modified probability density function of an image after gamma correction, thus obtaining proper luminance in Fig. 10(f). However, they used the method in Eq. (14) for color reconstruction, and therefore image color is depressed. Consequently, the CQE and EMEC values of Fig. 10(f) remain close to those of Fig. 10(a). As shown in Fig. 10(g), the algorithm in Ref. 35 modifies luminance effectively and produces good contrast, but image color is dim for the same reason as in Fig. 10(f). Reference 23 applied SNRF separately to the RGB channels, and the result is given in Fig. 10(h). It can be seen that global luminance has been modified properly, but image contrast still needs improvement. Finally, the output of the proposed algorithm is given in Fig. 10(i), which shows proper luminance and promising contrast. Compared with Figs. 10(b)–10(e), the luminance in Fig. 10(i) is more suitable. Moreover, compared with Figs. 10(f) and 10(g), which also have proper luminance, Fig. 10(i) shows more vivid colors. Therefore, the CQE and EMEC values of Fig. 10(i) are larger than those of the other images in Fig. 10.

Fig. 10

Experimental results on an overexposure image. (a) Original color image, and others are the results of (b) MSRCR,10,38 (c) RACE,12 (d) Ref. 20, (e) Ref. 24, (f) Ref. 25, (g) Ref. 35, (h) Ref. 23, and (i) the proposed approach.


Figure 11(a) demonstrates an image where contrast is degenerated due to overexposure. As shown in Figs. 11(b) and 11(c), local contrast is improved only slightly by MSRCR and RACE. The results of Refs. 20 and 24 are illustrated in Figs. 11(d) and 11(e), where the enhancing effects are not visually obvious. Consequently, the CQE and EMEC values of Figs. 11(b)–11(e) are not much larger than those of Fig. 11(a). Compared with Figs. 11(d) and 11(e), the luminance of the last four images in Fig. 11 is proper. However, the colors of Figs. 11(f) and 11(g) are depressed by their color-reconstruction mechanism and tend to be dim. In Fig. 11(h), local contrast still needs to be improved, such as the contrast between the sky and clouds. Figure 11(i) shows the result of the proposed approach, with clear details and good contrast and without color distortion. The CQE values of Figs. 11(g) and 11(i) are comparable, and Fig. 11(g) has a larger EMEC value than Fig. 11(i). This is because Ref. 35 dims the image obviously and obtains good local contrast in Fig. 11(g). However, image color is also dimmed excessively in Fig. 11(g), and global contrast is sacrificed. For instance, Figs. 12(a) and 12(b) are cropped from Figs. 11(g) and 11(i), respectively; we can see that the contrast in Fig. 12(b) is better.

Fig. 11

Experimental results on an overexposure image. (a) Original color image, and others are the results of (b) MSRCR,10,38 (c) RACE,12 (d) Ref. 20, (e) Ref. 24, (f) Ref. 25, (g) Ref. 35, (h) Ref. 23, and (i) the proposed approach.


Fig. 12

Partial regions, which are cropped from Figs. 11(g) and 11(i), respectively.


3.3. Enhancing Results for Images with Both Underexposure and Overexposure

Figure 13(a) shows an image of "a car in the sunset," used courtesy of Greenspun.40 The car is underexposed and the sunset area is overexposed. The output of MSRCR is given in Fig. 13(b), where details of the car are revealed clearly. However, the front of the car suffers from halo artifacts. As shown in Fig. 13(c), RACE promotes image details in the dark regions, whereas image color is corrupted because the RGB channels are treated separately. The output of Ref. 20 is illustrated in Fig. 13(d), where the sunset area is degenerated because the Naka–Rushton formula always increases input intensities. Figure 13(e) shows the result of Ref. 24, where global details have been improved effectively. However, the luminance of originally dark areas, such as the bottom of the car, has been increased excessively. The result of Ref. 25 is given in Fig. 13(f), where image luminance has been adapted successfully. However, image contrast still needs to be enhanced. For example, the contrast between the sky and clouds is rather inferior to that of the original image in Fig. 13(a). In Fig. 13(g), image luminance is moderate, but the originally dark regions under the car are exaggerated. In addition, the global image color is distorted and tends slightly to red, such as in the sky. Figure 13(h) shows the output of Ref. 23, with clear details and good contrast. However, image color is distorted because Ref. 23 applied the SNRF formula to the RGB channels in parallel. For instance, the originally red car turns slightly blue. The output of the proposed approach is illustrated in Fig. 13(i). Compared with the previous results in Fig. 13, Fig. 13(i) shows clear details and good contrast without halo artifacts or color distortion. Moreover, the CQE value of Fig. 13(i) is larger than those of the other images in Fig. 13. In addition, Fig. 13(i) has the second-largest EMEC value among the images in Fig. 13, a little smaller than that of Fig. 13(d). Note, however, that the clouds around the sunset area are largely washed out in Fig. 13(d).

Fig. 13

Experimental results on an image with both underexposure and overexposure. (a) original color image, and others are the results of (b) MSRCR,10,38 (c) RACE,12 (d) Ref. 20, (e) Ref. 24, (f) Ref. 25, (g) Ref. 35, (h) Ref. 23, and (i) the proposed approach.


Figure 14(a) shows a scene that contains a spacecraft. The bottom of the spacecraft is dark, and details of other areas are slightly washed out due to overexposure. The output of MSRCR is given in Fig. 14(b), and it shows good global contrast. However, the originally dark regions suffer from halo artifacts. For instance, the edges of the originally dark bottom of the spacecraft remain dark while the other regions of the bottom are revealed. Figure 14(h) shows the result of an alpha-rooting algorithm,26 which has a relatively large CQE value. Among the images in Fig. 14, the outputs of Ref. 35 and the proposed algorithm are the most satisfying. In detail, Fig. 14(g) has the largest CQE value, and Fig. 14(i) has the largest EMEC value. For detailed comparison, partial regions of Figs. 14(g) and 14(i) are illustrated in Fig. 15. Figure 15(b) shows that the bottom of the spacecraft is revealed more clearly by the proposed algorithm than by Ref. 35.

Fig. 14

Experimental results on an image with both underexposure and overexposure. (a) original color image, and others are the results of (b) MSRCR,10,38 (c) RACE,12 (d) Ref. 20, (e) Ref. 24, (f) Ref. 25, (g) Ref. 35, (h) Ref. 26, which uses the alpha-rooting method, and (i) the proposed approach.


Fig. 15

Partial regions that are cropped from Figs. 14(g) and 14(i), respectively.


3.4. Contrast Enhancement for Images with Normal Exposure

The performances of the proposed approach and several other algorithms on underexposed or overexposed images have been compared in the previous sections. In practice, however, an image-processing algorithm does not know in advance which exposure problem an input image has; consequently, even an input image with proper luminance will be modified. Therefore, in this section, images with proper luminance are utilized as inputs to show the performances of the different image-enhancement algorithms.

A colorful scenery taken under natural light is shown in Fig. 16(a), where the image luminance is proper and the local contrast needs to be enhanced. As shown in Figs. 16(b) and 16(c), MSRCR and RACE improve local contrast slightly and obtain natural colors. Correspondingly, the EMEC values of Figs. 16(b) and 16(c) are larger than that of Fig. 16(a). The results of Refs. 20 and 24 are shown in Figs. 16(d) and 16(e), respectively. However, the local and global contrast of Figs. 16(d) and 16(e) is even inferior to that of the original image in Fig. 16(a); therefore, the CQE and EMEC values of Figs. 16(d) and 16(e) are small. The algorithm in Ref. 25 improves image contrast effectively, and the output is given in Fig. 16(f). However, the image color in Fig. 16(f) is slightly dim, such as the distant forest in the top-left part. The result of Ref. 35 is given in Fig. 16(g), which has contrast comparable to that of Fig. 16(c). The output of Ref. 23 is shown in Fig. 16(h), where the image contrast is unsatisfactory and the image color turns slightly gray. This is because Ref. 23 implements SNRF on the RGB channels separately, and the dynamic range is compressed severely by SNRF for images with normal luminance. Figure 16(i) gives the result of the proposed algorithm. Visually, it shows better contrast and more vivid colors than Figs. 16(b)–16(h). Moreover, the CQE and EMEC values of Fig. 16(i) are much larger than those of the other images in Fig. 16, which confirms the effectiveness of the proposed approach.

Fig. 16

Experimental results on an image with normal exposure. (a) Original color image, and others are the results of (b) MSRCR,10,38 (c) RACE,12 (d) Ref. 20, (e) Ref. 24, (f) Ref. 25, (g) Ref. 35, (h) Ref. 23, and (i) the proposed approach.


Figure 17(a) shows a blond white woman wearing a red hat and scarf, and the dominant color of the image is red. The result of MSRCR is given in Fig. 17(b), where color distortion occurs. For instance, the hair and face, which are surrounded by the red hat and scarf, are slightly green. As illustrated in Fig. 17(c), RACE improves image contrast slightly. Consequently, the CQE and EMEC values of Figs. 17(b) and 17(c) are not much larger than those of Fig. 17(a). The results of Refs. 20 and 24 are given in Figs. 17(d) and 17(e), and the contrast of both images needs to be improved. As shown in Fig. 17(f), the method in Ref. 25 enhances contrast effectively. However, Fig. 17(f) tends to be slightly dark, such as at the eyes and the red scarf. Figure 17(g) exhibits the output of Ref. 35, and it is rather dark globally. Figure 17(h) shows the result of Ref. 23, where the image color is washed out globally owing to the separate treatment of the RGB channels. The output of the proposed approach is given in Fig. 17(i). According to the CQE and EMEC measures, the proposed algorithm produces a better result than the other methods involved in Fig. 17. In detail, compared with the results in Figs. 17(b)–17(f), the proposed output shows better contrast and color. For example, the face in Fig. 17(i) is much clearer, and the textures on the hat and scarf are enhanced more effectively.

Fig. 17

Experimental results on an image with normal exposure. (a) Original color image, and others are the results of (b) MSRCR,10,38 (c) RACE,12 (d) Ref. 20, (e) Ref. 24, (f) Ref. 25, (g) Ref. 35, (h) Ref. 23, and (i) the proposed approach.


4. Conclusion

In this work, we presented a local enhancement approach for nonuniform illumination images whose details are corrupted by underexposure or overexposure. The proposed approach modifies the luminance component of an image to light up underexposure and darken overexposure. First, to estimate the luminance component, pixel-wise JND values are integrated with the Gaussian filter to preserve edges while smoothing the Y channel of YCbCr space. Then, to discriminate between underexposure and overexposure, a pixel-wise demarcation is devised based on the local and global luminance levels. For luminance modification, SNRF is employed to increase underexposed pixels and decrease overexposed pixels. Next, to reconstruct a natural color image, an exponential technique is formulated to combine the modified luminance with the original RGB components. Finally, to improve the local contrast that is prone to be degenerated by luminance modification, a local, image-dependent exponential method is designed and applied to the reconstructed color image.

To validate the effectiveness of the proposed approach, experimental tests were made on four types of images: underexposed images, overexposed images, images with both underexposure and overexposure, and images with normal exposure. Moreover, comparisons were made between the proposed method and other solutions, including retinex-based algorithms (MSRCR,10 RACE12), some recent algorithms24,25,35 for tone mapping, an alpha-rooting method,26 and algorithms20,23 related to SNRF. The comparisons of experimental results show that the proposed algorithm has the merits of good contrast and vivid color when enhancing nonuniform illumination images. In addition, the comparisons also demonstrate that the proposed algorithm enhances contrast more effectively for images with normal exposure.

Acknowledgments

The project was supported by the funding from the National Natural Science Foundation of China (Grant Nos. 61502145, 61300122, and 61602065). This work was also supported in part by the Fundamental Research Funds for the Central Universities under Grant No. 2017B42214. We are very grateful to E. Provenzi and M. Fierro for sending us the code of RACE. We would also like to thank Laurence Meylan, Shuhang Wang, Sangkeun Lee, and Yuecheng Li for making their codes available online.

References

1. 

E. Land and J. McCann, “Lightness and retinex theory,” J. Opt. Soc. Am., 61 (1), 1 –11 (1971). http://dx.doi.org/10.1364/JOSA.61.000001 Google Scholar

2. 

E. Provenzi et al., “Mathematical definition and analysis of the retinex algorithm,” J. Opt. Soc. Am. A, 22 (12), 2613 –2621 (2005). http://dx.doi.org/10.1364/JOSAA.22.002613 Google Scholar

3. 

D. Marini and A. Rizzi, “A computational approach to color adaptation effects,” Image Vision Comput., 18 (13), 1005 –1014 (2000). http://dx.doi.org/10.1016/S0262-8856(00)00037-8 Google Scholar

4. 

B. Funt, F. Ciurea and J. J. McCann, “Retinex in Matlab,” J. Electron. Imaging, 13 (1), 48 –57 (2004). http://dx.doi.org/10.1117/1.1636761 Google Scholar

5. 

E. Provenzi et al., “Random spray retinex: a new retinex implementation to investigate the local properties of the model,” IEEE Trans. Image Process., 16 (1), 162 –171 (2007). http://dx.doi.org/10.1109/TIP.2006.884946 Google Scholar

6. 

G. Gianini, A. Manenti and A. Rizzi, “QBRIX: a quantile-based approach to retinex,” J. Opt. Soc. Am., 31 (12), 2663 –2673 (2014). http://dx.doi.org/10.1364/JOSAA.31.002663 Google Scholar

7. 

E. Land, “An alternative technique for the computation of the designator in the retinex theory of color vision,” Proc. Natl. Acad. Sci. U.S.A., 83 (10), 3078 –3080 (1986). http://dx.doi.org/10.1073/pnas.83.10.3078 Google Scholar

8. 

D. J. Jobson, Z. Rahman and G. A. Woodell, “Properties and performance of a center/surround retinex,” IEEE Trans. Image Process., 6 (3), 451 –462 (1997). http://dx.doi.org/10.1109/83.557356 Google Scholar

9. 

D. J. Jobson, Z. Rahman and G. A. Woodell, “A multi-scale retinex for bridging the gap between color images and the human observation of scenes,” IEEE Trans. Image Process., 6 (7), 965 –976 (1997). http://dx.doi.org/10.1109/83.597272 Google Scholar

10. 

Z. Rahman, D. J. Jobson and G. A. Woodell, “Retinex processing for automatic image enhancement,” J. Electron. Imaging, 13 (1), 100 –110 (2004). http://dx.doi.org/10.1117/1.1636183 Google Scholar

11. 

A. Rizzi, C. Gatta and D. Marini, “A new algorithm for unsupervised global and local color correction,” Pattern Recognit. Lett., 24 (11), 1663 –1677 (2003). http://dx.doi.org/10.1016/S0167-8655(02)00323-9 Google Scholar

12. 

E. Provenzi et al., “A spatially variant white-patch and gray-world method for color image enhancement driven by local contrast,” IEEE Trans. Pattern Anal. Mach. Intell., 30 (10), 1757 –1770 (2008). http://dx.doi.org/10.1109/TPAMI.2007.70827 Google Scholar

13. 

Y. Kim, “Enhancement using brightness preserving bihistogram equalization,” IEEE Trans. Consum. Electron., 43 (1), 1 –8 (1997). http://dx.doi.org/10.1109/30.580378 Google Scholar

14. 

X. Wu, “A linear programming approach for optimal contrast-tone mapping,” IEEE Trans. Image Process., 20 (5), 1262 –1272 (2011). http://dx.doi.org/10.1109/TIP.2010.2092438 Google Scholar

15. 

L. Tao, R. Tompkins and V. K. Asari, “An illuminance-reflectance model for nonlinear enhancement of color image,” in Proc. of IEEE Conf. on Computer Vision and Pattern Recognition, 159 –167 (2005). Google Scholar

16. 

L. Meylan and S. Susstrunk, “High dynamic range image rendering with a retinex-based adaptive filter,” IEEE Trans. Image Process., 15 (9), 2820 –2830 (2006). http://dx.doi.org/10.1109/TIP.2006.877312 Google Scholar

17. 

R. Schettini et al., “Contrast image correction method,” J. Electron. Imaging, 19 (2), 023005 (2010). http://dx.doi.org/10.1117/1.3386681 Google Scholar

18. 

C. Tomasi and R. Manduchi, “Bilateral filtering for gray and color images,” in Proc. of the IEEE Int. Conf. on Computer Vision, 839 –846 (1998). http://dx.doi.org/10.1109/ICCV.1998.710815 Google Scholar

19. 

A. Choudhury and G. Medioni, “Color contrast enhancement for visually impaired people,” in Proc. of IEEE Conf. on Computer Vision and Pattern Recognition, 33 –40 (2010). http://dx.doi.org/10.1109/CVPRW.2010.5543571 Google Scholar

20. 

L. Meylan, D. Alleysson and S. Susstrunk, “Model of retinal local adaptation for the tone mapping of color filter array images,” J. Opt. Soc. Am. A, 24 (9), 2807 –2816 (2007). http://dx.doi.org/10.1364/JOSAA.24.002807 Google Scholar

21. 

K. I. Naka and W. A. H. Rushton, “S-potentials from luminosity units in the retina of fish (Cyprinidae),” J. Physiol., 185 (3), 587 –599 (1966). http://dx.doi.org/10.1113/jphysiol.1966.sp008003 Google Scholar

22. 

Y. Wang and Y. Luo, “Adaptive color contrast enhancement for digital images,” Opt. Eng., 50 (11), 117006 (2011). http://dx.doi.org/10.1117/1.3655500 Google Scholar

23. 

Y. Wang and Y. Luo, “Balanced color contrast enhancement for digital images,” Opt. Eng., 51 (10), 107001 (2012). http://dx.doi.org/10.1117/1.OE.51.10.107001 Google Scholar

24. 

S. Wang et al., “Naturalness preserved enhancement algorithm for non-uniform illumination images,” IEEE Trans. Image Process., 22 (9), 3538 –3548 (2013). http://dx.doi.org/10.1109/TIP.2013.2261309 Google Scholar

25. 

Y. Shin, S. Jeong and S. Lee, “Efficient naturalness restoration for non-uniform illumination images,” IET Image Process., 9 (8), 662 –671 (2015). http://dx.doi.org/10.1049/iet-ipr.2014.0437 Google Scholar

26. 

A. M. Grigoryan, J. Jenkinson and S. S. Agaian, “Quaternion Fourier transform based alpha-rooting method for color image measurement and enhancement,” Signal Process., 109, 269 –289 (2015). http://dx.doi.org/10.1016/j.sigpro.2014.11.019 Google Scholar

27. 

A. M. Grigoryan, A. John and S. S. Agaian, “Color image enhancement of medical images using alpha-rooting and zonal alpha-rooting methods on 2-D QDFT,” Proc. SPIE, 10136 1013618 (2017). http://dx.doi.org/10.1117/12.2254889 Google Scholar

28. 

K. A. Panetta, E. J. Wharton and S. S. Agaian, “Human visual system based image enhancement and measure of image enhancement,” IEEE Trans. Syst. Man Cybern. Part A, 38 (1), 174 –188 (2008). http://dx.doi.org/10.1109/TSMCB.2007.909440 Google Scholar

29. 

K. A. Panetta et al., “Parameterized logarithmic framework for image enhancement,” IEEE Trans. Syst. Man Cybern. Part B, 41 (2), 460 –473 (2011). http://dx.doi.org/10.1109/TSMCB.2010.2058847 Google Scholar

30. 

M. Hajinoroozi, A. Grigoryan and S. S. Agaian, “Image enhancement with weighted histogram equalization and heap transforms,” in Proc. of World Automation Congress (WAC), 1 –6 (2016). http://dx.doi.org/10.1109/WAC.2016.7582992 Google Scholar

31. 

N. Jayant, J. Johnston and R. Safranek, “Signal compression based on models of human perception,” Proc. IEEE, 81 (10), 1385 –1422 (1993). http://dx.doi.org/10.1109/5.241504 Google Scholar

32. 

W. Lin, “Computational models for just-noticeable difference,” Digital Video Image Quality and Perceptual Coding, 281 –303 CRC Press, Boca Raton, Florida (2006). Google Scholar

33. 

X. Yang et al., “Just-noticeable-distortion profile with nonlinear additivity model for perceptual masking in color images,” in Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing (ICASSP), 609 –612 (2003). http://dx.doi.org/10.1109/ICASSP.2003.1199548 Google Scholar

34. 

C. H. Chou and Y. C. Li, “A perceptually tuned subband image coder based on the measure of just-noticeable-distortion profile,” IEEE Trans. Circuits Syst. Video Technol., 5 (6), 467 –476 (1995). http://dx.doi.org/10.1109/76.475889 Google Scholar

35. 

Y. C. Li et al., “Saliency guided naturalness enhancement in color images,” Optik, 127 (3), 1326 –1334 (2016). http://dx.doi.org/10.1016/j.ijleo.2015.07.177 Google Scholar

36. 

K. Panetta, L. Bao and S. Agaian, “A human visual ‘no-reference’ image quality measure,” IEEE Instrum. Meas. Mag., 19 (3), 34 –38 (2016). http://dx.doi.org/10.1109/MIM.2016.7477952 Google Scholar

37. 

K. Panetta, C. Gao and S. Agaian, “No reference color image contrast and quality measures,” IEEE Trans. Consum. Electron., 59 (3), 643 –651 (2013). http://dx.doi.org/10.1109/TCE.2013.6626251 Google Scholar

38. 

“Retinex image enhancement algorithm patented by NASA and TruView Imaging Co., 2003,” (2010) http://www.truview.com/ Google Scholar

39. 

P. V. Gehler et al., “Bayesian color constancy revisited,” in Proc. of IEEE Conf. on Computer Vision and Pattern Recognition, 1 –8 (2008). Google Scholar

40. 

“Photography,” (2010) http://philip.greenspun.com/ Google Scholar

Biography

Yanfang Wang is a lecturer at the College of Computer and Information, Hohai University, China. She received her BE degree from Northwestern Polytechnical University in 2008 and her PhD in control science and engineering from Tsinghua University in 2014. Her current research interests include color image enhancement, color constancy, image filtering, and color-image quality assessment.

Qian Huang is an associate professor at the College of Computer and Information, Hohai University, China. He received his BS degree from Nanjing University in 2003 and his PhD in computer science and technology from Graduate University of Chinese Academy of Sciences in 2010. His current research interests include video processing, cloud computing, and machine learning.

Jing Hu is a lecturer at the Department of Computer Science, Chengdu University of Information Technology. She received her BE degree from the University of Electronic Science and Technology of China in 2009 and her PhD in control science and engineering from Tsinghua University in 2015. Her current research interests include image superresolution, image denoising, image enhancement, and magnetic resonance imaging.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Yanfang Wang, Qian Huang, and Jing Hu "Adaptive enhancement for nonuniform illumination images via nonlinear mapping," Journal of Electronic Imaging 26(5), 053012 (19 September 2017). https://doi.org/10.1117/1.JEI.26.5.053012
Received: 28 February 2017; Accepted: 24 August 2017; Published: 19 September 2017
KEYWORDS: RGB color model, Gaussian filters, Image enhancement, Image filtering, Visualization, Digital filtering, Detection and tracking algorithms