Transmission map estimation of weather-degraded images using a hybrid of recurrent fuzzy cerebellar model articulation controller and weighted strategy
Jyun-Guo Wang, Shen-Chuan Tai, Cheng-Jian Lin
Abstract
This study proposes a hybrid of a recurrent fuzzy cerebellar model articulation controller (RFCMAC) and a weighted strategy for solving single-image visibility in a degraded image. The proposed RFCMAC model is used to estimate the transmission map. The average value of the brightest 1% in a hazy image is calculated for atmospheric light estimation. A new adaptive weighted estimation is then used to refine the transmission map and remove the halo artifact from the sharp edges. Experimental results show that the proposed method has better dehazing capability compared to state-of-the-art techniques and is suitable for real-world applications.

1.

Introduction

Weather conditions can severely limit visibility in outdoor scenes. Atmospheric phenomena such as fog and haze significantly degrade the captured scene. Since visibility depends on the state of the air, the amount of suspended particles in the air directly affects image visibility. These particles, generally water droplets or aerosols, cannot be ignored. Both absorption and scattering of light by particles and gases in the atmosphere reduce visibility, and scattering by particulates causes more serious degradation than absorption. As a result, distant objects and parts of the scene are not visible: the image loses contrast and color fidelity, and the visual quality of the scene is reduced. In a visual sense, the quality of the degraded image is unacceptable, so a simple and effective scene recovery method is essential. Image dehazing is a challenging problem, and image recovery technology has attracted the attention of many researchers. The low visibility of hazy images reduces the accuracy of computer vision techniques, such as object detection, face tracking, license plate recognition, and satellite imaging, as well as of multimedia devices, such as surveillance systems and advanced driver assistance systems. Hence, haze removal techniques are important for improving the visibility of images. Restoring hazy images is a particularly challenging problem that requires specific strategies, and widely varying methods have emerged to solve it. Enhancing degraded images is a fundamental task in many image processing and vision applications. The main strategies proposed for enhancing the visibility of a degraded image are summarized below.

The first type is the nonmodel method, such as histogram equalization,1 Retinex theory,2 the wavelet transform,3 and gamma correction curves.4 The shortcoming of these methods is that they seriously affect regions that are already clear and preserve color fidelity poorly.

The second type is the model-based method, which depends on a physical model. Compared to the nonmodel methods, these methods achieve better dehazing results by modeling scattering and absorption and by exploiting additional atmospheric information in the input images, such as scene depth,5,6 multiple images,7–9 polarization angles,10,11 and geometry models.12,13 Narasimhan and Nayar7,8 developed an interactive depth map for removing weather effects, but their method had limited effectiveness. Kopf et al.13 presented a novel deep photo system that uses prior knowledge of the scene geometry when browsing and enhancing photos. However, the method required multiple images or additional information to obtain a better estimate of scattering and absorption, which limited its applications. Hautière et al.12 designed a method that uses weather conditions and a priori scene structure to restore image contrast for vehicle vision systems.

A novel technique developed in Refs. 10, 14, and 15 exploited the partially polarized properties of airlight. The haze effect was estimated by analyzing images of the same scene taken through polarizing filters at different angles; the difference among these images gives the magnitude of polarization, which is used to estimate the haze light component. Because polarized light is not the major degradation factor, these methods are less robust for scenes with dense haze.

Another recently developed strategy uses a physical model together with a single hazy image as the only input. This approach has recently become a popular way of eliminating image haze, with several different strategies.16–20 Roughly, these methods can be categorized as contrast-based and statistical approaches. An example of a contrast-based approach is the method of Tan,17 in which image restoration maximizes the local contrast while constraining the image intensity to be less than the global atmospheric light value. Tarel and Hautière19 combined a computationally efficient technique with a contrast-based technique; their method assumes that the depth map must be smooth except along edges with large depth jumps. The second category, statistical approaches, includes the technique of Fattal,16 which employs a graphical model to resolve the ambiguous atmospheric light color and assumes that the image shading and scene transmission are partially uncorrelated. Under this assumption, statistical estimation is used to infer the albedo of the scene and the transmission medium. The method provides a physically consistent estimate; however, because the variation of the two functions in Ref. 16 is not obvious, it requires substantial fluctuation of color and luminance in the hazy scene. He et al.18 developed a statistical approach that observes the dark channel to roughly estimate the transmission map and then refines the final depth map using a relatively computationally expensive matting strategy.21 In this approach, pixels must be processed over the entire image, which requires a long computation time. Nishino et al.20 used a Bayesian probabilistic formulation that fully leverages latent statistical structures to estimate the scene albedo and depth from a single degraded image. A recent study by Gibson and Nguyen22 proposed a new image dehazing method based on the dark channel concept; unlike the previous dark channel method, it finds the average of the darkest pixels in each ellipsoid. However, the assumption in Ref. 22 may select inaccurate pixels corresponding to bright objects. Fattal23 derived a local formation model that explains color lines in hazy scenes and used the offsets of these lines to recover the scene transmission. In addition, Ancuti and Ancuti24 proposed a fusion-based strategy in which two inputs derived from the original hazy image by white balancing and contrast enhancement are blended; to keep the most significant detected features, the inputs of the fusion process are weighted by specific computed maps.

Recently, artificial neural networks (ANNs) have been widely used in many different fields. Research related to ANNs has proved suitable for many areas, such as control,25,26 identification,27,28 pattern recognition,29,30 equalization,31,32 and image processing.33,34 The cerebellar model articulation controller (CMAC) proposed by Albus35,36 is a commonly used ANN model. The CMAC imitates the structure and function of the cerebellum of the human brain and behaves as a local network. It can be viewed as a basis function network that uses plateau basis functions to compute the output for a given input data point, so only the basis functions assigned to the hypercubes covering the input data are needed. In other words, for a given input vector, only a few of the network nodes (or hypercube cells) are active and effectively contribute to the corresponding network output. Thus, the CMAC has good learning and generalization capabilities. However, the CMAC requires a large amount of memory for high-dimensional problems,37,38 is ineffective for online learning systems,39 and has relatively poor function approximation ability.40,41 Another problem is that it is difficult to determine the memory structure, e.g., to adaptively select structural parameters, in the CMAC model.42,43 Recently, several researchers have proposed solutions to the above problems, including fuzzy membership functions,44 selection of learning parameters,45 topology structure,46 spline functions,47 and fuzzy C-means.48 Embedding fuzzy theory in the CMAC model has been widely discussed; accordingly, a fuzzy CMAC called FCMAC49 was proposed, which takes full advantage of fuzzy theory and combines it with the local generalization feature of the CMAC model.49,50 A recurrent network has also been embedded in the CMAC model by adding feedback connections with a receptive field cell,51 which provides dynamic characteristics (past output information of the network is considered). However, the above-mentioned methods still have several drawbacks: the mapping capability of local approximation by hyperplanes is not good enough, and many hypercube cells (rules) are required.

Therefore, this study developed a recurrent fuzzy cerebellar model articulation controller (RFCMAC) model to solve the above problems and to enable applications in a wide variety of fields. A hybrid of the recurrent fuzzy CMAC and a weighted strategy is used to address the image dehazing problem. The proposed method provides high-quality images and effectively suppresses halo artifacts. Its advantages are as follows:

  • 1. The recurrent structure combines the advantages of local and global feedback.

  • 2. Many studies52,53 have considered only past states in the recurrent structure, which is insufficient without reference to current states. The proposed method instead considers the correlation between past states and current states.

  • 3. The proposed method determines the transmission map values more accurately, and the average of the brightest 1% of pixels is selected as the atmospheric light.

  • 4. The proposed method applies a weighted strategy to generate a refining transmission map, thereby removing the halo effect.

The rest of this paper is structured as follows. Section 2 discusses the theoretical background of light propagation in such environments. In Sec. 3, we introduce the proposed RFCMAC and weighted strategy for image dehazing. Section 4 presents the experimental results and compares the proposed approach with other state-of-the-art methods. Finally, conclusions are drawn in Sec. 5.

2.

Theory of Light Propagation

Generally, a camera taking an outdoor photograph forms the image from the light it receives from the environment, such as sunlight reflected from surfaces, as shown in Fig. 1. Due to absorption and scattering, light crossing the atmosphere is attenuated and dispersed. In physical terms, the number of suspended particles is low in sunny weather, so the captured image is clear. In contrast, dust and water particles in the air during poor weather scatter light, which severely degrades image quality. In such degraded circumstances, only 1% of the reflected light reaches the observer, causing poor visibility.54 McCartney55 also noted that haze is an atmospheric phenomenon in which the clear sky is obscured by dust, smoke, and other dry particles. In the captured image, haze produces a distinctive gray hue and reduces visibility. Based on the above, the physical hazy-image model can be expressed as

Eq. (1)

$$I(x) = J(x)\,t(x) + A\,[1 - t(x)],$$
where $I$ is the observed hazy image and $x = (x_1, x_2)$ denotes the pixel coordinates of the observed RGB colors. In Eq. (1), the hazy model consists of two main components, a direct attenuation term and a veiling light (i.e., airlight) term. $J(x)$ is the light reflected from the surfaces, i.e., the haze-free image; $t(x) \in [0,1]$ represents the transmission of the reflected light; and $A$ is the atmospheric light. The first component, $J(x)t(x)$, represents the direct attenuation, or direct transmission, of the scene radiance; the attenuation results from the interaction between the scene radiance and particles during transmission. In other words, it corresponds to the light reflected from surfaces in the scene that reaches the camera directly without being scattered. The other component, $A[1 - t(x)]$, expresses the color cast of the scene due to the scattering of atmospheric light. Here $t$ denotes the amount of light transmitted between the surface and the observer. Assuming a homogeneous medium, the transmission is $t(x) = e^{-\beta d(x)}$, where $\beta$ is the medium attenuation coefficient and $d$ is the distance between the observer and the considered surface. Since the transmission decays exponentially with depth, this relation yields image depth information without additional sensing devices. Therefore, only the transmission map and the color vector of the atmospheric light are needed to eliminate the haze effect in the image.
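As a concrete illustration, the following minimal NumPy sketch applies the imaging model of Eq. (1) to a haze-free image with a synthetic depth map; the function name, the depth ramp, and the airlight value are illustrative assumptions, not part of the original method.

```python
# Minimal sketch of the haze imaging model in Eq. (1), assuming NumPy and a
# synthetic depth map; beta, depth, and airlight are illustrative values.
import numpy as np

def synthesize_haze(J, depth, airlight, beta=1.0):
    """Apply I = J*t + A*(1 - t) with t = exp(-beta * d) per pixel."""
    t = np.exp(-beta * depth)            # transmission, in (0, 1]
    t = t[..., np.newaxis]               # broadcast over the RGB channels
    return J * t + airlight * (1.0 - t)

# Example: a haze-free image J in [0, 1], a left-to-right depth ramp, gray airlight.
J = np.random.rand(240, 320, 3)
depth = np.tile(np.linspace(0.0, 3.0, 320), (240, 1))
I = synthesize_haze(J, depth, airlight=np.array([0.9, 0.9, 0.9]))
```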

Fig. 1

Haze imaging model.


3.

Proposed Method

This section presents the proposed method in detail; it uses the RFCMAC model and a weighted strategy to recover the scene from a single hazy image. Figure 2 shows the flowchart of the proposed method, and the details are presented in the following sections.

Fig. 2

Flow diagram of the proposed dehazing algorithm.


3.1.

Estimation of Transmission Map Features Using RFCMAC Model

The transmission map and the atmospheric light play important roles in haze removal; a good dehazing method that estimates both can properly recover a hazy image. Haze is generated by light attenuation and depends on the distribution of particles in the air. According to Eq. (1), both the transmission map and the atmospheric light are important factors, so the estimates of the transmission and the atmospheric light must be improved. This study proposes an RFCMAC model for estimating the transmission map more accurately. The RFCMAC model combines the traditional CMAC model, an interactive feedback mechanism, and a Takagi-Sugeno-Kang (TSK)-type linear function to obtain better solutions. The interactive feedback mechanism gives the model the ability to capture critical information from other hypercube cells. The structure of the RFCMAC and the associated learning algorithm are presented as follows.

3.1.1.

Structure of the RFCMAC model

The performance of the proposed RFCMAC model is enhanced by using an interactive feedback mechanism in the temporal layer and a TSK-type linear function in the subsequent layer. Figure 3 shows the six-layered structure of the RFCMAC model. The structure realizes a similar fuzzy IF–THEN rule (hypercube cell).

Fig. 3

Structure of the RFCMAC model.


Rule j:

IF $x_1$ is $A_{1j}$ and $x_2$ is $A_{2j}$ and $\ldots$ and $x_i$ is $A_{ij}$ and $\ldots$ and $x_{ND}$ is $A_{NDj}$, THEN $y_j = O_j^{(4)}\left(\alpha_{0j} + \sum_{i=1}^{ND}\alpha_{ij} x_i\right)$,
where $x_i$ represents the $i$'th input variable, $y_j$ denotes the local output variable, $A_{ij}$ is the linguistic term with a Gaussian membership function in the antecedent part, $O_j^{(4)}$ is the output of the interactive feedback, and $\alpha_{0j} + \sum_{i=1}^{ND}\alpha_{ij} x_i$ is the basis TSK-type linear function of the input variables. The operation of the nodes in each layer of the RFCMAC model is described as follows. In this description, $O^{(l)}$ represents the output of a node in the $l$'th layer.

Layer 1 (input layer): This layer receives the input feature vector $x = (x_1, x_2, \ldots, x_{ND})$, and the inputs are crisp values. The layer requires no weight adjustment; each node simply transmits its input value to the next layer. The corresponding outputs are calculated as

Eq. (2)

$$O_i^{(1)} = u_i^{(1)}, \quad \text{and} \quad u_i^{(1)} = x_i.$$

Layer 2 (fuzzification layer): The layer performs a fuzzification operation and uses a Gaussian membership function to calculate the firing degree of each dimension. The Gaussian membership function is defined as follows:

Eq. (3)

$$O_{ij}^{(2)} = \exp\!\left[-\frac{\left(u_i^{(1)} - m_{ij}\right)^2}{\sigma_{ij}^2}\right], \quad \text{and} \quad u_i^{(2)} = O_i^{(1)},$$
where mij and σij denote the mean and variance of the Gaussian membership function, respectively.

Layer 3 (spatial firing layer): Each node of this layer receives the firing strength of the associated hypercube cell by the node of a fuzzy set in layer 2. All layer two outputs are collected in layer three. Specifically, each node performs an algebraic product operation on inputs to generate spatial firing strength αj. The layer determines the number of hypercube cells in the current iteration. For each inference node, the output function can be computed as follows:

Eq. (4)

$$O_j^{(3)} = \prod_{i=1}^{ND} u_{ij}^{(3)}, \quad \text{and} \quad u_{ij}^{(3)} = O_{ij}^{(2)},$$
where $\prod$ denotes the product operation.

Layer 4 (temporal firing layer): Each node is a recurrent hypercube cell node that includes an internal feedback (self-loop) and an external interactive feedback loop. The output of the recurrent hypercube cell node depends on both the current spatial firing strength and the previous temporal firing strength; that is, each node refers to relative information from itself and from other nodes. Because the self-feedback of a hypercube cell node is not sufficient to represent all of the necessary information, the proposed model refers to relative information not only from the local source (the node's feedback from itself) but also from the global source (feedback from other nodes). The linear combination function of the temporal firing strength is described as follows:

Eq. (5)

$$O_j^{(4)} = \sum_{k=1}^{NA}\left[\lambda_{kj}^{q}\cdot O_k^{(4)}(t-1)\right] + \left(1 - \gamma_j^{q}\right)\cdot u_j^{(4)}, \quad \text{and} \quad u_j^{(4)} = O_j^{(3)},$$
where $\lambda_{kj}^{q}$ represents the recurrent weights and determines the compromise ratio between the current and previous inputs to the network outputs; $\gamma_j^{q} = \sum_{k=1}^{NA}\lambda_{kj}^{q}$ and $\lambda_{kj}^{q} = R_{kj}^{q}/NA$ $(0 \le R_{kj}^{q} \le 1)$ denote the interactive weights of the hypercube cells from the node itself and from the other nodes. $R_{kj}^{q}$ is the connection weight from the $k$'th node to the $j$'th node and is a random value between 0 and 1, and $NA$ is the number of hypercube cells. Therefore, the compromise ratio between the current and previous inputs lies between 0 and 1.

Layer 5 (consequent layer): Each node is a function of a linear combination of input variables in this layer. The equation is expressed as

Eq. (6)

$$O_j^{(5)} = O_j^{(4)}\left(a_{0j} + \sum_{i=1}^{ND} a_{ij} x_i\right).$$

Layer 6 (output layer): This layer uses the centroid of area (COA) approach to defuzzify a fuzzy output into a scalar output. Then the actual output y is derived as follows:

Eq. (7)

$$y = \frac{\sum_{j=1}^{NA} O_j^{(4)}\left(a_{0j} + \sum_{i=1}^{ND} a_{ij} x_i\right)}{\sum_{j=1}^{NA} O_j^{(4)}}.$$
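To make the six-layer computation of Eqs. (2)-(7) concrete, the following NumPy sketch implements one forward pass; the array shapes and parameter names (m, sigma, lam, a0, a) are assumptions for illustration rather than the authors' implementation.

```python
# Hedged sketch of one RFCMAC forward pass, Eqs. (2)-(7).
import numpy as np

def rfcmac_forward(x, m, sigma, lam, a0, a, O4_prev):
    """x: (ND,) inputs; m, sigma: (ND, NA) Gaussian parameters;
    lam: (NA, NA) recurrent/interactive weights; a0: (NA,), a: (ND, NA)
    TSK coefficients; O4_prev: (NA,) temporal firing strengths at t-1."""
    # Layers 1-2: fuzzification with Gaussian membership functions, Eq. (3)
    O2 = np.exp(-((x[:, None] - m) ** 2) / sigma ** 2)       # (ND, NA)
    # Layer 3: spatial firing strength, product over input dimensions, Eq. (4)
    O3 = O2.prod(axis=0)                                      # (NA,)
    # Layer 4: temporal firing with self and interactive feedback, Eq. (5)
    gamma = lam.sum(axis=0)                                    # gamma_j = sum_k lam_kj
    O4 = lam.T @ O4_prev + (1.0 - gamma) * O3                  # (NA,)
    # Layer 5: TSK-type consequent, Eq. (6)
    O5 = O4 * (a0 + a.T @ x)                                   # (NA,)
    # Layer 6: defuzzification by centroid of area, Eq. (7)
    y = O5.sum() / O4.sum()
    return y, O4
```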

3.1.2.

Learning algorithm of the RFCMAC model

The proposed learning algorithm combines structure learning and parameter learning when constructing the RFCMAC model. Figure 4 shows a flowchart of the proposed learning algorithm. First, the self-constructing input space partition in structure learning is based on the degree measure used to appropriately determine the various distributions of the input training data. In other words, the firing strength in structure learning is used to determine whether a new fuzzy hypercube cell (rule) should be added to satisfy the fuzzy partitioning of input variables. Second, the parameter learning procedure performs the back propagation algorithm by minimizing a given cost function to adjust parameters. The RFCMAC model initially has no hypercube cell nodes except the input–output nodes. According to the reception of online incoming training data in the structure and parameter learning processes, the nodes from layer 2 to layer 5 are created automatically. Parameters Rkjq and aij in the initial model are randomly generated between 0 and 1.

Fig. 4

Flowchart of the proposed structure and parameter learning.


Structure learning algorithm

Generally, the main purpose of structure learning is to determine whether a new hypercube cell should be extracted from the training data. For each incoming pattern $x_i$, the firing strength of the spatial firing layer can be defined as the degree to which the incoming pattern belongs to the corresponding cluster. The entropy measure is used to estimate the distance between each data point and each membership function; entropy values between the data points and the current membership functions are calculated to determine whether to add a new hypercube cell. The entropy measure is calculated from the firing strength $u_{ij}^{(3)}$ as follows:

Eq. (8)

$$EM_j = -\sum_{i=1}^{ND} D_{ij}\log_2 D_{ij},$$
where $D_{ij} = \exp\left(u_{ij}^{(2)} - 1\right)$ and $EM_j \in [0,1]$. Based on Eq. (9), the criterion of the degree measure is used to decide whether a new hypercube cell should be generated for the new incoming data $x = (x_1, x_2, \ldots, x_{ND})$. The maximum entropy measure is calculated as follows:

Eq. (9)

$$EM_{\max} = \max_{1 \le j \le NL} EM_j,$$
where $NL$ is the number of hypercube cells and $\overline{EM} \in [0,1]$ is a prespecified threshold. To limit the number of hypercube cells in the proposed RFCMAC model, the threshold decays during the learning process. A low threshold leads to the learning of coarse clusters (i.e., few hypercube cells are generated), whereas a high threshold leads to the learning of fine clusters (i.e., many hypercube cells are generated). Therefore, the selection of the threshold $\overline{EM}$ critically affects the simulation results; that is, $\overline{EM}$ determines whether a proper new hypercube cell is generated. If $EM_{\max} \le \overline{EM}$, a new hypercube cell is generated; otherwise, no hypercube cell is added.
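A short sketch of this structure-learning test, Eqs. (8) and (9), is given below; it reads the criterion as "add a cell when the maximum entropy measure does not exceed the threshold" and uses $D_{ij} = \exp(u_{ij} - 1)$ as defined above, so treat it as an illustrative interpretation rather than the authors' code.

```python
# Hedged sketch of the rule-generation criterion in Eqs. (8)-(9).
import numpy as np

def should_add_cell(U2, em_threshold):
    """U2: (ND, NL) firing strengths of the current input against the NL
    existing hypercube cells; returns True if a new cell should be added."""
    D = np.exp(U2 - 1.0)                      # D_ij, in (0, 1] for U2 in [0, 1]
    EM = -(D * np.log2(D)).sum(axis=0)        # entropy measure per cell, Eq. (8)
    return EM.max() <= em_threshold           # generation criterion, Eq. (9)
```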

Parameter learning algorithm

Five parameters of the model are adjusted by the learning algorithm and optimized from the training data. Parameter learning occurs concurrently with structure learning. For each piece of incoming data, the five parameters (i.e., $m_{ij}$, $\sigma_{ij}$, $a_{0j}$, $a_{ij}$, and $\lambda_{kj}^{q}$) are tuned in the RFCMAC model, whether the corresponding hypercube cells are newly generated or already exist. The gradient descent method is used to adjust the parameters of the receptive field functions and the TSK-type function. For clarity, consider the single-output case. The cost function $E$ to be minimized is defined as

Eq. (10)

$$E(t) = \frac{1}{2}\left[y_d(t) - y(t)\right]^2,$$
where $y_d(t)$ denotes the desired output and $y(t)$ is the model output at each discrete time $t$. In each training cycle, the model output $y(t)$ is computed by a feed-forward pass from the input variables. According to Eq. (10), the error is used to adjust the weight vector of the proposed RFCMAC model over a given number of training cycles. The well-known backpropagation learning rule can be written as

Eq. (11)

$$W(t+1) = W(t) + \Delta W(t) = W(t) + \left[-\eta\,\frac{\partial E(t)}{\partial W(t)}\right],$$
where $\eta$ and $W$ represent the learning rate and the free parameters, respectively. The learning rate $\eta$ sets the step size of the search: a low value may lead to a local optimum, whereas a high value leads to premature convergence and prevents a better solution from being found. Therefore, the initial settings of $\bar{\alpha}$ and $\eta$ are based on empirical estimation. According to Eq. (10), the gradient with respect to an arbitrary weight vector $W$ is calculated by

Eq. (12)

$$\frac{\partial E(t)}{\partial W} = -e(t)\,\frac{\partial y(t)}{\partial W}.$$

The corresponding antecedent and consequent parameters of the RFCMAC model are then adjusted using the chain rule to perform the error term recursive operation. With the RFCMAC model and the cost function as defined in Eq. (10), the update rule for aij can be derived as

Eq. (13)

$$a_{ij}(t+1) = a_{ij}(t) + \Delta a_{ij}(t),$$
where

Eq. (14)

$$\Delta a_{ij}(t) = -\eta\cdot\frac{\partial E}{\partial a_{ij}} = -\eta\cdot\frac{\partial E}{\partial y}\cdot\frac{\partial y}{\partial O_j^{(5)}}\cdot\frac{\partial O_j^{(5)}}{\partial a_{ij}}.$$

The equations used to update the recurrent weight parameter λkjq cell are

Eq. (15)

$$\lambda_{kj}^{q}(t+1) = \lambda_{kj}^{q}(t) + \Delta\lambda_{kj}^{q}(t),$$
where

Eq. (16)

$$\Delta\lambda_{kj}^{q}(t) = -\eta\cdot\frac{\partial E}{\partial\lambda_{kj}^{q}} = -\eta\cdot\frac{\partial E}{\partial y}\cdot\frac{\partial y}{\partial O_j^{(5)}}\cdot\frac{\partial O_j^{(5)}}{\partial O_j^{(4)}}\cdot\frac{\partial O_j^{(4)}}{\partial\lambda_{kj}^{q}} = \eta\cdot e\cdot\frac{\left(a_{0j}+\sum_{i=1}^{ND}a_{ij}x_i\right)\sum_{j=1}^{NL}O_j^{(4)} - \sum_{j=1}^{NL}O_j^{(4)}\left(a_{0j}+\sum_{i=1}^{ND}a_{ij}x_i\right)}{\left(\sum_{j}^{NL}O_j^{(4)}\right)^{2}}\left[O_j^{(4)}(t-1)-\alpha_j\right],$$
where $\eta$ represents the learning rate of the recurrent weights $\lambda$ for the fuzzy weight functions and is set between 0 and 1, and $e$ denotes the error between the desired output and the actual output, i.e., $e = y_d - y$.

Here, $m_{ij}$ and $\sigma_{ij}$ represent the mean and variance of the receptive field functions, respectively. The adjustable parameters of the receptive field functions are updated by

Eq. (17)

$$m_{ij}(t+1) = m_{ij}(t) + \Delta m_{ij}(t),$$
and

Eq. (18)

$$\sigma_{ij}(t+1) = \sigma_{ij}(t) + \Delta\sigma_{ij}(t),$$
where

Eq. (19)

$$\Delta m_{ij} = -\eta\cdot\frac{\partial E}{\partial y}\cdot\frac{\partial y}{\partial O_j^{(5)}}\cdot\frac{\partial O_j^{(5)}}{\partial O_j^{(4)}}\cdot\frac{\partial O_j^{(4)}}{\partial O_j^{(3)}}\cdot\frac{\partial O_j^{(3)}}{\partial O_{ij}^{(2)}}\cdot\frac{\partial O_{ij}^{(2)}}{\partial m_{ij}} = \eta\cdot e\cdot\frac{\left(a_{0j}+\sum_{i=1}^{ND}a_{ij}x_i\right)\sum_{j=1}^{NL}O_j^{(4)} - \sum_{j=1}^{NL}O_j^{(4)}\left(a_{0j}+\sum_{i=1}^{ND}a_{ij}x_i\right)}{\left(\sum_{j}^{NL}O_j^{(4)}\right)^{2}}\cdot\left(1-\gamma_j^{q}\right)\cdot\alpha_j\cdot\frac{2\left(u_i^{(1)}-m_{ij}\right)}{\sigma_{ij}^{2}},$$
and

Eq. (20)

$$\Delta \sigma_{ij} = -\eta\cdot\frac{\partial E}{\partial y}\cdot\frac{\partial y}{\partial O_j^{(5)}}\cdot\frac{\partial O_j^{(5)}}{\partial O_j^{(4)}}\cdot\frac{\partial O_j^{(4)}}{\partial O_j^{(3)}}\cdot\frac{\partial O_j^{(3)}}{\partial O_{ij}^{(2)}}\cdot\frac{\partial O_{ij}^{(2)}}{\partial \sigma_{ij}} = \eta\cdot e\cdot\frac{\left(a_{0j}+\sum_{i=1}^{ND}a_{ij}x_i\right)\sum_{j=1}^{NL}O_j^{(4)} - \sum_{j=1}^{NL}O_j^{(4)}\left(a_{0j}+\sum_{i=1}^{ND}a_{ij}x_i\right)}{\left(\sum_{j}^{NL}O_j^{(4)}\right)^{2}}\cdot\left(1-\gamma_j^{q}\right)\cdot\alpha_j\cdot\frac{2\left(u_i^{(1)}-m_{ij}\right)^{2}}{\sigma_{ij}^{3}},$$
where i denotes the i’th input dimension for i=1,2,,n, and j denotes the j’th hypercube cell.
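As an illustration of how the updates of Eqs. (13) and (14) can be applied, the sketch below performs one gradient-descent step on the TSK consequent parameters, reusing the layer-4 outputs from the forward-pass sketch above; the closed-form derivative and the learning rate are assumptions consistent with $y = \sum_j O_j^{(5)} / \sum_j O_j^{(4)}$, not the authors' code.

```python
# Hedged sketch of one update of the consequent parameters a0_j and a_ij.
# With y = sum_j O5_j / sum_j O4_j and O5_j = O4_j*(a0_j + sum_i a_ij x_i),
# dy/da_ij = O4_j * x_i / sum_j O4_j, so Delta a_ij = eta * e * O4_j * x_i / sum_j O4_j.
import numpy as np

def update_consequent(x, y_desired, y, O4, a0, a, eta=0.05):
    e = y_desired - y                        # error term, Eq. (10)
    denom = O4.sum()
    a0 += eta * e * O4 / denom               # Delta a0_j (x_i = 1 for the bias)
    a += eta * e * np.outer(x, O4) / denom   # Delta a_ij, Eq. (14)
    return a0, a
```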

3.2.

Weighted Strategy for Adaptively Refining the Transmission Map

In the real world, the transmission is not always constant within a window, especially around the contours of objects. In regions where it is not constant, the recovered scene exhibits halos and block artifacts. The proposed solution is to use a pixel-window ratio (PWR) to detect the regions of the recovered scene where halo artifacts may occur and to use an adaptive weighting technique to mitigate them. The PWR is defined as the ratio between the transmission at the pixel itself and the transmission over a 7×7 window and is derived by

Eq. (21)

$$\mathrm{PWR} = \frac{\mathrm{PTM}}{\mathrm{WTM}},$$
where the numerator PTM is the pixel transmission map obtained from the minimum RGB channel with a 1×1 mask, and the denominator WTM is the window transmission map obtained with a 7×7 mask. A PWR value very close to 1 means that the transmission within the window is nearly constant; in this case a halo cannot occur, although the relative color saturation in the image may be very high. In contrast, a PWR value far greater than 1 means that the transmission within the window is not constant and a halo artifact will occur, although excessive color saturation is then not a problem. Although the halo artifact regions can be located from the PWR value, the main problem is how to mitigate the artifacts in these regions. The proposed solution is a weighted strategy that refines the transmission map and mitigates the halo artifact. The weighted strategy is defined as follows:

Eq. (22)

$$t=\begin{cases}\omega\left[(1-\alpha_{\mathrm{PWR}})\,\mathrm{PTM}+\alpha_{\mathrm{PWR}}\,\mathrm{WTM}\right], & \text{if } \mathrm{PWR} > T_{\mathrm{upper}}\\ \omega\left[(1-\beta_{\mathrm{PWR}})\,\mathrm{PTM}+\beta_{\mathrm{PWR}}\,\mathrm{WTM}\right], & \text{if } T_{\mathrm{lower}} < \mathrm{PWR} \le T_{\mathrm{upper}}\\ \omega\,\mathrm{WTM}, & \text{otherwise,}\end{cases}$$
where $\alpha$ and $\beta$ are the weighting factors for mitigating the artifacts, with $0 < \alpha < \beta < 1$. In Eq. (22), a PWR value greater than $T_{\mathrm{upper}}$ means that the pixel transmission differs greatly from the WTM; the weight on the WTM is decreased and the weight on the PTM is increased, so a very small weighting factor $\alpha$ is used to adjust the transmission rapidly and eliminate the halo artifact. If the PWR value lies between $T_{\mathrm{lower}}$ and $T_{\mathrm{upper}}$, the transmission differs only slightly from the WTM; in this situation the larger weighting factor $\beta$ is applied to adjust the transmission smoothly. Otherwise, the WTM value is used directly as the estimate. The values of $\alpha$ and $\beta$ are based on a computational analysis of the intensity values associated with the halos. Figure 5(a) shows the original hazy image, and Figs. 5(b)–5(j) show the results for different values of $\alpha$ and $\beta$. Based on this analysis, the weighting factors $\alpha$ and $\beta$ are set appropriately to improve the quality of the dehazed image.
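The following sketch illustrates one way to apply the PWR test of Eq. (21) and the piecewise blending of Eq. (22), reading $\alpha_{\mathrm{PWR}}$ and $\beta_{\mathrm{PWR}}$ as the fixed weights $\alpha$ and $\beta$; the window operator (a 7×7 box average), $\omega$, and the thresholds are illustrative assumptions, since the tuned values are not stated here.

```python
# Hedged sketch of the PWR test (Eq. 21) and the adaptive weighting (Eq. 22).
import numpy as np
from scipy.ndimage import uniform_filter

def refine_transmission(ptm, omega=0.95, alpha=0.1, beta=0.5,
                        t_upper=1.5, t_lower=1.1):
    """ptm: (H, W) pixelwise transmission map; returns the refined map."""
    wtm = uniform_filter(ptm, size=7)              # 7x7 window transmission (assumed operator)
    pwr = ptm / np.maximum(wtm, 1e-6)              # Eq. (21)
    blend_a = (1 - alpha) * ptm + alpha * wtm      # strong correction, PWR > T_upper
    blend_b = (1 - beta) * ptm + beta * wtm        # mild correction, T_lower < PWR <= T_upper
    t = np.where(pwr > t_upper, blend_a,
        np.where(pwr > t_lower, blend_b, wtm))     # otherwise keep WTM
    return omega * t
```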

Fig. 5

(a) Original haze image; (b)–(j) the results using different α and β values, where (b) α=0.1, β=0.1; (c) α=0.1, β=0.5; (d) α=0.1, β=0.9; (e) α=0.5, β=0.1; (f) α=0.5, β=0.5; (g) α=0.5, β=0.9; (h) α=0.9, β=0.1; (i) α=0.9, β=0.5; and (j) α=0.9, β=0.9.


3.3.

Atmospheric Light Estimation

The atmospheric light must be selected carefully for effective image dehazing; an incorrectly selected atmospheric light yields very poor dehazing results. In some situations, objects are mistakenly treated as atmospheric light, which results in erroneous image restoration. To solve this problem, the proposed solution is to use the average value of the brightest 1% of pixels in the transmission $t$ to refine the atmospheric light level. The average value is calculated as follows:

Eq. (23)

$$A^{c} = \frac{\sum_{\mathrm{pixel}\in x}\mathrm{pixel}^{c}}{\left|x\right|},$$
where $A$ is the atmospheric light, $c$ is the color channel, and $x$ denotes the set of selected brightest pixels. Figure 6 shows the results of scene radiance recovery.
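A minimal sketch of this estimate follows, assuming that the brightest 1% of pixels are selected from the estimated transmission map, as the text describes, and then averaged per channel as in Eq. (23); the exact selection rule is an assumption.

```python
# Hedged sketch of the atmospheric-light estimate in Eq. (23).
import numpy as np

def estimate_airlight(I, t):
    """I: (H, W, 3) hazy image; t: (H, W) estimated transmission map."""
    flat_t = t.ravel()
    k = max(1, int(0.01 * flat_t.size))      # brightest 1% of pixels (assumed selection)
    idx = np.argpartition(flat_t, -k)[-k:]   # indices of the k largest values
    pixels = I.reshape(-1, 3)[idx]
    return pixels.mean(axis=0)               # A^c = sum(pixel^c) / |x| per channel
```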

Fig. 6

Estimation using an average value: (a) original image; (b) estimate of transmission map; (c) image of atmospheric light; and (d) scene radiance recovery.


3.4.

Image Recovery

This section describes how both atmospheric light and transmission features in Secs. 3.2 and 3.3 are used as input factors in scene recovery. The scene radiance recovery step converts Eq. (1) into Eq. (24) to obtain the dehazed images. Therefore, scene J can be recovered as follows:

Eq. (24)

$$J(x) = \frac{I(x) - A}{\max\left[t_0, t(x)\right]} + A,$$
where $t_0$ is the lower bound of the transmission and is set to 0.15. If a little haze remains in the recovered image, the image looks more natural.
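For completeness, a short sketch of the recovery step in Eq. (24); clipping the result to [0, 1] is an added safeguard rather than part of the stated method.

```python
# Hedged sketch of the scene radiance recovery in Eq. (24), with t0 = 0.15.
import numpy as np

def recover_scene(I, t, A, t0=0.15):
    """I: (H, W, 3) hazy image in [0, 1]; t: (H, W) refined transmission; A: (3,)."""
    t = np.maximum(t, t0)[..., np.newaxis]   # lower-bound the transmission
    J = (I - A) / t + A                      # Eq. (24)
    return np.clip(J, 0.0, 1.0)              # safeguard (not part of Eq. 24)
```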

4.

Results and Discussion

The experiments were performed in the C language on a Pentium(R) i7-3770 CPU at 3.20 GHz. The effectiveness and robustness of the proposed method were verified by testing several hazy images, namely, "New York," "ny12," "ny17," "y01," and "y16." The proposed approach was also compared with other well-known haze removal methods.13,16,17–20,24 Performance testing was divided into three parts: (1) results of removing the halo, (2) assessment of the visual images, and (3) analysis of the quantitative measurements.

4.1.

Results of Removing the Halo

Figure 7 shows the results of removing the halo for different images. In Fig. 7(a), the transmission map is estimated from the input hazy image using a 7×7 patch. Although the dehazing results are good, some block effects (halo artifacts) are visible in the blue blocks of Fig. 7(a); this phenomenon occurs because the transmission is not always constant within a patch. In Fig. 7(b), the halo artifacts in the red blocks are suppressed by the proposed method, so no halo artifacts remain when the proposed method is used.

Fig. 7

Removal of halo artifacts for different images. (a) Halo artifacts and (b) removal of the halo artifacts.


4.2.

Estimation of the Visual Image

Figure 8 shows the comparison results. The dehazing results obtained by the proposed method are better than those of Fattal,16 Tarel and Hautière,19 and Ancuti and Ancuti.24 Additionally, Schechner and Averbuch14 adopted a multi-image polarization-based dehazing method that employs the worst and best polarization states among the available image versions. For the comparison with the method of Schechner and Averbuch,14 only one of the inputs used in that study was processed. The dehazing results obtained by the proposed method are superior to those of Schechner and Averbuch.14

Fig. 8

Comparison of dehazing results using various methods.


Figures 9 and 10 also show the comparison results for the proposed approach and other state-of-the-art methods. Figure 9 shows that, compared with the techniques developed by Tan17 and by Tarel and Hautière,19 the proposed method preserves the fine transitions in the hazy regions and does not generate unpleasant artifacts; moreover, the techniques of Tan17 and Tarel and Hautière19 produce oversaturated colors. Although the technique developed by Fattal16 obtains good dehazing results, its applicability is limited in dense haze. The poor performance mainly results from the use of a statistical analysis method that must estimate the variance of the depth map. The technique of Kopf et al.13 obtains a good result in terms of color contrast, but only a little detailed texture is presented in the image. The technique of He et al.18 produces an obvious color difference in some regions. The technique developed by Nishino et al.20 yields aesthetically pleasing results, but some artifacts are introduced in regions considered to be at infinite depth. The method developed by Ancuti and Ancuti24 obtains a natural image, but color differences are visible in some regions, such as on objects. The proposed method effectively handles haze, halation, and color cast.

Fig. 9

Comparison of dehazing techniques for city scene images: (a) ny12 and (b) ny17.


Fig. 10

Comparison of dehazing techniques for mountain scene images: (a) y01 and (b) y16.


An image of a mountain was also used for comparison with other state-of-the-art methods. Figure 10 shows the dehazing results using the various methods. The comparison shows that the Tan method17 produces oversaturation and causes color differences and halo artifacts. A good color contrast is obtained by the Fattal16 method, but some differences in detailed texture and some color differences are visible. The results of Kopf et al.13 are similar to those of Fattal.16 Although Tarel and Hautière's19 method preserves detailed texture well, it introduces color differences. Because of color differences caused by oversaturation, the results obtained by the He et al.18 method look unnatural. The technique developed by Nishino et al.20 obtains a good overall image, but an unnatural appearance is visible in the clouds in the sky. The technique of Ancuti and Ancuti24 performs well in terms of true color contrast; however, a slightly unnatural appearance still occurs around the sky. Overall, the results obtained by the proposed method are superior to those of the other methods.

4.3.

Quantitative Measurement Results

A real-world quantitative analysis of image restoration is not easy to implement because no validated standard reference image is available. Therefore, to demonstrate the effectiveness of the proposed algorithm compared to other image dehazing methods, namely those of Tan,17 Fattal,16 Kopf et al.,13 Tarel and Hautière,19 He et al.,18 Nishino et al.,20 and Ancuti and Ancuti,24 this study employs two well-known quantitative metrics: the S-CIELAB assessment of Zhang and Wandell56 and the blind measure of Hautière et al.57

The S-CIELAB56 metric is used to estimate color fidelity in visual images because it incorporates the spatial color sensitivity of the eye and evaluates the color difference between the restored image and the original image, so it yields accurate predictions. The color contrast is proportional to the S-CIELAB value: a small S-CIELAB value corresponds to a small color contrast, and a large value to a large color contrast. Table 1 shows the estimation results of color contrast using the various methods.
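As a rough stand-in for this comparison, the sketch below computes a plain CIELAB ΔE averaged over the image; it omits the spatial (opponent-channel) filtering that distinguishes S-CIELAB and is shown only to make the color-difference computation concrete.

```python
# Simplified stand-in for the S-CIELAB comparison: mean CIELAB Delta-E
# between two images, without the spatial filtering of S-CIELAB.
import numpy as np
from skimage import color

def mean_delta_e(img_a, img_b):
    """img_a, img_b: (H, W, 3) RGB arrays in [0, 1]."""
    lab_a = color.rgb2lab(img_a)
    lab_b = color.rgb2lab(img_b)
    return np.linalg.norm(lab_a - lab_b, axis=2).mean()
```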

Table 1

Estimation results of color contrast using various methods.

Name                 ny12 (ΔE)   ny17 (ΔE)   y01 (ΔE)   y16 (ΔE)
Tan                  39,394      43,478      15,651     10,244
Fattal               20,993      15,997       2,683      5,591
Kopf et al.          10,096       3,853       5,891      4,012
Tarel et al.          1,315       1,299         499      3,066
He et al.             1,462       1,357         511      6,496
Nishino et al.        1,346       1,342       1,456      4,780
Ancuti and Ancuti     1,331       1,548       2,360      3,301
Proposed              1,289       1,268         308      2,296

The blind measure methodology57 calculates the ratio between the image gradients before and after restoration. The calculation is based on the concept of visibility level, which is commonly used in lighting engineering. Four images, named ny12, ny17, y01, and y16, are considered. Indicator $e$ represents the proportion of edges newly visible after restoration, and indicator $\bar{r}$ represents the mean ratio of the gradients at visible edges. The blind measure is calculated as follows:

Eq. (25)

$$e = \frac{n_r - n_o}{n_o},$$
where nr and no are the number of visible edges in the restored image and the original image, respectively

Eq. (26)

$$\bar{r} = \exp\!\left[\frac{1}{n_r}\sum_{P_i\in\wp_r}\log r_i\right],$$
where $\wp_r$ is the set of visible edges in the restored image, $P_i$ is the $i$'th element of the set $\wp_r$, and $r_i$ denotes the ratio of the gradients of the restored and original images at the $i$'th visible edge.
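A rough sketch of how $e$ and $\bar{r}$ can be computed is given below, using a Sobel gradient magnitude and a fixed visibility threshold as simple stand-ins for the edge-visibility detector of Ref. 57; the threshold value and the grayscale inputs are assumptions.

```python
# Hedged sketch of the blind measure of Eqs. (25)-(26).
import numpy as np
from scipy import ndimage

def blind_measure(original, restored, thresh=0.1):
    """original, restored: 2-D grayscale float arrays of the same size."""
    g_o = np.hypot(ndimage.sobel(original, 0), ndimage.sobel(original, 1))
    g_r = np.hypot(ndimage.sobel(restored, 0), ndimage.sobel(restored, 1))
    vis_o, vis_r = g_o > thresh, g_r > thresh     # "visible" edges (assumed test)
    n_o, n_r = vis_o.sum(), vis_r.sum()
    e = (n_r - n_o) / n_o                                # Eq. (25)
    ratio = g_r[vis_r] / np.maximum(g_o[vis_r], 1e-6)    # gradient ratios r_i
    r_bar = np.exp(np.log(ratio).mean())                 # Eq. (26)
    return e, r_bar
```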

Table 2 shows the performance of the different algorithms in terms of $e$ and $\bar{r}$. In Table 2, the proportion of edges newly visible after restoration (i.e., the $e$ value) obtained by the proposed method is larger than those of the other methods,13,16–18,20 whereas the $\bar{r}$ value of the proposed method is smaller than those of the Tan17 and Tarel and Hautière19 methods. However, comparison of the visual images shows that both of those methods (i.e., Refs. 17 and 19) exhibit oversaturation and excessive color contrast.

Table 2

Performance of different algorithms with e and r¯.

Name                 ny12          ny17          y01           y16
                     e      r̄      e      r̄      e      r̄      e      r̄
Tan                  0.14   2.34   0.06   2.22   0.08   2.28   0.08   2.08
Fattal               0.06   1.32   0.12   1.56   0.04   1.23   0.03   1.27
Kopf et al.          0.05   1.42   0.01   1.62   0.09   1.62   0.01   1.34
Tarel et al.         0.07   1.88   0.01   1.87   0.02   2.09   0.01   2.01
He et al.            0.06   1.42   0.01   1.65   0.08   1.33   0.06   1.42
Nishino et al.       0.01   1.81   0.07   1.79   0.11   1.79   0.01   1.29
Ancuti and Ancuti    0.02   1.49   0.12   1.54   0.07   1.19   0.18   1.46
Proposed             0.11   1.72   0.06   1.74   0.22   1.69   0.19   1.82

The computation time of the proposed method was also compared with that of other state-of-the-art techniques. For this comparison, test images with an average size of 600×800 pixels were used. The proposed method requires 4.5 s, whereas the method of Tan17 needs more than 45 s, Fattal16 requires 35 s, the technique of Tarel and Hautière19 needs 8 s, and He et al.18 requires 20 s. Therefore, the proposed method has the shortest computation time.

Based on the analysis and comparisons in Secs. 4.1–4.3, the proposed hybrid of the RFCMAC model and the weighted strategy efficiently addresses halo removal, color contrast enhancement, and computation time reduction.

5.

Conclusions

The hybrid of the RFCMAC model and the weighted strategy developed in this study effectively restores hazy and foggy images. The proposed RFCMAC model estimates the transmission map, and the average of the brightest 1% of pixels is accurately selected as the atmospheric light. An adaptive weighted strategy is applied to generate a refined transmission map that removes the halo effect. Experimental results demonstrate the superiority of the proposed method in enhancing color contrast, balancing color saturation, removing halo artifacts, and reducing computation time.

References

1. 

J. A. Stark, “Adaptive image contrast enhancement using generalizations of histogram equalization,” IEEE Trans. Image Process., 9 (5), 889 –896 (2000). http://dx.doi.org/10.1109/83.841534 IIPRE4 1057-7149 Google Scholar

2. 

Z. Rahman, D. J. Jobson and G. A. Woodell, “Retinex processing for automatic image enhancement,” J. Electron. Imaging, 13 (1), 100 –110 (2004). http://dx.doi.org/10.1117/1.1636183 Google Scholar

3. 

P. Scheunders, “A multivalued image wavelet representation based on multiscale fundamental forms,” IEEE Trans. Image Process., 11 (5), 568 –575 (2002). http://dx.doi.org/10.1109/TIP.2002.1006403 IIPRE4 1057-7149 Google Scholar

4. 

C. O. Ancuti et al., “A fast semi-inverse approach to detect and remove the haze from a single image,” in Proc. of the Asian Conf. on Computer Vision, 501 –514 (2010). Google Scholar

5. 

J. P. Oakley and B. L. Satherley, “Improving image quality in poor visibility conditions using a physical model for contrast degradation,” IEEE Trans. Image Process., 7 (2), 167 –179 (1998). http://dx.doi.org/10.1109/83.660994 IIPRE4 1057-7149 Google Scholar

6. 

K. K. Tan and J. P. Oakley, “Physics-based approach to color image enhancement in poor visibility conditions,” J. Opt. Soc. Am. A, 18 (10), 2460 –2467 (2001). http://dx.doi.org/10.1364/JOSAA.18.002460 Google Scholar

7. 

S. G. Narasimhan and S. K. Nayar, “Contrast restoration of weather degraded images,” IEEE Trans. Pattern Anal. Mach. Intell., 25 (6), 713 –724 (2003). http://dx.doi.org/10.1109/TPAMI.2003.1201821 ITPIDJ 0162-8828 Google Scholar

8. 

Y. Y. Schechner, S. G. Narasimhan and S. K. Nayar, “Polarization based vision through haze,” Appl. Opt., 42 (3), 511 –525 (2003). http://dx.doi.org/10.1364/AO.42.000511 APOPAI 0003-6935 Google Scholar

9. 

P. S. Pandian, M. Kumaravel and M. Singh, “Multilayer imaging and compositional analysis of human male breast by laser reflectometry and Monte Carlo simulation,” Med. Biol. Eng. Comput., 47 (11), 1197 –1206 (2009). http://dx.doi.org/10.1007/s11517-009-0531-3 MBECDY 0140-0118 Google Scholar

10. 

S. Shwartz, E. Namer and Y. Schechner, “Blind haze separation,” in 2006 IEEE Computer Society Conf. on Computer Vision and Pattern Recognition (CVPR ’06), 1984 –1991 (2006). http://dx.doi.org/10.1109/CVPR.2006.71 Google Scholar

11. 

Y. Schechner, S. Narasimhan and S. Nayar, “Instant dehazing of images using polarization,” in Proc. of the 2001 IEEE Computer Society Conf. on Computer Vision and Pattern Recognition (CVPR ’01), 325 –332 (2001). http://dx.doi.org/10.1109/CVPR.2001.990493 Google Scholar

12. 

N. Hautière, J. P. Tarel and D. Aubert, “Towards fog-free in-vehicle vision systems through contrast restoration,” in IEEE Conf. on Computer Vision and Pattern Recognition, 1 –8 (2007). http://dx.doi.org/10.1109/CVPR.2007.383259 Google Scholar

13. 

J. Kopf et al., “Deep photo: model-based photograph enhancement and viewing,” ACM Trans. Graph., 27 (5), 1 –10 (2008). http://dx.doi.org/10.1145/1409060 ATGRDF 0730-0301 Google Scholar

14. 

Y. Schechner and Y. Averbuch, “Regularized image recovery in scattering media,” IEEE Trans. Pattern Anal. Mach. Intell., 29 (9), 1655 –1660 (2007). http://dx.doi.org/10.1109/TPAMI.2007.1141 ITPIDJ 0162-8828 Google Scholar

15. 

E. Namer, S. Shwartz and Y. Schechner, “Skyless polarimetric calibration and visibility enhancement,” Opt. Express, 17 (2), 472 –493 (2009). http://dx.doi.org/10.1364/OE.17.000472 OPEXFF 1094-4087 Google Scholar

16. 

R. Fattal, “Single image dehazing,” ACM Trans. Graph., 27 (3), (2008). http://dx.doi.org/10.1145/1360612.1360671 Google Scholar

17. 

R. T. Tan, “Visibility in bad weather from a single image,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 1 –8 (2008). http://dx.doi.org/10.1109/CVPR.2008.4587643 Google Scholar

18. 

K. He, J. Sun and X. Tang, “Single image haze removal using dark channel prior,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 1956 –1963 (2009). http://dx.doi.org/10.1109/CVPR.2009.5206515 Google Scholar

19. 

J. P. Tarel and N. Hautiere, “Fast visibility restoration from a single color or gray level image,” in Proc. IEEE Int. Conf. Computer Vision, 2201 –2208 (2009). http://dx.doi.org/10.1109/ICCV.2009.5459251 Google Scholar

20. 

K. Nishino, L. Kratz and S. Lombardi, “Bayesian defogging,” Int. J. Comput. Vision, 98 (3), 263 –278 (2012). http://dx.doi.org/10.1007/s11263-011-0508-1 IJCVEQ 0920-5691 Google Scholar

21. 

A. Levin, D. Lischinski and Y. Weiss, “A closed form solution to natural image matting,” IEEE Trans. Pattern Anal. Mach. Intell., 30 (2), 228 –242 (2008). http://dx.doi.org/10.1109/TPAMI.2007.1177 ITPIDJ 0162-8828 Google Scholar

22. 

K. Gibson and T. Nguyen, “An analysis of single image defogging methods using a color ellipsoid framework,” EURASIP J. Image Video Process., 2013 (37), (2013). http://dx.doi.org/10.1186/1687-5281-2013-37 Google Scholar

23. 

R. Fattal, “Dehazing using color-lines,” ACM Trans. Graph., 34 (1), 13 (2014). http://dx.doi.org/10.1145/2651362 ATGRDF 0730-0301 Google Scholar

24. 

C. O. Ancuti and C. Ancuti, “Single image dehazing by multi-scale fusion,” IEEE Trans. Image Process., 22 (8), 3271 –3282 (2013). http://dx.doi.org/10.1109/TIP.2013.2262284 IIPRE4 1057-7149 Google Scholar

25. 

C. Xianzhong and K. G. Shin, “Direct control and coordination using neural networks,” IEEE Trans. Syst., Man, Cybern., 23 (3), 686 –697 (1993). http://dx.doi.org/10.1109/21.256542 ISYMAW 0018-9472 Google Scholar

26. 

S. Wu and K. Y. M. Wong, “Dynamic overload control for distributed call processors using the neural network method,” IEEE Trans. Neural Networks, 9 (6), 1377 –1387 (1998). http://dx.doi.org/10.1109/72.728389 ITNNEP 1045-9227 Google Scholar

27. 

T. Yamada and T. Yabuta, “Dynamic system identification using neural networks,” IEEE Trans. Syst., Man, Cybern., 23 (1), 204 –211 (1993). http://dx.doi.org/10.1109/21.214778 Google Scholar

28. 

S. Lu and T. Basar, “Robust nonlinear system identification using neural-network models,” IEEE Trans. Neural Networks, 9 (3), 407 –429 (1998). http://dx.doi.org/10.1109/72.668883 ITNNEP 1045-9227 Google Scholar

29. 

C.A. Perez et al., “Linear versus nonlinear neural modeling for 2-D pattern recognition,” IEEE Trans. Syst., Man, Cybern. A, 35 (6), 955 –964 (2005). http://dx.doi.org/10.1109/TSMCA.2005.851268 Google Scholar

30. 

T. H. Oong and N. A. M. Isa, “Adaptive evolutionary artificial neural networks for pattern classification,” IEEE Trans. Neural Networks, 22 (11), 1823 –1836 (2011). http://dx.doi.org/10.1109/TNN.2011.2169426 ITNNEP 1045-9227 Google Scholar

31. 

S. K. Nair and J. Moon, “Data storage channel equalization using neural networks,” IEEE Trans. Neural Networks, 8 (5), 1037 –1048 (1997). http://dx.doi.org/10.1109/72.623206 ITNNEP 1045-9227 Google Scholar

32. 

C. You and D. Hong, “Nonlinear blind equalization schemes using complex-valued multilayer feedforward neural networks,” IEEE Trans. Neural Networks, 9 (6), 1442 –1455 (1998). http://dx.doi.org/10.1109/72.728394 ITNNEP 1045-9227 Google Scholar

33. 

Y. S. Yang et al., “Automatic identification of human helminth eggs on microscopic fecal specimens using digital image processing and an artificial neural network,” IEEE Trans. Biomed. Eng., 48 (6), 718 –730 (2001). http://dx.doi.org/10.1109/10.923789 IEBEAX 0018-9294 Google Scholar

34. 

L. Ma and K. Khorasani, “Facial expression recognition using constructive feedforward neural networks,” IEEE Trans. Syst., Man, Cybern. B, 34 (3), 1588 –1595 (2004). http://dx.doi.org/10.1109/TSMCB.2004.825930 Google Scholar

35. 

J. S. Albus, “A new approach to manipulator control: the cerebellar model articulation controller (CMAC),” J. Dyn. Syst., Meas., Contr., 97 (3), 220 –227 (1975). http://dx.doi.org/10.1115/1.3426922 Google Scholar

36. 

J. S. Albus, “Data storage in the cerebellar model articulation controller (CMAC),” J. Dyn. Syst., Meas., Contr., 97 228 –233 (1975). http://dx.doi.org/10.1115/1.3426923 Google Scholar

37. 

Z. J. Lee, Y. P. Wang and S. F. Su, “A genetic algorithm based robust learning credit assignment cerebellar model articulation controller,” Appl. Soft Comput., 4 (4), 357 –367 (2004). http://dx.doi.org/10.1016/j.asoc.2004.01.007 Google Scholar

38. 

Y. G. Leu et al., “Compact cerebellar model articulation controller for ultrasonic motors,” Int. J. Innovative Comput., Inf. Control, 6 (12), 5539 –5552 (2010). Google Scholar

39. 

S. F. Su, T. Ted and T. H. Huang, “Credit assigned CMAC and its application to online learning robust controllers,” IEEE Trans. Syst., Man, Cybern., B, 33 (2), 202 –213 (2003). http://dx.doi.org/10.1109/TSMCB.2003.810447 Google Scholar

40. 

J. Wu and F. Pratt, “Self-organizing CMAC neural networks and adaptive dynamic control,” in Proc. of the 1999 IEEE Int. Symp. on Intelligent Control/Intelligent Systems and Semiotics, 259 –265 (1999). http://dx.doi.org/10.1109/ISIC.1999.796665 Google Scholar

41. 

S. Commuri and F. L. Lewis, “CMAC neural networks for control of nonlinear dynamical systems: structure, stability, and passivity,” Automatica, 33 (4), 635 –641 (1997). http://dx.doi.org/10.1016/S0005-1098(96)00180-X Google Scholar

42. 

K. S. Hwang and C. S. Lin, “Smooth trajectory tracking of three-link robot: a self-organizing CMAC approach,” IEEE Trans. Syst., Man, Cybern. B, 28 (5), 680 –692 (1998). http://dx.doi.org/10.1109/3477.718518 Google Scholar

43. 

H. M. Lee, C. M. Chen and Y. F. Lu, “A self-organizing HCMAC neural-network classifier,” IEEE Trans. Neural Networks, 14 (1), 15 –27 (2003). http://dx.doi.org/10.1109/TNN.2002.806607 ITNNEP 1045-9227 Google Scholar

44. 

C. C. Jou, “A fuzzy cerebellar model articulation controller,” in Proc. IEEE Int. Conf. Fuzzy System, 1171 –1178 (1992). http://dx.doi.org/10.1109/FUZZY.1992.258722 Google Scholar

45. 

S. H. Lane and J. Militzer, “A comparison of five algorithms for the training of CMAC memories for learning control systems,” Automatica, 28 (5), 1027 –1035 (1992). http://dx.doi.org/10.1016/0005-1098(92)90158-C ATCAA9 0005-1098 Google Scholar

46. 

C. S. Lin and C. K. Li, “A new neural network structure composed of small CMACs,” in Proc. IEEE Conf. Neural Systems, 1777 –1783 (1996). http://dx.doi.org/10.1109/ICNN.1996.549170 Google Scholar

47. 

D. S. Reay, “CMAC and B-spline neural networks applied to switched reluctance motor torque estimation and control,” in The 29th Annual Conf. of the IEEE Industrial Electronics Society, 323 –328 (2003). http://dx.doi.org/10.1109/IECON.2003.1280001 Google Scholar

48. 

S. Chen and D. Zhang, “Robust image segmentation using FCM with spatial constraints based on new kernel-induced distance measure,” IEEE Trans. Syst., Man, Cybern. B, 34 (4), 1907 –1916 (2004). http://dx.doi.org/10.1109/TSMCB.2004.831165 Google Scholar

49. 

S. F. Su, Z. J. Lee and Y. P. Wang, “Robust and fast learning for fuzzy cerebellar model articulation controllers,” IEEE Trans. Syst., Man, Cybern. B, 36 (1), 203 –208 (2006). http://dx.doi.org/10.1109/TSMCB.2005.855570 Google Scholar

50. 

T. F. Wu, P. S. Tsai and L. S. Wang, “Adaptive fuzzy CMAC control for a class of nonlinear systems with smooth compensation,” IEE Proc. Control Theory Appl., 153 (6), 647 –657 (2006). http://dx.doi.org/10.1049/ip-cta:20050362 Google Scholar

51. 

Y. F. Peng and C. M. Lin, “Intelligent hybrid control for uncertain nonlinear systems using a recurrent cerebellar model articulation controller,” IEE Proc. Control Theory Appl., 151 (5), 589 –600 (2004). http://dx.doi.org/10.1049/ip-cta:20040903 Google Scholar

52. 

J. B. Theocharis, “A high-order recurrent neuro-fuzzy system with internal dynamics: application to the adaptive noise cancellation,” Fuzzy Sets Syst., 157 (4), 471 –500 (2006). http://dx.doi.org/10.1016/j.fss.2005.07.008 FSSYD8 0165-0114 Google Scholar

53. 

D. G. Stavrakoudis and J. B. Theocharis, “A recurrent fuzzy neural network for adaptive speech prediction,” in Proc. IEEE Int. Conf. on Systems, Man and Cybernetics, 2056 –2061 (2007). http://dx.doi.org/10.1109/ICSMC.2007.4414191 Google Scholar

54. 

H. Koschmieder, “Theorie der horizontalen sichtweite,” Beitrage zur Physik der Freien Atmosphare, Keim & Nemnich, Munich, Germany (1924). Google Scholar

55. 

E. J. McCartney, Optics of the Atmosphere: Scattering by Molecules and Particles, Wiley, New York, NY (1976). Google Scholar

56. 

X. Zhang and B. A. Wandell, “Color image fidelity metrics evaluated using image distortion maps,” Signal Process., 70 (3), 201 –214 (1998). http://dx.doi.org/10.1016/S0165-1684(98)00125-X SPRODR 0165-1684 Google Scholar

57. 

N. Hautiere et al., “Blind contrast restoration assessment by gradient ratioing at visible edges,” Image Anal. Stereol., 27 (2), 87 –95 (2008). http://dx.doi.org/10.5566/ias.v27.p87-95 Google Scholar

Biography

Jyun-Guo Wang received his MS degree in computer science and information engineering from Chaoyang University of Technology, Taichung, Taiwan, in 2007. He is currently a PhD candidate in the Institute of Computer and Communication Engineering, Department of Electrical Engineering in National Cheng Kung University. His research interests are in the areas of neural networks, fuzzy systems, and image processing.

Shen-Chuan Tai received his BS and MS degrees in electrical engineering from the National Taiwan University, Taipei, Taiwan, in 1982 and 1986, respectively, and his PhD in computer science from the National Tsing Hua University, Hsinchu, Taiwan, in 1989. He is currently a professor in the Department of Electrical Engineering, National Cheng Kung University, Tainan, Taiwan. His teaching and research interests include data compression, DSP, VLSI array processors, computerized electrocardiogram processing, and multimedia systems.

Cheng-Jian Lin received his PhD in electrical and control engineering from the National Chiao-Tung University, Hsinchu, Taiwan, in 1996. Currently, he is a distinguished professor in the Department of Computer Science and Information Engineering, National Chin-Yi University of Technology, Taichung, Taiwan. His current research interests include soft computing, pattern recognition, intelligent control, image processing, bioinformatics, and Android/iPhone program design.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Jyun-Guo Wang, Shen-Chuan Tai, and Cheng-Jian Lin "Transmission map estimation of weather-degraded images using a hybrid of recurrent fuzzy cerebellar model articulation controller and weighted strategy," Optical Engineering 55(8), 083104 (12 August 2016). https://doi.org/10.1117/1.OE.55.8.083104