Review of passive polarimetric dehazing methods
Wenfei Zhang, Jian Liang, Guomei Wang, Huanian Zhang, Shenggui Fu
Abstract

The images acquired in haze conditions are significantly degraded by atmospheric particles: they suffer from low contrast and poor visibility, and some information is lost. These characteristics severely hinder further processing and the applications in which the images are used. Polarimetric dehazing methods are effective for direct imaging and for enhancing imaging quality, and they are also capable of coping with haze and other turbid media. Because of this superior performance, polarimetric dehazing methods have been extensively developed and applied. We present in detail the principles, the implementation techniques, and the advancements of the polarimetric dehazing methods. We believe that this work is the first in-depth review of passive polarimetric dehazing methods.

1. Introduction

Haze is a common atmospheric phenomenon, especially in industrial areas. Haze not only poses a threat to human health but also degrades the performance of many technological systems, such as optical monitoring systems and remote sensing applications. Images acquired in such conditions are seriously degraded and have low contrast, which leads to information loss and thus to the poor performance of outdoor optical devices.1,2 Therefore, it is critical to develop techniques and methods that can enhance the quality of outdoor images acquired for various applications. Various methods have been proposed in the recent literature for enhancing the quality of images acquired in hazy environments. In general, according to the number of input images, image dehazing methods can be classified into single-image dehazing methods and multiple-image dehazing methods. The classification of the dehazing methods is shown in Fig. 1.

Fig. 1

The classification of image dehazing methods.


The single image dehazing methods need only a single input hazy image. These methods can be further divided into two categories according to whether a physical model is used. The first category comprises image enhancement methods that use no model,3–19 and the second comprises image restoration methods based on a physical imaging degradation model in turbid media.20–33 The image enhancement methods aim at highlighting the targets of interest and improving the contrast regardless of the cause of the image quality degradation; examples include histogram equalization,3–6 retinex,7–9 wavelet transform,10–12 homomorphic filtering,13 and so on. These techniques are based on simple algorithms and are highly efficient, which makes them suitable for many applications. However, the dehazing capacity of image enhancement methods is intrinsically limited, and these methods are only suitable for thin and homogeneous haze. The image restoration methods, in contrast, are based on specific physical imaging degradation models. These methods estimate the elements that degrade the image quality, including the atmospheric light and the airlight, and the dehazed image is obtained by inverting the physical model. The image restoration methods outperform the image enhancement methods in terms of dehazing capacity.34 However, the image restoration methods are difficult to solve because of the many parameters involved, so some assumptions or prior knowledge are required. It is notable that the image restoration methods are computationally more complex than image enhancement algorithms and are therefore used in fewer applications, especially real-time applications. Fattal,20 Tarel,21 Tan,22 dark channel prior (DCP)-based,23–25 Meng,26 Bayesian dehazing,27,28 Lu,29 and deep learning-based32,33 methods are the most commonly discussed image restoration methods. Deep learning-based dehazing methods have been intensively developed in the last 5 years. Their main idea is to use a convolutional neural network to learn from hazy images and then estimate the parameters of the traditional physical imaging degradation model to obtain the dehazed image. Deep learning-based dehazing methods usually need plentiful clear and synthesized hazy image pairs for training, which is a complex task.

The multiple-image dehazing methods require more than one input image of the same scene. To date, there are three kinds of multiple-image dehazing methods: those using images obtained under different visibility,2,35–38 those using images obtained with visible and near-infrared cameras,39,40 and those using images obtained with different polarizations.41–66 The depth discontinuities and the scene structure can be estimated from the intensity changes between images of the same scene captured under different visibility, and the image contrast can then be enhanced with the estimated scene structure. This approach is only suitable for static scenes because it is difficult to simultaneously acquire images of the same scene under different visibility for dynamic scenes. The visible and near-infrared fusion dehazing methods rely on the fact that near-infrared light propagates a greater distance than visible light due to lower scattering in turbid media. An image of good quality can be obtained by combining the color information of the visible image with the high visibility of the near-infrared image. However, the major obstacle is the simultaneous acquisition of the visible and near-infrared images; the efficiency of the fusion algorithm is another challenge. The polarization-based dehazing methods, commonly known as polarimetric dehazing methods, are designed on the basis of the fact that the airlight is partially polarized. The airlight radiance is estimated from multiple polarization images of the same scene to obtain the dehazed image. The polarimetric dehazing methods show high information restoration capacity along with computational efficiency. With the assistance of polarimeters, the polarimetric dehazing methods have been widely developed and achieve good results. In this work, we present what we believe is the first in-depth review of the passive polarimetric dehazing methods. The major focus is on the basic principles, the implementation techniques, and the progress in the development of these methods. The subjective evaluation of dehazed image quality, which is another wide research topic, is not included in this work and is out of the scope of this paper.

2. Imaging Model in Haze

Imaging is a process in which the detector records the intensity and wavelength information of the light emitted from a source or reflected from target surfaces. In clear conditions, the emitted or reflected light reaches the detector without scattering or attenuation, and the resultant image contains the detailed information of the target objects. In haze, however, the imaging process is degraded by haze particles that scatter the light. The well-known imaging degradation model used in the dehazing field is shown in Fig. 2.1,2

Fig. 2

The schematic of the imaging degradation model in haze.1,2 Direct transmission is the object light after attenuation through haze. Airlight is the scattered atmospheric light.1


According to the imaging degradation model, the image formation is described as

Eq. (1)

I(x,y)=D(x,y)+A(x,y),
where I denotes the total radiance reaching the detector, and it is the sum of the direct transmission D and the airlight A. (x,y) denotes the pixel coordinates. The direct transmission D is the attenuated object light L:

Eq. (2)

$$D(x,y) = L(x,y)\,t(x,y),$$
where t represents the transmittance of the atmosphere, which is associated with the distance between the scene and the camera, i.e., the fraction of the object light that reaches the detector. Note that the wavelength dependence of the transmittance is not considered here. The object light L is the desired result of dehazing. The airlight is the scattered atmospheric light:

Eq. (3)

$$A(x,y) = A_{\infty}\left[1 - t(x,y)\right],$$
where $A_{\infty}$ represents the airlight radiance corresponding to an object at infinite distance. Note that it is a global constant.

Due to scattering and attenuation, the light from distant target objects is unable to reach the detector, resulting in poor visibility. Meanwhile, the airlight blends into the direct transmission and dominates the target signal, leading to low contrast. Therefore, the object light L, i.e., the dehazed image, can be obtained provided that the airlight radiance is estimated and eliminated accurately and the attenuation of the object light is compensated. By combining Eqs. (1)–(3), we obtain the final dehazed image L as

Eq. (4)

$$L(x,y) = \frac{I(x,y) - A(x,y)}{1 - A(x,y)/A_{\infty}}.$$

Since $A_{\infty}$ is a global constant, it can be estimated easily. The key difference among the dehazing methods is the way they estimate the airlight radiance. According to Mie scattering theory, the airlight scattered by atmospheric particles is partially linearly polarized.42 The polarimetric dehazing methods therefore estimate the airlight radiance using multiple polarization images and focus on improving the estimation accuracy of the airlight to achieve efficient dehazing results.
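To make the inversion in Eq. (4) concrete, the following Python sketch recovers L once the airlight map A and the global constant A∞ have been estimated. The array names, the eps guard, and the clipping are implementation details of this illustration, not part of the model itself.

```python
import numpy as np

def invert_haze_model(I, A, A_inf, eps=1e-6):
    """Recover the object radiance L from Eq. (4): L = (I - A) / (1 - A / A_inf).

    I     : observed hazy image (H x W or H x W x 3), float array
    A     : estimated airlight map, same shape as I
    A_inf : airlight radiance at infinity (scalar or per-channel value)
    eps   : small constant to keep the division stable (implementation detail)
    """
    t = 1.0 - A / A_inf           # transmittance implied by Eq. (3)
    t = np.maximum(t, eps)        # guard against zero transmittance at far range
    L = (I - A) / t               # direct inversion of Eqs. (1)-(3)
    return np.clip(L, 0.0, None)  # radiance cannot be negative
```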

3. Passive Polarimetric Dehazing Methods

3.1. Polarimetric Dehazing Methods Based on Polarized-Difference Imaging

The polarimetric dehazing method was first proposed by Schechner et al.,42 who reported the details and experiments of the method in 2003.43 In this method, the authors adopt three approximations that are widely used in later methods. First, only the degradation caused by the attenuation of the signal and the additive airlight is considered. Second, only single scattering is considered. Finally, the object light is assumed to be unpolarized. On the basis of these approximations, the method proposed in Ref. 42 is designed around polarized-difference (PD) imaging, i.e., two images with orthogonal polarization states. When a polarizer is mounted in front of a camera, the total radiance that the camera receives fluctuates with the orientation of the polarizer, as a result of the partially polarized airlight. This is shown in Fig. 3. The images with the maximum and the minimum radiance correspond to the "worst state" and the "best state" and are denoted as $I_{\max}$ and $I_{\min}$, respectively. Typical "worst state" and "best state" images are shown in Fig. 4; the contrast of the two images clearly differs. However, the polarizer orientations that produce these two states must first be determined, which requires special methods such as subjective judgment,42,43 an electrically switchable polarizer based on a liquid crystal device,46 and Stokes vector deduction.49,67,68

Fig. 3

The relationship between the image radiance and the rotation angle α of the polarizer. The measured minimum intensity $I_{\min}$ and maximum intensity $I_{\max}$ are functions of α. The difference between them is due to the difference between the airlight components $A_{\max}$ and $A_{\min}$ and is related to the unknown airlight intensity through the parameter $p_A$. The total intensity $I_{\mathrm{total}}$ is composed of the airlight and the direct transmission.42


Fig. 4

Images of the polarization components corresponding to the minimal and the maximal radiances. The “best state” image has the best image contrast and the “worst state” image has the worst image contrast.43


We can derive the required parameters from the two polarized images. The degree of polarization (DoP) of the airlight, defined as $p_A$, can be obtained as

Eq. (5)

$$p_A = \frac{A_{\max} - A_{\min}}{A_{\max} + A_{\min}} = \frac{I_{\max}(\mathrm{sky}) - I_{\min}(\mathrm{sky})}{I_{\max}(\mathrm{sky}) + I_{\min}(\mathrm{sky})},$$
where $I(\mathrm{sky})$ denotes the pixels in a sky area of the image that contains no objects; in this area, I = A. Thus, the airlight radiance for the whole image is expressed as

Eq. (6)

$$A = A_{\max} + A_{\min} = \frac{A_{\max} - A_{\min}}{p_A} = \frac{I_{\max} - I_{\min}}{p_A}.$$

The global constant $A_{\infty}$ is obtained from the pixels in the sky area as

Eq. (7)

$$A_{\infty} = \frac{1}{2}\left[I_{\max}(\mathrm{sky}) + I_{\min}(\mathrm{sky})\right].$$

After obtaining these two essential parameters, we obtain the dehazed image shown in Fig. 5. Meanwhile, the method yields a byproduct, the range map, which indicates the distance ordering of the objects in the scene under the assumption that the extinction coefficient is distance invariant. The range map is defined as

Eq. (8)

$$\beta z = -\ln\!\left[1 - A/A_{\infty}\right].$$

Fig. 5

Dehazing results for Fig. 4. The dehazed image shows much better contrast and color than the hazy images.43


The range map of Fig. 4 is shown in Fig. 6. It is notable that this map is qualitatively consistent with the scene.
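The whole PD-based pipeline of Eqs. (5)–(8) can be summarized in a short sketch. This is a minimal illustration under the three approximations stated above; it assumes two registered floating-point images and a user-supplied sky mask, and it simply takes the airlight at infinity as the mean total sky radiance (using the fact that I = A in the sky area). The variable names and the eps guards are assumptions of this sketch.

```python
import numpy as np

def pd_dehaze(I_max, I_min, sky_mask, eps=1e-6):
    """Minimal sketch of PD-based dehazing following Eqs. (5)-(8).

    I_max, I_min : "worst state" and "best state" images (float arrays)
    sky_mask     : boolean mask selecting an object-free sky area
    Returns the dehazed image L and the relative range map beta*z.
    """
    I_total = I_max + I_min                       # total radiance I

    # Eq. (5): DoP of the airlight, estimated over the sky area
    p_A = (np.mean(I_max[sky_mask]) - np.mean(I_min[sky_mask])) / \
          (np.mean(I_max[sky_mask]) + np.mean(I_min[sky_mask]))

    # Eq. (6): airlight radiance for every pixel
    A = (I_max - I_min) / max(p_A, eps)

    # Airlight at infinity: in the sky area I = A, so A_inf is taken here
    # as the mean total sky radiance (a practical choice for this sketch).
    A_inf = np.mean(I_total[sky_mask])

    # Eq. (4): recover the object radiance
    t = np.maximum(1.0 - A / A_inf, eps)
    L = (I_total - A) / t

    # Eq. (8): relative range map, up to the unknown extinction coefficient
    beta_z = -np.log(np.clip(1.0 - A / A_inf, eps, 1.0))
    return L, beta_z
```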

Fig. 6

Range map of the scene for Fig. 4. The farther the object, the darker the shading.43


The polarimetric dehazing methods achieve efficient results. However, some technical details still require attention, such as the selection of the sky area and the processing of specular objects. Namer et al. concluded that the sky area close to the horizon is more reliable and proposed an automatic sky detector.46 The sky area detected by the proposed method is shown in Fig. 7(a), and Fig. 7(b) presents the dehazed image. The cases in which the images do not contain sky in the field of view are discussed in detail.47,69,70 Furthermore, the cases where the third approximation is not satisfied have also been analyzed in depth. Dielectric objects, such as water bodies and shiny construction materials, reflect significantly polarized light toward the camera. Such pixels are overcompensated and turn into strangely colored or dark pixels. Namer et al. consider that adjacent objects should show similar airlight values; once the algorithm automatically detects areas of the airlight image that are very different from their surroundings, the airlight there is re-estimated by simple interpolation of the airlight of the surroundings. Figure 8(a) shows specular objects in a hazy image. The basic method produces black spots in these areas, as shown in Fig. 8(b), whereas the improved method successfully recovers the colors in these areas, as shown in Fig. 8(c).46 Fang et al. proposed a decorrelation-based scheme to estimate the DoP of the target objects; Fig. 9 shows the results before and after the consideration of the polarization effect of the objects.57 Huang et al. proposed a method to estimate the PD image of the signal and to search for the dehazing result with the best quality; the improved dehazing results are shown in Fig. 10.71 In addition, various works discuss the effectiveness of the polarimetric dehazing methods for underwater image quality enhancement.55,56,72–78 As shown in Fig. 3, the intensity varies slowly near its extrema, so the two polarized images are similar and difficult to distinguish. Thus, the main difficulty of the PD-based method is the acquisition of the "best state" and "worst state" images.

Fig. 7

(a) The best-state polarized image of a hazy scene. The automatically detected sky is marked by a white line; (b) result of dehazing for scene (a), relying on the automatically selected sky line.46


Fig. 8

Corrected dehazing result when specular objects exist in the field of view. (a) The best polarized hazy image with specular objects; (b) dehazing result without re-estimating the airlight, in which the color of the specular-object areas is distorted to black; (c) dehazing result with re-estimated airlight, in which the colors are well recovered compared with (b).46


Fig. 9

(a) Dehazing results using Schechner’s method in Ref. 42. (b) Dehazing result with consideration of the polarization effect of the objects using the proposed method in Ref. 57.


Fig. 10

The dehazing results (a) before and (b) after considering the polarization effect of underwater objects.71


3.2. Polarimetric Dehazing Methods Based on Stokes Vector

The PD imaging-based polarimetric dehazing methods easily obtain the DoP of the airlight and achieve effective dehazing results. The Stokes vector-based polarimetric dehazing methods further obtain the angle of polarization (AoP) of the airlight, which makes it possible to further improve the estimation accuracy of the airlight. For this reason, the Stokes vector-based polarimetric dehazing methods have attracted considerable attention.50,51,58–60,79 The Stokes vector comprises four parameters and represents the polarization property of light on the basis of intensity information, which makes the representation and detection of polarized light much easier.80 In the dehazing field, the airlight is generally considered in terms of linear polarization only, i.e., only the first three parameters. In order to obtain the linear Stokes vector, three (0 deg, 60 deg, and 120 deg)79 or four (0 deg, 45 deg, 90 deg, and 135 deg)50,51 images with different polarization orientations are required. Liang et al. adopted the four images captured by the polarizer at orientations of 0 deg, 45 deg, 90 deg, and 135 deg for extracting the DoP and AoP of the airlight.50,51 The intensities of the four images are I(0), I(45), I(90), and I(135), as shown in Fig. 11. The linear Stokes vector is

Eq. (9)

$$S_0 = I(0) + I(90), \qquad S_1 = I(0) - I(90), \qquad S_2 = I(45) - I(135),$$
where $S_0$ denotes the total radiance, i.e., I; $S_1$ denotes the intensity difference between the horizontal and vertical polarized components; and $S_2$ denotes the intensity difference between the 45 deg and 135 deg polarized components with respect to the x axis.

Fig. 11

The hazy images with the polarizer at orientations of (a) 0 deg, (b) 45 deg, (c) 90 deg, and (d) 135 deg, respectively.51


According to the definition of the Stokes vector, the DoP and AoP are obtained as

Eq. (10)

$$p = \frac{\sqrt{S_1^2 + S_2^2}}{S_0}, \qquad \theta = \frac{1}{2}\arctan\!\left(\frac{S_2}{S_1}\right).$$

Based on the discussion in Sec. 3.1, it is evident that $S_0$ includes the direct transmission, whereas $S_1$ and $S_2$ do not. Thus, the DoP is influenced by the direct transmission, whereas the AoP is not. Consequently, the estimation of the airlight is more accurate using the AoP than the DoP. The AoP value that occurs with the highest frequency in the whole image is defined as the AoP of the airlight ($\theta_A$), and the DoP of the airlight ($p_A$) is then defined as the maximum DoP among the pixels whose AoP equals $\theta_A$. For simplicity, the directions of 0 deg and 90 deg are defined as the x and y axes, respectively, so that $\theta_A$ represents the angle between the polarization orientation of the airlight and the x axis. Let $A_p$ denote the polarized radiance of the airlight. The polarized radiance of the airlight in the x and y directions is then expressed as $A_{px} = A_p\cos^2\theta_A$ and $A_{py} = A_p\sin^2\theta_A$. Considering that $A_{px}$ and $A_{py}$ can also be expressed as

Eq. (11)

$$A_{px} = I(0) - S_0\,(1 - p)/2, \qquad A_{py} = I(90) - S_0\,(1 - p)/2,$$
we establish the following relation:

Eq. (12)

$$A_p = \frac{I(0) - S_0\,(1 - p)/2}{\cos^2\theta_A} = \frac{I(90) - S_0\,(1 - p)/2}{\sin^2\theta_A}.$$

Now, A is easily obtained as $A = A_p / p_A$.
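A minimal sketch of the airlight estimation in Eqs. (9)–(12) is given below. It assumes four registered grayscale polarization images as NumPy float arrays; the histogram-based mode of the AoP, the bin count, and the choice between the two equivalent expressions in Eq. (12) are implementation choices of this sketch rather than prescriptions of Refs. 50 and 51. The returned S0 and A can then be inserted into Eq. (4).

```python
import numpy as np

def stokes_airlight(I0, I45, I90, I135, n_bins=180, eps=1e-6):
    """Sketch of airlight estimation from four polarization images, Eqs. (9)-(12)."""
    # Eq. (9): linear Stokes parameters
    S0 = I0 + I90
    S1 = I0 - I90
    S2 = I45 - I135

    # Eq. (10): per-pixel DoP and AoP (arctan2 is used here for quadrant safety)
    p = np.sqrt(S1 ** 2 + S2 ** 2) / np.maximum(S0, eps)
    theta = 0.5 * np.arctan2(S2, S1)

    # AoP of the airlight: the most frequent AoP value over the whole image
    hist, edges = np.histogram(theta, bins=n_bins)
    k = np.argmax(hist)
    theta_A = 0.5 * (edges[k] + edges[k + 1])

    # DoP of the airlight: maximum DoP among pixels whose AoP falls in that bin
    in_bin = (theta >= edges[k]) & (theta < edges[k + 1])
    p_A = np.max(p[in_bin])

    # Eq. (12): polarized airlight radiance; pick the better-conditioned projection
    cos2_tA = np.cos(theta_A) ** 2
    sin2_tA = np.sin(theta_A) ** 2
    if cos2_tA >= sin2_tA:
        A_p = (I0 - S0 * (1.0 - p) / 2.0) / cos2_tA
    else:
        A_p = (I90 - S0 * (1.0 - p) / 2.0) / sin2_tA

    A = A_p / p_A   # total airlight radiance, A = A_p / p_A
    return S0, A, p_A, theta_A
```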

To cope with the sky area, the authors proposed a new method for the estimation of $A_{\infty}$; the proposed method is effective and accurate for almost all conditions. The final dehazing result is shown in Fig. 12.

Fig. 12

Dehazing result obtained with the proposed method in Ref. 51.


Liang et al. also proposed a scheme for improving the performance of the polarimetric dehazing methods in dense haze conditions.81 The quantization error of the camera is inferred to be a major problem in dense haze, and the noise is significantly amplified as a result. In order to eliminate the influence of quantization noise, a local average filter is applied within a small patch. Figure 13 shows the AoP distribution before and after the optimization; the results are clearly more accurate after optimization. Figure 14 compares the dehazing results of the optimized method and the basic method,51 showing that the dehazing capacity of the optimized polarimetric dehazing method is effectively improved. In Ref. 82, the authors compared the experimental performance of polarimetric dehazing based on three random angles with that based on two orthogonal angles. The experimental results show that the three-random-angle method outperforms the two-orthogonal-angle method. Moreover, the former does not require accurate angles, which makes it more feasible for other applications.
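The noise-suppression step amounts to averaging each raw polarization image over a small neighborhood before the Stokes parameters and the AoP are computed. A minimal sketch, with an assumed window size, is:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def smooth_raw_images(images, size=5):
    """Local average filtering of each raw polarization image before computing
    the Stokes parameters and the AoP, as in the dense-haze scheme of Ref. 81.
    The 5 x 5 window is an illustrative choice, not the value used in Ref. 81."""
    return [uniform_filter(np.asarray(im, dtype=float), size=size) for im in images]
```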

Fig. 13

(a) The AoP value of the incident light before optimization. (b) The AoP value of the incident light after smoothing each pixel value of the four raw images.81


Fig. 14

Dehazing results using (a) the optimized polarimetric dehazing method in Ref. 81 and (b) the basic polarimetric dehazing method in Ref. 51.


Fig. 15

Dehazing results. (a) Hazy image; (b) dehazing results with the proposed method; (c) magnified region on the green rectangle A in (a); (d) magnified region on the green rectangle B in (a); (e) magnified region on the green rectangle A in (b); and (f) magnified region on the green rectangle B in (b).68


3.3. Polarimetric Dehazing Methods Incorporating Digital Image Processing

Image enhancement is a basic and effective technique in computer vision, and various image enhancement algorithms can be incorporated into the polarimetric dehazing method to improve its dehazing capacity. Liu et al. observed that, in hazy images, the objects and the haze differ in their spatial frequency distributions: the low and high spatial frequency components mainly reflect the haze and the objects, respectively. In their method, the hazy image is decomposed into different spatial frequency layers using the wavelet transform; the low spatial frequency components are processed with the polarimetric dehazing method, and the high spatial frequency components are then manipulated with a nonlinear transform.64,83 The corresponding dehazing results are shown in Fig. 15. Other works in the literature have also combined digital image processing algorithms with polarimetric dehazing methods.84–87 The main obstacle for this approach is the algorithmic complexity, and more effort is needed for real-time applications.
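A rough sketch of this frequency-segregation idea is shown below. A Gaussian low-pass filter stands in for the wavelet or low-pass decomposition of Refs. 64, 68, and 83, and the sigma and gamma values are illustrative assumptions, not the parameters used in those works.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def frequency_segregated_dehaze(S0, A, A_inf, sigma=15.0, gamma=0.8, eps=1e-6):
    """Sketch of dehazing with spatial frequency segregation.

    The total image S0 is split into a low-frequency layer, where the haze
    mainly resides, and a high-frequency detail layer. Only the low-frequency
    layer is passed through the physical dehazing of Eq. (4); the detail layer
    is boosted with a simple nonlinear transform and added back.
    """
    low = gaussian_filter(S0, sigma)        # haze-dominated component
    high = S0 - low                         # object details

    t = np.maximum(1.0 - A / A_inf, eps)    # the airlight is itself a low-frequency quantity
    low_dehazed = (low - A) / t             # Eq. (4) applied to the low-frequency layer

    high_boosted = np.sign(high) * np.abs(high) ** gamma   # nonlinear detail enhancement
    return np.clip(low_dehazed + high_boosted, 0.0, None)
```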

3.4. Polarimetric Dehazing Methods Based on Visible and Infrared Image Fusion

In general, the haze (including mist) particle size is mostly <1  μm, for which light propagation obeys Mie scattering theory. Near-infrared light propagates a greater distance than visible light due to lower scattering, so an image of good quality can be obtained by combining the color information of the visible image with the high visibility of the near-infrared image. The quality of the near-infrared hazy image can also be improved by processing it with the polarimetric dehazing method.49 Liang et al. therefore proposed a polarimetric dehazing method that fuses the visible dehazed image and the near-infrared dehazed image to further improve the visibility.88 The dehazing process for a color hazy image consists of two steps: (1) applying the basic polarimetric dehazing method to the visible and near-infrared hazy images separately; and (2) combining the visible and near-infrared dehazed images into the final dehazed image using a fusion algorithm. Figure 16 shows the original groups of visible and near-infrared hazy images. As the figure shows, the near-infrared image inherently contains more information and better visibility than the visible image, which is consistent with the discussions presented in Refs. 39 and 40.

Fig. 16

Original groups of (a) visible hazy images and (b) near-infrared hazy images.88


The visible and near-infrared dehazed images processed by the basic polarimetric dehazing method51 are presented in Fig. 17. The contrast of both dehazed images is enhanced significantly compared with the images in Figs. 16(a) and 16(b). In addition, the image obtained by fusing the visible and near-infrared dehazed images is further enhanced, and the color information is also restored, as shown in Fig. 18(b). In contrast, the image obtained by directly fusing the visible and near-infrared hazy images is shown in Fig. 18(a); this can be roughly regarded as the visible and near-infrared fusion method.50,51 Directly fusing the hazy images is unable to enhance the image quality in dense haze conditions. The experimental results demonstrate that the visibility of the final dehazed image is improved by at least 100%. Two shortcomings of the proposed method limit its further application in the dehazing field. First, the visible and near-infrared images of the same scene must be acquired simultaneously, which usually requires additional image registration to make them consistent. Second, the computational complexity is high: processing an image of 100 megapixels requires several hours.
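A generic illustration of the fusion step is sketched below; it borrows luminance from the near-infrared dehazed image while keeping the chrominance of the visible dehazed image. The blending weight alpha and the simple luminance model are assumptions of this sketch and do not reproduce the specific fusion algorithm of Ref. 88.

```python
import numpy as np

def fuse_visible_nir(vis_rgb, nir, alpha=0.5):
    """Illustrative fusion of a dehazed visible image and a dehazed near-infrared image.

    vis_rgb : H x W x 3 dehazed visible image in [0, 1]
    nir     : H x W dehazed near-infrared image in [0, 1]
    alpha   : weight of the NIR luminance (assumed value)
    """
    lum = vis_rgb.mean(axis=2)                       # simple luminance of the visible image
    fused_lum = (1.0 - alpha) * lum + alpha * nir    # borrow visibility from the NIR image
    ratio = fused_lum / np.maximum(lum, 1e-6)
    return np.clip(vis_rgb * ratio[..., None], 0.0, 1.0)  # keep the visible chrominance
```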

Fig. 17

Dehazed images of Fig. 16 by polarimetric dehazing method. (a) Dehazed visible image with color; (b) dehazed near-infrared image.88


Fig. 18

(a) Fused image of visible and near-infrared hazy image. (b) Fused image of visible and near-infrared dehazed image, i.e., the final dehazing result.88


3.5. Fast Polarimetric Dehazing Method

Many practical applications, such as traffic monitoring and navigation systems, require real-time processing. Besides the image acquisition system, the efficiency of the polarimetric dehazing algorithm itself is important for real-time image dehazing. Zhang et al. proposed a fast polarimetric dehazing method in HSI color space with color correction.89 In HSI color space, the intensity channel is related only to the RGB intensities,90 so the polarimetric dehazing process is implemented only once, in the intensity channel. The color distortions, which result from the wavelength dependence of the scattering coefficients, are corrected by the white-patch retinex method. The overall flowchart of the proposed method is shown in Fig. 19. The experiments indicate that the quality of the dehazed image obtained using the proposed method is similar to that of the polarimetric dehazing method in RGB color space. Table 1 shows the execution times of the different methods; the proposed method even outperforms Tarel's method, which was itself developed as a fast dehazing method, in terms of computational efficiency.89
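The idea of dehazing only the intensity channel and then correcting the colors can be sketched as follows. The sketch assumes the color image is normalized to [0, 1] and that the airlight map and A∞ have already been estimated for the intensity channel; replacing the HSI intensity while keeping hue and saturation is realized here by scaling the RGB channels with the intensity ratio, and the white-patch step is a simple global normalization.

```python
import numpy as np

def hsi_fast_dehaze(rgb, A, A_inf, eps=1e-6):
    """Sketch of the fast HSI-space scheme: dehaze the intensity channel once,
    keep hue/saturation, then apply white-patch retinex for color correction.

    rgb      : H x W x 3 hazy color image, assumed normalized to [0, 1]
    A, A_inf : airlight map and airlight at infinity for the intensity channel
    """
    I = rgb.mean(axis=2)                           # intensity channel of the HSI model
    t = np.maximum(1.0 - A / A_inf, eps)
    I_dehazed = np.clip((I - A) / t, eps, None)    # Eq. (4), applied once

    # Keeping hue and saturation fixed while replacing I amounts to scaling
    # each RGB channel by the intensity ratio (uniform scaling preserves H and S).
    out = rgb * (I_dehazed / np.maximum(I, eps))[..., None]

    # White-patch retinex: normalize each channel by its brightest value
    white = out.reshape(-1, 3).max(axis=0)
    return np.clip(out / np.maximum(white, eps), 0.0, 1.0)
```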

Fig. 19

The overall flowchart of the proposed method.89


Table 1

The processing times of the different methods.89

Image size (h×w) | Meng | MSRCR | Tarel | HE | RGB PDM | Work in Ref. 89
727×1150 | 10.78 s | 7.77 s | 35.48 s | 1.24 s | 86.27 s | 30.99 s
950×1300 | 11.52 s | 11.02 s | 68.85 s | 1.27 s | 140.31 s | 52.91 s
970×1300 | 11.57 s | 11.53 s | 69.71 s | 1.36 s | 142.53 s | 52.96 s
690×1180 | 8.29 s | 6.90 s | 38.98 s | 1.11 s | 88.12 s | 31.27 s

3.6. Real-Time Polarimetric Dehazing Method Based on Imaging Polarimeter

For true real-time image haze removal, both real-time image acquisition and real-time processing must be guaranteed. For the polarimetric dehazing method, the simultaneous acquisition of the polarization images relies on an imaging polarimeter, a device with an optimally designed structure that can obtain four different linear polarization images simultaneously (linear-Stokes polarimeter) or three linear polarization images and one circular polarization image (full-Stokes polarimeter).91 Polarimeters are based on different technological approaches, such as division of time,92–95 division of amplitude,96 division of aperture,60,97 division of focal plane,98–100 and Fourier-based designs.101–104 Imaging polarimeters have been thoroughly reviewed in Ref. 91, so here we focus only on polarimeters related to image dehazing. Mudge et al. presented real-time dehazing results based on a division-of-amplitude near-infrared polarimeter.49,96 The optical construction is shown in Fig. 20(a); the four channels are focused onto one sensor with a resolution of 640 × 512. The prototype is shown in Fig. 20(b). Figure 21(a) shows the hazy image provided by the polarimeter, and the dehazed image is shown in Fig. 21(b); the image quality is clearly improved. However, the prototype is controlled by a laptop computer using LabVIEW, and MATLAB is used to process and display the polarimetric dehazing images, so it is not real-time dehazing in the true sense.

Fig. 20

(a) The optical construction and (b) the prototype of the division of amplitude near-infrared polarimeter.73


Fig. 21

(a) The hazy image provided by the polarimeter; (b) the dehazed image produced automatically by the polarimeter.49


Zhang et al. proposed a visible-band polarimeter for true real-time polarimetric dehazing.60 The polarimeter is based on a division-of-aperture structure with four optical channels, which measure the full-Stokes parameters and are focused onto one sensor with a resolution of 2048 × 2048. Figure 22(a) shows the polarization-state distribution on the sensor, and a photo of the prototype is shown in Fig. 22(b). Theoretically, the four subimages are uniform, each with a resolution of 1024 × 1024. However, because of the misregistration among the four optical channels, shown in Fig. 23, the polarimeter must be precisely calibrated to mitigate the mismatch of the polarizer angles and the intensity mismatch among the three linear polarization images. Figure 24 shows the final dehazed image provided by the polarimeter; the image quality is significantly improved, and the color information is recovered as well. The dehazing algorithm is loaded into the FPGA modules assembled in the polarimeter, which processes the images automatically at a rate of 25 fps. The system only needs a power supply to operate and a monitor to display the dehazed image.
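For a division-of-aperture polarimeter, the raw frame must first be split into the four channel sub-images and registered before the Stokes parameters are formed. The following sketch shows only this bookkeeping step; the quadrant layout, the integer-shift registration, and the offsets argument are simplified assumptions and do not reproduce the calibration of the prototype in Ref. 60.

```python
import numpy as np

def split_aperture_frame(raw, offsets=None):
    """Split one 2048 x 2048 division-of-aperture frame into four 1024 x 1024
    sub-images, one per optical channel.

    offsets : optional list of four (dy, dx) integer shifts from a registration
              calibration; placeholder values, since the real angle and intensity
              calibration are specific to the prototype hardware.
    """
    h, w = raw.shape
    hh, hw = h // 2, w // 2
    subs = [raw[:hh, :hw], raw[:hh, hw:], raw[hh:, :hw], raw[hh:, hw:]]
    if offsets is not None:
        subs = [np.roll(s, shift, axis=(0, 1)) for s, shift in zip(subs, offsets)]
    return subs
```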

Fig. 22

(a) The distribution of the polarization-state on the sensor. (b) The photo of the full Stokes polarimetric camera.60


Fig. 23

The original image obtained directly on the single sensor without any additional processing. It contains four polarized images: three linearly polarized and one circularly polarized.60


Fig. 24

The final dehazed image produced by the polarimeter.60


4. Comparison Study for Various Dehazing Methods

4.1. Experimental Results

In this section, we perform experiments to compare the dehazing capacity of the polarimetric dehazing methods with that of classic single image dehazing methods. The single image dehazing methods include He's dark channel prior (DCP),23 Meng's method (Meng),26 Rahman's multiscale retinex with color restoration (MSRCR),8 Tarel's method (Tarel),21 and Cai's deep learning dehazing method (DehazeNet).33 The polarimetric dehazing methods include the method in RGB color space (RGB PDM)59 and the method in HSI color space (HSI PDM).89 Other reviews of image dehazing methods compare only the performance of classic single image dehazing methods,34,105 because it is difficult for them to obtain the polarized images and the results of polarimetric dehazing methods. In our experiments, we captured the original hazy images and produced the dehazed results ourselves for all methods. The image dehazing results are shown in Figs. 25–27 for scenes 1 to 3, respectively.

Fig. 25

Comparisons of some classic dehazing methods with scene 1. (a) Hazy image; (b) DCP; (c) Meng; (d) MSRCR; (e) Tarel; (f) DehazeNet; (g) RGB PDM; and (h) HSI PDM.


Fig. 26

Comparisons of some classic dehazing methods with scene 2. (a) Hazy image, (b) DCP, (c) Meng, (d) MSRCR, (e) Tarel, (f) DehazeNet, (g) RGB PDM, and (h) HSI PDM.


Fig. 27

Comparisons of some classic dehazing methods with scene 3. (a) Hazy image, (b) DCP, (c) Meng, (d) MSRCR, (e) Tarel, (f) DehazeNet, (g) RGB PDM, and (h) HSI PDM.


4.2. Objective Evaluation

From subjective evaluation, it is obvious that all of the images are improved after dehazing. The visibility of the results of the polarimetric dehazing methods is better than that of the single image dehazing methods. Tarel's method gives rise to color distortion, and Meng's method may achieve the best visibility among the single image dehazing methods.

Objective evaluation should also be employed to assess the image quality after dehazing. The quality evaluation of dehazed images is another intense research field, especially when a ground-truth haze-free image is not available, i.e., no-reference image quality assessment (NR IQA). Several NR IQA indexes have been proposed for objective evaluation, such as the first two indicators (e, r̄) of the blind assessment,106 image visibility measurement (IVM),107 image contrast,108 visual contrast measure (VCM),109 natural image quality evaluator (NIQE),110 structural similarity (SSIM), and the universal quality index (UQI).111 However, few assessments can evaluate the overall dehazing quality. These eight objective quality evaluation indexes were used to compare the above dehazing methods, and the comparison results are shown in Tables 2–4, corresponding to Figs. 25–27. The comparison shows that the polarimetric dehazing methods outperform the single image dehazing methods in most of the evaluation indexes, even though these indexes assess the image through different characteristics. This may be an intrinsic advantage, because multiple input images contain much more information about the scene.
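As an example of how such an index is computed, the universal quality index of Ref. 111 is sketched below in a simplified global form (the original definition uses a sliding window); calling it with the dehazed image and the hazy input gives a single scalar score.

```python
import numpy as np

def universal_quality_index(x, y):
    """Universal quality index (UQI) of Wang and Bovik (Ref. 111), computed
    globally here for brevity instead of over local sliding windows."""
    x = np.asarray(x, dtype=np.float64).ravel()
    y = np.asarray(y, dtype=np.float64).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return 4.0 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))

# Example usage (hypothetical arrays): score = universal_quality_index(dehazed, hazy)
```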

Table 2

The objective image quality comparison of dehazing results of Fig. 25 with different evaluation indexes.

Quality evaluationDCPMengMSRCRTarelDehazeNetRGB PDMHSI PDM
e15.184336.67532.40498.36915.993738.085132.1501
r¯1.10972.83162.61092.77211.21863.45953.142
IVM4.409710.20151.46752.83392.20719.69828.4591
Contrast gain0.08850.30240.04450.12160.05150.30730.2648
VCM39.376225.73128.070728.265139.571237.426932.3587
NIQEa8.04896.88635.73736.45827.27324.96344.0948
SSIMa0.83260.77020.7760.82120.96540.71770.716
UQIa0.71330.7480.71150.94340.93510.80810.6907

aHigher value represents a lower quality.

Table 3

The objective image quality comparison of dehazing results of Fig. 26 with different evaluation indexes.

Quality evaluationDCPMengMSRCRTarelDehazeNetRGB PDMHSI PDM
e10.861526.08310.24660.80989.39127.147326.1381
r¯0.84592.33142.21045.80161.23422.4922.4632
IVM4.4388.99441.93636.14084.071910.755510.6554
Contrast gain0.08920.2370.02710.28540.15930.31660.3096
VCM34.615423.886653.23893049.595137.044533.1984
NIQEa7.34875.67635.43895.08397.04174.79364.1832
SSIMa0.76840.78760.81210.91120.72870.7181
UQIa0.71190.74950.73470.83320.73020.7092

aHigher value represents a lower quality.

Table 4

The objective image quality comparison of dehazing results of Fig. 27 with different evaluation indexes.

Quality evaluationDCPMengMSRCRTarelDehazeNetRGB PDMHSI PDM
e12.420738.35889.596935.06447.821239.04139.3792
r¯1.34192.71611.91692.911.34822.88122.9927
IVM3.417911.07022.9098.24512.36484.67484.7851
Contrast gain0.09870.18780.13190.19030.06110.18610.1951
VCM26.95128034.634136.097625.365934.756150.122
NIQEa6.91465.65185.38355.37696.13113.23952.7853
SSIMa0.87930.77880.88440.79350.95040.72480.6989
UQIa0.83660.72050.94060.91590.94330.73430.7166

aHigher value represents a lower quality.

5. Conclusions

In this work, we present the image degradation model caused by haze, which is widely used in computer vision and dehazing applications. The polarimetric dehazing methods are based on this degradation model; they aim to restore the object light and enhance the image visibility by restoring the lost information. The basic principle of the polarimetric dehazing method is to estimate the airlight radiance from multiple polarization images, exploiting the fact that the airlight is partially linearly polarized, as determined by Mie scattering theory. We present a complete review of the techniques, the advancements, the implementation methods, and the algorithms of the polarimetric dehazing methods, and experimental results further verify their dehazing capacity. We believe that this review can significantly assist the overall understanding of the polarimetric dehazing methods.

Acknowledgments

The authors thank Shandong Provincial Natural Science Foundation, China (ZR2020QA066, ZR2020MA087) and the National Natural Science Foundation of China (NSFC) (11904213, 11704226).

References

1. 

R. C. Henry et al., “Colour perception through atmospheric haze,” J. Opt. Soc. Am. A, 17 (5), 831 –835 (2000). https://doi.org/10.1364/JOSAA.17.000831 JOAOD6 0740-3232 Google Scholar

2. 

S. G. Narasimhan and S. K. Nayar, “Vision and the atmosphere,” Int. J. Comput. Vision, 48 (3), 233 –254 (2002). https://doi.org/10.1023/A:1016328200723 IJCVEQ 0920-5691 Google Scholar

3. 

T. K. Kim, J. K. Paik and B. S. Kang, “Contrast enhancement system using spatially adaptive histogram equalization with temporal filtering,” IEEE Trans. Consum. Electron., 44 (1), 82 –87 (1998). https://doi.org/10.1109/30.663733 ITCEDA 0098-3063 Google Scholar

4. 

J. Y. Kim, L. S. Kim and S. H. Hwang, “An advanced contrast enhancement using partially overlapped sub-block histogram equalization,” IEEE Trans. Circuits Syst. Video Technol., 11 (4), 475 –484 (2001). https://doi.org/10.1109/76.915354 Google Scholar

5. 

M. Kim and M. Chung, “Recursively separated and weighted histogram equalization for brightness preservation and contrast enhancement,” IEEE Trans. Consum. Electron., 54 (3), 1389 –1397 (2008). https://doi.org/10.1109/TCE.2008.4637632 ITCEDA 0098-3063 Google Scholar

6. 

L. J. Wang and R. Zhu, “Image defogging algorithm of single color image based on wavelet transform and histogram equalization,” Appl. Math. Sci., 7 (79), 3913 –3921 (2013). https://doi.org/10.12988/ams.2013.34206 Google Scholar

7. 

D. J. Jobson, Z. Rahman and G. A. Woodell, “A multiscale retinex for bridging the gap between color images and the human observation of scenes,” IEEE Trans. Image Process., 6 (7), 965 –976 (1996). https://doi.org/10.1109/83.597272 IIPRE4 1057-7149 Google Scholar

8. 

D. J. Jobson, Z. Rahman and G. A. Woodell, “Properties and performance of a center/surround retinex,” IEEE Trans. Image Process., 6 (3), 451 –462 (1997). https://doi.org/10.1109/83.557356 IIPRE4 1057-7149 Google Scholar

9. 

Z. Rahman, D. J. Jobson and G. A. Woodell, “Retinex processing for automatic image enhancement,” Proc. SPIE, 4662 100 –110 (2004). https://doi.org/10.1117/12.469537 JEIME5 1017-9909 Google Scholar

10. 

C. Busch and E. Debes, “Wavelet transform for analyzing fog visibility,” IEEE Intell. Syst. Appl., 13 (6), 66 –71 (1998). https://doi.org/10.1109/5254.736004 Google Scholar

11. 

S. Dippel et al., “Multiscale contrast enhancement for radiographies: Laplacian pyramid versus fast wavelet transform,” IEEE Trans. Med. Imaging, 21 (4), 343 –353 (2002). https://doi.org/10.1109/TMI.2002.1000258 ITMID4 0278-0062 Google Scholar

12. 

R. Zhu and L. J. Wang, “Improved wavelet transform algorithm for single image dehazing,” Optik, 125 (13), 3064 –3066 (2014). https://doi.org/10.1016/j.ijleo.2013.12.077 OTIKAJ 0030-4026 Google Scholar

13. 

M. J. Seow and V. K. Asari, “Ratio rule and homomorphic filter for enhancement of digital colour image,” Neurocomputing, 69 (7/9), 954 –958 (2006). https://doi.org/10.1016/j.neucom.2005.07.003 NRCGEO 0925-2312 Google Scholar

14. 

P. Carr and R. Hartley, “Improved single image dehazing using geometry,” in Proc. IEEE Conf. Digital Image Comput.: Tech. and Appl., 103 –110 (2009). https://doi.org/10.1109/DICTA.2009.25 Google Scholar

15. 

C. O. Ancuti, C. Ancuti and P. Bekaert, “Effective single image dehazing by fusion,” in Proc. IEEE Conf. Image Process., 3541 –3544 (2010). https://doi.org/10.1109/ICIP.2010.5651263 Google Scholar

16. 

P. U. Naik and S. Borkar, “Image dehazing using PCA fusion technique for enhanced road visibility,” Int. J. Comput. Appl., 180 46 (2018). https://doi.org/10.5120/ijca2018917204 Google Scholar

17. 

Y. Wang et al., “Haze removal algorithm based on single-images with chromatic properties,” Signal Process. Image Commun., 72 80 –91 (2019). https://doi.org/10.1016/j.image.2018.12.010 SPICEF 0923-5965 Google Scholar

18. 

R. Luzon-Gonzalez, J. L. Nieves and J. Romero, “Recovering of weather degraded images based on RGB response ratio constancy,” Appl. Opt., 54 (4), B222 –B231 (2015). https://doi.org/10.1364/AO.54.00B222 APOPAI 0003-6935 Google Scholar

19. 

S. Salazar-Colores et al., “A fast image dehazing algorithm using morphological reconstruction,” IEEE Trans. Image Process., 28 (5), 2357 –2366 (2019). https://doi.org/10.1109/TIP.2018.2885490 IIPRE4 1057-7149 Google Scholar

20. 

R. Fattal, “Single image dehazing,” ACM Trans. Graphics, 27 (3), 1 –9 (2008). https://doi.org/10.1145/1360612.1360671 ATGRDF 0730-0301 Google Scholar

21. 

J. Tarel and N. Hautiere, “Fast visibility restoration from a single colour or gray level image,” in Proc. IEEE Conf. Comput. Vision, 2201 –2208 (2009). https://doi.org/10.1109/ICCV.2009.5459251 Google Scholar

22. 

R. T. Tan, “Visibility in bad weather from a single image,” in Proc. Conf. Comput. Vision and Pattern Recognit., 1 –7 (2008). https://doi.org/10.1109/CVPR.2008.4587643 Google Scholar

23. 

K. M. He, J. Sun and X. O. Tang, “Single image haze removal using dark channel prior,” IEEE Trans. Pattern Anal. Mach. Intell., 33 (12), 2341 –2353 (2011). https://doi.org/10.1109/TPAMI.2010.168 ITPIDJ 0162-8828 Google Scholar

24. 

K. M. He, J. Sun and X. O. Tang, “Guided image filtering,” IEEE Trans. Pattern Anal. Mach. Intell., 35 (6), 1397 –1409 (2013). https://doi.org/10.1109/TPAMI.2012.213 ITPIDJ 0162-8828 Google Scholar

25. 

W. Huang, Y. Y. Wang and R. Wang, “A high fidelity haze removal algorithm for optical satellite images using progressive transmission estimation based on the dark channel prior,” Int. J. Remote Sens., 40 3486 –3503 (2018). https://doi.org/10.1080/01431161.2018.1547451 IJSEDK 0143-1161 Google Scholar

26. 

G. Meng et al., “Efficient image dehazing with boundary constraint and contextual regularization,” in Proc. IEEE Conf. Comput. Vision, 617 –624 (2013). https://doi.org/10.1109/ICCV.2013.82 Google Scholar

27. 

L. Kratz and K. Nishino, “Factorizing scene Albedo and depth from a single foggy image,” in Proc. IEEE 12th Conf. Comput. Vision, 1701 –1708 (2013). https://doi.org/10.1109/ICCV.2009.5459382 Google Scholar

28. 

K. Nishino, L. Kratz and S. Lombardi, “Bayesian defogging,” Int. J. Comput. Vision, 98 263 –278 (2012). https://doi.org/10.1007/s11263-011-0508-1 IJCVEQ 0920-5691 Google Scholar

29. 

H. M. Lu et al., “Single image dehazing through improved atmospheric light estimation,” Multimedia Tools Appl., 75 17081 –17096 (2016). https://doi.org/10.1007/s11042-015-2977-7 Google Scholar

30. 

C. H. Yeh et al., “Haze effect removal from image via haze density estimation in optical model,” Opt. Express, 21 (22), 27127 –27141 (2013). https://doi.org/10.1364/OE.21.027127 OPEXFF 1094-4087 Google Scholar

31. 

W. Sun and L. Han, “A new fast single image defog algorithm,” in Proc. IEEE Conf. Intellect Syst. Des. and Eng. Appl., 116 –119 (2012). https://doi.org/10.1109/ISDEA.2012.35 Google Scholar

32. 

B. Li et al., “AOD-net: all-in-one dehazing network,” in IEEE Int. Conf. Comput. Vision, 4780 –4788 (2017). https://doi.org/10.1109/ICCV.2017.511 Google Scholar

33. 

B. Cai et al., “DehazeNet: an end-to-end system for single image haze removal,” IEEE Trans. Image Process., 25 (11), 5187 –5198 (2016). https://doi.org/10.1109/TIP.2016.2598681 IIPRE4 1057-7149 Google Scholar

34. 

Y. Xu et al., “Review of video and image defogging algorithms and related studies on image restoration and enhancement,” IEEE Access, 4 165 –188 (2016). https://doi.org/10.1109/ACCESS.2015.2511558 Google Scholar

35. 

S. G. Narasimhan and S. K. Nayar, “Removing weather effects from monochrome images,” in Proc. IEEE Comput. Soc. Conf. Comput. Vision and Pattern Recognit., II-186 –II-193 (2001). https://doi.org/10.1109/CVPR.2001.990956 Google Scholar

36. 

S. G. Narasimhan and S. K. Nayar, “Chromatic framework for vision in bad weather,” in Proc. IEEE Conf. Comput. Vision and Pattern Recognit., 598 –605 (2000). https://doi.org/10.1109/CVPR.2000.855874 Google Scholar

37. 

S. G. Narasimhan and S. K. Nayar, “Contrast restoration of weather degraded images,” IEEE Trans. Pattern Anal. Mach. Learn., 25 (6), 713 –724 (2003). https://doi.org/10.1109/TPAMI.2003.1201821 Google Scholar

38. 

S. G. Narasimhan and S. K. Nayar, “Interactive (de)weathering of an image using physical models,” in Proc. IEEE Workshop Color Photometric Methods Comput. Vision, 1 –8 (2003). Google Scholar

39. 

L. Schaul, C. Fredembach and S. Susstrunk, “Color image dehazing using the near-infrared,” in Proc. IEEE Conf. Image Process., 1 –4 (2009). https://doi.org/10.1109/ICIP.2009.5413700 Google Scholar

40. 

C. Feng et al., “Near-infrared guided color image dehazing,” in Proc. IEEE Conf. Image Process., 2363 –2367 (2013). https://doi.org/10.1109/ICIP.2013.6738487 Google Scholar

41. 

D. B. Chenault and J. L. Pezzaniti, “Polarization imaging through scattering media,” Proc. SPIE, 4133 124 –133 (2000). https://doi.org/10.1117/12.406619 PSISDG 0277-786X Google Scholar

42. 

Y. Y. Schechner, S. G. Narasimhan and S. K. Nayar, “Instant dehazing of images using polarization,” in Proc. IEEE Conf. Comput. Vision and Pattern Recognit., 325 –332 (2001). https://doi.org/10.1109/CVPR.2001.990493 Google Scholar

43. 

Y. Y. Schechner, S. G. Narasimhan and S. K. Nayar, “Polarization-based vision through haze,” Appl. Opt., 42 (3), 511 –525 (2003). https://doi.org/10.1364/AO.42.000511 APOPAI 0003-6935 Google Scholar

44. 

Y. Y. Schechner and N. Karpel, “Recovering scenes by polarization analysis,” in IEEE Techno-Ocean 04CH37600, (2004). https://doi.org/10.1109/OCEANS.2004.1405759 Google Scholar

45. 

T. Treibitz and Y. Y. Schechner, “Polarization: beneficial for visibility enhancement?,” in Proc. IEEE Conf. Comput. Vision and Pattern Recognit., 525 –532 (2009). https://doi.org/10.1109/CVPR.2009.5206551 Google Scholar

46. 

E. Namer and Y. Y. Schechner, “Advanced visibility improvement based on polarization filtered images,” Proc. SPIE, 5888 588805 (2005). https://doi.org/10.1117/12.617464 PSISDG 0277-786X Google Scholar

47. 

E. Namer, S. Shwartz and Y. Y. Schechner, “Skyless polarimetric calibration and visibility enhancement,” Opt. Express, 17 (2), 472 –493 (2009). https://doi.org/10.1364/OE.17.000472 OPEXFF 1094-4087 Google Scholar

48. 

J. Fade et al., “Long-range polarimetric imaging through fog,” Appl. Opt., 53 (18), 3854 –3865 (2014). https://doi.org/10.1364/AO.53.003854 APOPAI 0003-6935 Google Scholar

49. 

J. Mudge and M. Virgen, “Real time polarimetric dehazing,” Appl. Opt., 52 (9), 1932 –1938 (2013). https://doi.org/10.1364/AO.52.001932 APOPAI 0003-6935 Google Scholar

50. 

J. Liang et al., “Method for enhancing visibility of hazy images based on polarimetric imaging,” Photonics Res., 2 (1), 38 –44 (2014). https://doi.org/10.1364/PRJ.2.000038 Google Scholar

51. 

J. Liang et al., “Visibility enhancement of hazy images based on a universal polarimetric imaging method,” J. Appl. Phys., 116 (17), 173107 (2014). https://doi.org/10.1063/1.4901244 JAPIAU 0021-8979 Google Scholar

52. 

X. Li et al., “Imaging through haze utilizing a multi-aperture coaxial polarization imager,” in Front. Opt., (2018). Google Scholar

53. 

M. E. Ketara and S. Breugnot, “Imaging through haze using multispectral polarization imaging method,” Proc. SPIE, 10655 106550N (2018). https://doi.org/10.1117/12.2305541 PSISDG 0277-786X Google Scholar

54. 

M. J. Raković et al., “Light backscattering polarization patterns from turbid media: theory and experiment,” Appl. Opt., 38 (15), 3399 –3408 (1999). https://doi.org/10.1364/AO.38.003399 APOPAI 0003-6935 Google Scholar

55. 

Y. Y. Schechner and N. Karpel, “Recovery of underwater visibility and structure by polarization analysis,” IEEE J. Oceanic Eng., 30 (3), 570 –587 (2005). https://doi.org/10.1109/JOE.2005.850871 IJOEDY 0364-9059 Google Scholar

56. 

J. Shen et al., “Polarization calculation and underwater target detection inspired by biological visual imaging,” Sens. Transd., 169 (4), 33 –41 (2014). Google Scholar

57. 

S. Fang et al., “Image dehazing using polarization effects of objects and airlight,” Opt. Express, 22 (16), 19523 –19538 (2014). https://doi.org/10.1364/OE.22.019523 OPEXFF 1094-4087 Google Scholar

58. 

W. Zhang et al., “A robust haze-removal scheme in polarimetric dehazing imaging based on automatic identification of sky region,” Opt. Laser Technol., 86 145 –151 (2016). https://doi.org/10.1016/j.optlastec.2016.07.015 OLTCAS 0030-3992 Google Scholar

59. 

W. F. Zhang et al., “Study of visibility enhancement of hazy images based on dark channel prior in polarimetric imaging,” Optik, 130 123 –130 (2017). https://doi.org/10.1016/j.ijleo.2016.11.047 OTIKAJ 0030-4026 Google Scholar

60. 

W. F. Zhang et al., “Real-time image haze removal using an aperture-division polarimetric camera,” Appl. Opt., 56 (4), 942 –947 (2017). https://doi.org/10.1364/AO.56.000942 APOPAI 0003-6935 Google Scholar

61. 

Y. Tian et al., “Underwater imaging based on LF and polarization,” IEEE Photonics J., 11 6500309 (2019). https://doi.org/10.1109/JPHOT.2018.2890286 Google Scholar

62. 

W. F. Zhang, J. Liang and L. Y. Ren, “Haze-removal polarimetric imaging schemes with the consideration of airlight’s circular polarization effect,” Optik, 182 1099 –1105 (2019). https://doi.org/10.1016/j.ijleo.2019.01.048 OTIKAJ 0030-4026 Google Scholar

63. 

L. Zhang et al., “Lane detection in dense fog using a polarimetric dehazing method,” Appl. Opt., 59 (19), 5702 –5707 (2020). https://doi.org/10.1364/AO.391840 APOPAI 0003-6935 Google Scholar

64. 

J. Liang et al., “Generalized polarimetric dehazing method based on low-pass filtering in frequency domain,” Sensors, 20 (6), 1729 (2020). https://doi.org/10.3390/s20061729 SNSRES 0746-9462 Google Scholar

65. 

Z. Liang et al., “Effective polarization-based image dehazing with regularization constraint,” IEEE Geosci. Remote Sens. Lett., 1 –5 (2020). https://doi.org/10.1109/LGRS.2020.3023805 Google Scholar

66. 

X. P. Fu et al., “Image descattering and absorption compsnsation in underwater polarimetric imaging,” Opt. Lasers Eng., 132 106115 (2020). https://doi.org/10.1016/j.optlaseng.2020.106115 Google Scholar

67. 

H. F. Hu et al., “Polarimetric underwater image recovery via deep learning,” Opt. Lasers Eng., 133 106152 (2020). https://doi.org/10.1016/j.optlaseng.2020.106152 Google Scholar

68. 

F. Liu et al., “Polarimetric dehazing utilizing spatial frequency segregation of images,” Appl. Opt., 54 (27), 8116 –8122 (2015). https://doi.org/10.1364/AO.54.008116 APOPAI 0003-6935 Google Scholar

69. 

S. Shwartz, E. Namer and Y. Y. Schechner, “Blind haze separation,” in Proc. Conf. Comput. Vision and Pattern Recognit., 1984 –1991 (2006). https://doi.org/10.1109/CVPR.2006.71 Google Scholar

70. 

R. Kaftory, Y. Y. Schechner and Y. Y. Zeevi, “Variational distance-dependent image restoration,” in Proc. IEEE Conf. Comput. Vision and Pattern Recognit., 1 –8 (2007). https://doi.org/10.1109/CVPR.2007.383262 Google Scholar

71. 

B. J. Huang et al., “Underwater image recovery considering polarization effects of objects,” Opt. Express, 24 (9), 9826 –9838 (2016). https://doi.org/10.1364/OE.24.009826 OPEXFF 1094-4087 Google Scholar

72. 

J. G. Guan et al., “Real-time polarization difference underwater imaging based on Stokes vector,” Acta Phys. Sin., 64 224203 (2015). https://doi.org/10.7498/aps.64.224203 WLHPAR 1000-3290 Google Scholar

73. 

P. C. Y Chang et al., “Improving visibility depth in passive underwater imaging by use of polarization,” Appl. Opt., 42 (15), 2794 –2803 (2003). https://doi.org/10.1364/AO.42.002794 APOPAI 0003-6935 Google Scholar

74. 

F. Wang, C. H. Yin and Y. Wang, “Research of polarization imaging detection method for water surface target in foggy weather,” Proc. SPIE, 8907 89074C (2013). https://doi.org/10.1117/12.2034719 PSISDG 0277-786X Google Scholar

75. 

J. G. Guan and J. P. Zhu, “Target detection in turbid medium using polarization-based range-gated technology,” Opt. Express, 21 (12), 14152 –14158 (2013). https://doi.org/10.1364/OE.21.014152 OPEXFF 1094-4087 Google Scholar

76. 

Y. Y. Schechner and N. Karpel, “Clear underwater vision,” in Proc. IEEE Conf. Comput. Vision and Pattern Recognit., 536 –543 (2004). https://doi.org/10.1109/CVPR.2004.1315078 Google Scholar

77. 

A. Sarafraz, S. Negahdaripour and Y. Y. Schechner, “Enhancing images in scattering media utilizing stereovision and polarization,” in IEEE Workshop Appl. Comput. Vision, 1 –8 (2009). https://doi.org/10.1109/WACV.2009.5403034 Google Scholar

78. 

D. Brousseau, J. Plant and S. Thibault, “Real-time polarization difference imaging (PDI) reveals surface details and textures in harsh environments,” Proc. SPIE, 8720 87200E (2013). https://doi.org/10.1117/12.2018226 PSISDG 0277-786X Google Scholar

79. 

W. J. Zhang et al., “Sky light polarization detection with linear polarizer triplet in light field camera inspired by insect vision,” Appl. Opt., 54 (30), 8962 –8970 (2015). https://doi.org/10.1364/AO.54.008962 APOPAI 0003-6935 Google Scholar

80. 

D. H. Goldstein, Polarized Light, 3rd edTaylor and Francis Group, New York (2011). Google Scholar

81. 

J. Liang et al., “Polarimetric dehazing method for dense haze removal based on distribution analysis of angle of polarization,” Opt. Express, 23 (20), 26146 –26157 (2015). https://doi.org/10.1364/OE.23.026146 OPEXFF 1094-4087 Google Scholar

82. 

C. X. Zhao et al., “Experimental comparison of polarization image restoration of three random angles and two orthogonal angles,” Laser Optoelectron. Prog., 52 101005 (2015). https://doi.org/10.3788/LOP52.101005 Google Scholar

83. 

L. Cao et al., “Dehazing method through polarimetric imaging and multi-scale analysis,” Proc. SPIE, 9501 950111 (2015). https://doi.org/10.1117/12.2176933 PSISDG 0277-786X Google Scholar

84. 

Y. Q. Zhao, Q. Pan and H. C. Zhang, “New polarization imaging method based on spatially adaptive wavelet image fusion,” Opt. Eng., 45 (12), 123202 (2006). https://doi.org/10.1117/1.2401625 Google Scholar

85. 

Y. Wang, M. G. Xue and Q. C. Huang, “Polarization dehazing algorithm based on atmosphere background suppression,” Comput. Eng., 35 (4), 271 –275 (2009). https://doi.org/10.1115/MNHMT2009-18287 Google Scholar

86. 

W. Z. Peng, “Polarization dehazing algorithm based on atmosphere scattering model,” Electron. Meas. Technol., 34 (7), 43 –45 (2011). Google Scholar

87. 

X. L. Zhang, Y. Xu and X. Z. Wang, “Research on image fusion based on polarization of haze,” J. Xiamen Univ., 50 (3), 520 –524 (2011). Google Scholar

88. 

J. Liang et al., “Polarimetric dehazing method for visibility improvement based on visible and infrared image fusion,” Appl. Opt., 55 (29), 8221 –8226 (2016). https://doi.org/10.1364/AO.55.008221 APOPAI 0003-6935 Google Scholar

89. 

W. F. Zhang et al., “Fast polarimetric dehazing method for visibility enhancement in HSI colour space,” J. Opt., 19 095606 (2017). https://doi.org/10.1088/2040-8986/aa7f39 Google Scholar

90. 

M. Ebner, Colour Constancy, John Wiley & Sons Ltd., Hoboken (2007). Google Scholar

91. 

J. S. Tyo et al., “Review of passive imaging polarimetry for remote sensing applications,” Appl. Opt., 45 (22), 5453 –5469 (2006). https://doi.org/10.1364/AO.45.005453 APOPAI 0003-6935 Google Scholar

92. 

N. Lefaudeux et al., “Compact and robust linear Stokes polarization camera,” Proc. SPIE, 6972 69720B (2008). https://doi.org/10.1117/12.781876 PSISDG 0277-786X Google Scholar

93. 

A. Jaulin and L. Bigue, “High speed partial Stokes imaging using a ferroelectric liquid crystal modulator,” J. Eur. Opt. Soc.-Rapid Publ., 3 08019 (2008). https://doi.org/10.2971/jeos.2008.08019] Google Scholar

94. 

P. Mukherjee et al., “Implementation of a complete Muller matrix polarimeter using dual photoelastic modulators and rotating wave plates,” Opt. Rev., 26 23 –32 (2019). https://doi.org/10.1007/s10043-018-0475-7 1340-6000 Google Scholar

95. 

M. Vedel, S. Breugnot and N. Lechocinski, “Full Stokes polarization imaging camera,” Proc. SPIE, 8160 81600X (2011). https://doi.org/10.1117/12.892491 PSISDG 0277-786X Google Scholar

96. 

J. Mudge, M. Virgen and P. Dean, “Near-infrared simultaneous Stokes imaging polarimeter,” Proc. SPIE, 7461 74610L (2009). https://doi.org/10.1117/12.828437 PSISDG 0277-786X Google Scholar

97. 

X. Li et al., “Research on polarization dehazing through the coaxial and multi-aperture polarimetric camera,” OSA Continuum, 2 (8), 2369 –2380 (2019). https://doi.org/10.1364/OSAC.2.002369 Google Scholar

98. 

R. Perkins and V. Gruev, “Signal-to-noise analysis of Stokes parameters in division of focal plane polarimeters,” Opt. Express, 18 (25), 25815 –25824 (2010). https://doi.org/10.1364/OE.18.025815 OPEXFF 1094-4087 Google Scholar

99. 

G. Myhre et al., “Liquid crystal polymer full-Stokes division of focal plane polarimeter,” Opt. Express, 20 (25), 27393 –27409 (2012). https://doi.org/10.1364/OE.20.027393 OPEXFF 1094-4087 Google Scholar

100. 

C. H. Xu et al., “Numerical study of a DoFP polarimeter based on the self-organized nanograting array,” Opt. Express, 26 (3), 2517 –2527 (2018). https://doi.org/10.1364/OE.26.002517 OPEXFF 1094-4087 Google Scholar

101. 

K. Oka and N. Saito, “Snapshot complete imaging polarimeter using Savart plates,” Proc. SPIE, 6295 629508 (2006). https://doi.org/10.1117/12.683284 PSISDG 0277-786X Google Scholar

102. 

H. T. Luo et al., “Compact and miniature snapshot imaging polarimeter,” Appl. Opt., 47 (24), 4413 –4417 (2008). https://doi.org/10.1364/AO.47.004413 APOPAI 0003-6935 Google Scholar

103. 

M. Honma, N. Takahashi and T. Nose, “Simple Stokes polarimeter using liquid crystal grating with ternary orientation domains,” Appl. Opt., 57 (35), 10183 –10190 (2018). https://doi.org/10.1364/AO.57.010183 APOPAI 0003-6935 Google Scholar

104. 

J. D. Perreault, “Triple Wollaston-prism complete-Stokes imaging polarimeter,” Opt. Lett., 38 (19), 3874 –3877 (2013). https://doi.org/10.1364/OL.38.003874 OPLEDP 0146-9592 Google Scholar

105. 

M. M. Ali, M. A. U. Rahman and S. Hajera, “A comparative study of various image dehazing techniques,” in Int. Conf. Energy, Communication, Data Analytics and Soft Comput., 3622 –3628 (2017). https://doi.org/10.1109/ICECDS.2017.8390138 Google Scholar

106. 

N. Hautière et al., “Blind contrast enhancement assessment by gradient ratioing at visible edges,” Image Anal. Stereol. J., 27 (2), 87 –95 (2008). https://doi.org/10.5566/ias.v27.p87-95 Google Scholar

107. 

X. Yu et al., “A classification algorithm to distinguish image as haze or non-haze,” in Proc. IEEE Int. Conf. Image Graphics, 286 –289 (2011). https://doi.org/10.1109/ICIG.2011.22 Google Scholar

108. 

T. L. Economopoulos, P. A. Asvestas and G. K. Matsopoulos, “Contrast enhancement of images using partitioned iterated function systems,” Image Vision Comput., 28 (1), 45 –54 (2010). https://doi.org/10.1016/j.imavis.2009.04.011 Google Scholar

109. 

D. J. Jobson et al., “A comparison of visual statistics for the image enhancement of FORESITE aerial images with those of major image classes,” Proc. SPIE, 6246 624601 (2006). https://doi.org/10.1117/12.664591 PSISDG 0277-786X Google Scholar

110. 

A. Mittal, R. Soundararajan and A. C. Bovik, “Making a “completely blind” image quality analyzer,” IEEE Signal Process. Lett., 20 (3), 209 –212 (2013). https://doi.org/10.1109/LSP.2012.2227726 Google Scholar

111. 

Z. Wang and A. C. Bovik, “A universal image quality index,” IEEE Signal Process. Lett., 9 (3), 81 –84 (2002). https://doi.org/10.1109/97.995823 IESPEJ 1070-9908 Google Scholar

Biography

Wenfei Zhang received his BS degree in applied physics in 2009 and his MS degree in condensed matter physics in 2012 from Qingdao University, and his PhD in electronic science and technology from Xi'an Jiaotong University, China. His current work involves polarimetric imaging and polarimetric dehazing methods in turbid media.

Jian Liang received his PhD in optics from Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, China. His current work involves polarimetric imaging, polarimetric dehazing methods in turbid media, and polarimetric imaging systems.

Guomei Wang received her PhD in optics from Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, China. Her current work focuses on fiber laser techniques.

Huanian Zhang received his PhD from Shandong University, China. His current research interest is ultrafast fiber lasers and their applications.

Shenggui Fu received his MS and PhD degrees from Nankai University, China. He is a postdoctoral researcher at Nankai University. His current research interests are laser technology and optical fiber photonics.

© 2021 Society of Photo-Optical Instrumentation Engineers (SPIE)
Wenfei Zhang, Jian Liang, Guomei Wang, Huanian Zhang, and Shenggui Fu "Review of passive polarimetric dehazing methods," Optical Engineering 60(3), 030901 (13 March 2021). https://doi.org/10.1117/1.OE.60.3.030901
Received: 14 December 2020; Accepted: 26 February 2021; Published: 13 March 2021
KEYWORDS: Polarimetry, Image fusion, Image quality, Air contamination, Polarization, Optical engineering, Image processing
