Long-wave infrared polarization feature extraction and image fusion based on the orthogonality difference method

10 April 2018
Infrared polarization results from infrared-emitted radiation and reflected radiation effects. Polarization generated by infrared reflection is perpendicularly polarized, whereas polarization generated by infrared emission is parallelly polarized. Using the polarization feature in different directions can enhance the detection and discrimination of the target. Based on the Stokes vector, the polarization degree and angle are obtained. Then, according to the analysis of the polarization states, an orthogonality difference method of extracting polarization features is proposed. The infrared intensity and polarization feature images are fused using an algorithm of nonsubsampled shearlets transformation. Image evaluation indices of the target contrast to background (C), average gradient (AG), and image entropy (E) are employed to evaluate the fused image and original intensity image. Results demonstrate that every index of the fused image with the polarization feature is significantly improved, thereby validating the effectiveness of the proposed target-enhancement approach using polarization features extracted by the orthogonality difference method.



Infrared imaging technology is widely applied in military and civilian fields, such as remote sensing, military surveillance, and military guidance, on account of its advantages of passive detection, strong antidisturbance ability, all-weather and full-time operation, and significant robustness.1 Traditional infrared intensity imaging is performed by employing the temperature difference between the target and the background.1,2 However, if the background is complex or the temperature difference is small, traditional infrared intensity-based imaging is limited. Infrared polarization imaging technology has been rapidly developed in recent years.1 Compared with infrared intensity-based imaging technology, infrared polarization-based technology can provide more valuable information for detection and discrimination of targets, including types of material and surface roughness.3 Different objects have different polarization characteristics. Human-produced targets tend to have stronger polarization because their surfaces are smooth, whereas natural objects tend to be mostly unpolarized or have different polarization characteristics from human-produced ones.4,5,6 Therefore, infrared polarization technology has great potential in target detection applications.5

In recent years, optical processing of images has received considerable attention. Experimental breakthroughs have been achieved mainly in deep learning, three-dimensional holography, pattern recognition, image compression, image encryption, etc.7,8,9,10 As a new kind of image processing technology, polarization imaging can be used to enhance image quality.3 Polarization studies recently conducted on visible light and color images have achieved improved effectiveness. Dubreuil et al.11 and Leonard et al.12 conducted experiments on ocean optics and underwater targets based on polarization-aided image processing. Impressive results were achieved in detection and identification of sea mines. Schechner et al.13,14,15 investigated imaging in poor-visibility conditions and found that polarization imaging can be used to enhance visibility. Their related polarization experiments conducted under poor atmospheric and underwater conditions proved the effectiveness of infrared polarization imaging in visibility improvement. In addition to visible light polarization, infrared polarization has also been further developed for multiple applications, particularly in fusion technology of intensity imaging and polarization imaging. Lin et al.16 investigated the enhancement of an infrared image using polarization information. They proposed a fusion method of infrared polarization and intensity images using embedded multiscale transform. Experimental results showed that their method could add the polarization characteristics into the original intensity image, and increase the image contrast and clarity. Yue and Li17 presented a method based on oriented Laplacian pyramid image fusion between the infrared radiation intensity image and degree of polarization image, and the results showed that the method could increase the amount of information of the intensity image.
Despite the above advancements, these existing fusion methods directly employ the degree of polarization (DOLP) and angle of polarization (AOP) to enhance the target. The effect of enhancement is limited because the DOLP and AOP are sensitive to noise clutter and the detected angle.4,5 Furthermore, using only images of DOLP and AOP cannot provide comprehensive information of the given scenarios.

A method of orthogonality difference based on the decomposition of partially polarized infrared radiation is proposed to extract polarization features in different directions. The direction of polarization generated by infrared emitted radiation is predominantly parallel, whereas the direction of polarization generated by infrared reflected radiation is mainly perpendicular.18,19 The parallel and perpendicular polarization components express different characteristics of the scenario, so extracting the parallel and perpendicular polarization feature images yields valuable information about it.

Nonsubsampled shearlets transformation (NSST) is a highly effective multiscale decomposition method.20 It can effectively express image details, such as lines, edges, and contours.21 Infrared intensity imaging and polarization imaging provide different yet complementary information about objects. By employing NSST, the respective extracted polarization parallel and perpendicular components and the infrared intensity images are fused. The respective merits of infrared intensity and polarization images can thereby be effectively combined. Experimental results demonstrate that the target is significantly enhanced in the fused image, which shows stronger contrast, more details, and an overall better visual effect.


Polarization Theory and Stokes Vector

A light wave can be decomposed into two components orthogonal to the propagation direction. The difference between the two orthogonal components results in the polarization of light. The state of polarization can be described by the Stokes vector, which comprises four parameters I, Q, U, and V.22 Generally, most polarization phenomena in nature are linearly polarized, and the V component is so small that it is typically ignored. The first three Stokes parameters can be derived from measurements in the polarization directions 0 deg, 60 deg, and 120 deg, as shown in Eq. (1):23

I = (2/3)(I0 + I60 + I120)
Q = (2/3)(2I0 − I60 − I120)
U = (2/√3)(I60 − I120)    (1)

where I0,I60, and I120 are the measured infrared radiation intensities with the polarizer oriented at 0 deg, 60 deg, and 120 deg, respectively.

Based on the Stokes vector, the degree of polarization and the AOP are defined by4,24

P = √(Q² + U²)/I,  A = (1/2)arctan(U/Q)    (2)

where P is the degree of polarization and A is the AOP.
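As a concrete sketch of this reconstruction, the three-angle Stokes recovery and the resulting degree and angle of polarization can be written as follows (an illustrative NumPy sketch; the function names are ours, not from the paper):

```python
import numpy as np

def stokes_from_three_angles(i0, i60, i120):
    """Recover the linear Stokes parameters I, Q, U from intensities
    measured behind a polarizer at 0 deg, 60 deg, and 120 deg.

    Each measurement obeys I_theta = (I + Q cos 2theta + U sin 2theta) / 2,
    which inverts to the closed form below.
    """
    s_i = (2.0 / 3.0) * (i0 + i60 + i120)
    s_q = (2.0 / 3.0) * (2.0 * i0 - i60 - i120)
    s_u = (2.0 / np.sqrt(3.0)) * (i60 - i120)
    return s_i, s_q, s_u

def dolp_aop(s_i, s_q, s_u):
    """Degree of linear polarization P and angle of polarization A (radians).

    arctan2 is used instead of arctan(U/Q) to keep the correct quadrant.
    """
    p = np.sqrt(s_q ** 2 + s_u ** 2) / s_i
    a = 0.5 * np.arctan2(s_u, s_q)
    return p, a
```

Feeding synthetic measurements of partially polarized light through this pair of functions recovers exactly the P and A used to generate them, which is a convenient self-check for a real polarimeter pipeline.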


Extraction of Polarization Features Based on Orthogonal Difference


Algorithm of Orthogonal Difference

The infrared polarization generated by the infrared reflected effect and emitted effect is partially polarized. It can be decomposed into a natural light (unpolarized) component and a completely polarized component. Compared with the unpolarized component, the latter provides more valuable information for the detection and discrimination of the target:

Iθ = INθ + IPθ    (3)
INθ = (1 − P)·I/2    (4)
IPθ = P·I·cos²(θ − A)    (5)

where θ is the polarizing angle and Iθ is the intensity of the partially polarized light with the polarizer oriented at θ. Additionally, INθ is the natural light component, IPθ denotes the completely polarized component, I is the total intensity, P is the degree of polarization, and A represents the AOP.

The image of orthogonality difference ΔIθ is defined by

ΔIθ = Iθ − Iθ+π/2    (6)

where Iθ and Iθ+π/2 represent the infrared intensity images obtained at the polarizing angles θ and θ+π/2, respectively. Substituting the decomposition of partially polarized light, the natural light components cancel, giving

ΔIθ = P·I·cos 2(θ − A)    (7)

where P is the degree of polarization.

According to Eq. (7), the value of ΔIθ is determined by the total intensity I, the degree of polarization P, the AOP A, and the polarizing angle θ. After the orthogonality difference operation, the natural light component is eliminated and the polarization feature is preserved, which reflects the polarization characteristics of the objects. Comparing with the definitions of the Stokes parameters Q and U, it is evident that their computation is essentially an orthogonality difference: for θ=0, ΔIθ=Q, and for θ=45 deg, ΔIθ=U. Using the method of orthogonality difference, we can obtain the infrared intensity difference in any two orthogonal directions by selecting different polarizing angles.
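The cancellation behind Eq. (7) can be checked numerically. The sketch below (function names are ours) builds the partially polarized intensity from an unpolarized half plus a Malus-law term, and verifies that the orthogonality difference reduces to P·I·cos 2(θ−A):

```python
import numpy as np

def partial_intensity(total_i, p, aop, theta):
    """Intensity behind a polarizer at angle theta for partially polarized
    light: an unpolarized half plus a Malus-law term for the polarized part."""
    return 0.5 * (1.0 - p) * total_i + p * total_i * np.cos(theta - aop) ** 2

def orthogonality_difference(total_i, p, aop, theta):
    """I_theta - I_(theta + pi/2); the unpolarized part cancels,
    leaving P * I * cos 2(theta - A)."""
    return (partial_intensity(total_i, p, aop, theta)
            - partial_intensity(total_i, p, aop, theta + np.pi / 2.0))
```

With θ = 0 the difference reduces to Q = P·I·cos 2A, and with θ = 45 deg to U = P·I·sin 2A, matching the Stokes definitions.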

As shown in Fig. 1(a), the infrared radiation of the target and background is partially polarized, where the degree of polarization and AOP are 0.33, 30 deg, and 0.25, 90 deg, respectively. They show a large overlap area, and it is difficult to separate the target from the background. In Fig. 1(b), after decomposing partially polarization, the completely polarized component can be obtained, where the polarization characteristics of objects can be obtained. Figure 1(c) shows the orthogonality difference result. The target and background are distinctly separated, and it is easy to extract the respective features of the target and background polarization in this situation.

Fig. 1

Decomposition of partially polarized light. (a) Partially polarized light, (b) completely polarized component, and (c) result of the orthogonality difference.



Method of Polarization Feature Extraction

Using Eq. (7), we can obtain only polarization features in a single direction. Because the distribution of the surface orientations of the target and background is random and irregular in nature, it is difficult to obtain all valuable information using the polarization feature in only one direction. To obtain polarization features in all directions, a weighted-addition algorithm is proposed as follows:

If = Σ_{k=1}^{K} mθk·ΔIθk,  θk = k·Δθ    (8)

where If represents the image of extracted polarization features and mθ is the weighted coefficient in the direction of θ. Δθ represents the sampling interval, K = ⌊π/Δθ⌋, k = 1,2,…,K, and ⌊·⌋ represents the rounding operation. By setting the value of the weighted coefficient mθ, we can extract polarization features in specific directions. To improve the calculation speed, in this paper we set the sampling interval Δθ to 2 deg.
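The weighted addition can be sketched as below (an illustrative sketch; scalars stand in for the per-pixel images, and the function names are ours):

```python
import numpy as np

def polarization_feature(total_i, p, aop, weight_fn, dtheta_deg=2.0):
    """Weighted sum of orthogonality-difference images over polarizing
    angles sampled every dtheta_deg across (0, 180] deg.

    weight_fn maps a polarizing angle (radians) to the coefficient m_theta.
    """
    k_max = int(round(180.0 / dtheta_deg))  # K = floor(pi / dtheta)
    feature = 0.0
    for k in range(1, k_max + 1):
        theta = np.deg2rad(k * dtheta_deg)
        # orthogonality difference at this angle, Eq. (7)
        diff = p * total_i * np.cos(2.0 * (theta - aop))
        feature += weight_fn(theta) * diff
    return feature
```

With a constant weight mθ = 1 the cosine terms cancel over the half-turn and the feature vanishes, so it is the direction-selective weights that make the extracted features informative.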


Extraction of Polarization Features in Different Directions

Infrared polarization results from the infrared emitted effect and the reflected effect on the surface of objects.5,18 According to the Fresnel formulas,19,25

Rp = tan²(α1 − α2)/tan²(α1 + α2)    (9)
Rs = sin²(α1 − α2)/sin²(α1 + α2)    (10)

where Rp and Rs represent the parallel reflection rate and perpendicular reflection rate, respectively. n1 is the refraction rate of the incident medium and n2 is the refraction rate of the transmission medium; the two are related to the angles through Snell's law, n1 sin α1 = n2 sin α2. Moreover, α1 is the incident angle and α2 is the refraction angle.

The degree of polarization PR generated by the reflection is defined as16,24,25,26,27,28

PR = (Rs − Rp)/(Rs + Rp)    (11)

According to the energy conservation law and Kirchhoff’s law, when the transmission is negligible, the sum of the reflection rate R and emission rate ϵ is equal to one:19

ϵp = 1 − Rp,  ϵs = 1 − Rs    (12)
where ϵp and ϵs represent the parallel emission and perpendicular emission rates, respectively.

The degree of polarization PE generated by the emission is defined as16,19,25

PE = (ϵp − ϵs)/(ϵp + ϵs) = (Rs − Rp)/(2 − Rp − Rs)    (13)

According to the above analysis, we simulated the reflection and emission polarizations of an ideal steel plate. The parallel and perpendicular polarization states are, respectively, perpendicular and parallel to the plane of incidence defined by the propagation vector of the detected radiance and the surface normal.19,27,28,29
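A simplified version of this simulation can be sketched with the dielectric Fresnel formulas. This is a hedged stand-in: it uses a real refractive index, whereas a metal such as steel requires a complex index; the function names are ours.

```python
import numpy as np

def fresnel_reflectances(n1, n2, alpha1):
    """Dielectric Fresnel power reflectances for p (parallel) and
    s (perpendicular) polarization; alpha1 is the incidence angle in radians."""
    alpha2 = np.arcsin(n1 * np.sin(alpha1) / n2)  # Snell's law
    rp = (np.tan(alpha1 - alpha2) / np.tan(alpha1 + alpha2)) ** 2
    rs = (np.sin(alpha1 - alpha2) / np.sin(alpha1 + alpha2)) ** 2
    return rp, rs

def reflection_dop(rp, rs):
    """Degree of polarization of reflected radiation (s-dominant)."""
    return (rs - rp) / (rs + rp)

def emission_dop(rp, rs):
    """Degree of polarization of emitted radiation via Kirchhoff's law
    (emissivity = 1 - reflectance); p-dominant for a smooth surface."""
    ep, es = 1.0 - rp, 1.0 - rs
    return (ep - es) / (ep + es)
```

At Brewster's angle arctan(n2/n1) the p reflectance vanishes, so the reflected light is fully s-polarized, while the emitted polarization remains parallel-dominant, consistent with the trends in Fig. 2.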

The simulation results in Fig. 2 show that, in the case of reflected radiation, the perpendicular reflection rate (S-reflectivity) is greater than the parallel reflection rate (P-reflectivity), and the polarization is perpendicularly polarized. For emission radiation, the parallel emission rate (P-emissivity) is greater than the perpendicular emission rate (S-emissivity), and the polarization direction is predominantly parallel. Therefore, the respective polarization parallel and perpendicular components express the different characteristics of the scenario. By setting weighted coefficient mθ, we extract the polarization parallel feature image and perpendicular feature image, respectively.

  • 1. Extraction of polarization parallel component Ifp

    The image of the polarization parallel component can be extracted by setting the weighted coefficient mθ = abs(cos θ), where abs(·) is the absolute value operation; in particular, mθ = 1 for θ = 0 and mθ = 0 for θ = π/2:

    Ifp = Σ_{k=1}^{K} abs(cos θk)·ΔIθk    (14)
  • 2. Extraction of polarization perpendicular component Ifs

    The image of the polarization perpendicular component can be extracted by setting the weighted coefficient mθ = abs(sin θ); in particular, mθ = 0 for θ = 0 and mθ = 1 for θ = π/2:

    Ifs = Σ_{k=1}^{K} abs(sin θk)·ΔIθk    (15)
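The two weightings can be exercised together in a small sketch (scalar stand-ins for the images; the names are ours):

```python
import numpy as np

# Polarizing angles sampled every 2 deg over (0, 180] deg
thetas = np.deg2rad(np.arange(2.0, 180.1, 2.0))

def directional_feature(total_i, p, aop, weights):
    """Weighted sum of orthogonality-difference values over all sampled angles."""
    diffs = p * total_i * np.cos(2.0 * (thetas - aop))
    return float(np.sum(weights * diffs))

m_parallel = np.abs(np.cos(thetas))       # m(0) = 1, m(pi/2) = 0
m_perpendicular = np.abs(np.sin(thetas))  # m(0) = 0, m(pi/2) = 1
```

For radiation polarized along A = 0 the parallel feature responds positively while the perpendicular one goes negative (and the signs flip for A = π/2), so the two components separate the directional content of the scene.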

Fig. 2

Simulation curves of the polarization degree generated by reflection and emission of an ideal steel surface. (a) Reflected polarization and (b) emitted polarization.



Experiment on Extraction of Polarization Features

To validate the enhancement effect of extracting polarization features by the orthogonality difference method, two typical scenarios were recorded. The target shown in Fig. 3 is a car in an outdoor area; it was selected because the car target has rich details. The target in Fig. 4 is a house in a forest; it was selected because the vegetation background is complex.

Fig. 3

Infrared intensity images of car in outdoor areas. (a) Infrared intensity image, (b) 0-deg image, (c) 60-deg image, and (d) 120-deg image.


Fig. 4

Infrared intensity images of house in the forest. (a) Infrared intensity image, (b) 0-deg image, (c) 60-deg image, and (d) 120-deg image.


With the use of a long-wave infrared polarization detection system, we obtained infrared intensity images in the directions of 0 deg, 60 deg, and 120 deg. Owing to the blocking effect of the polarizer, the contrast of the target with respect to the background at the different polarizing angles decreased slightly in the polarization images compared with the original infrared intensity image. According to Eqs. (1) and (2), the degree of polarization (DOLP) images and the AOP images can be obtained, as shown in Fig. 5.

Fig. 5

Images of polarization information in different scenarios. (a) DOLP image of a car, (b) AOP image of a car, (c) DOLP image of a house, and (d) AOP image of a house.


As shown in Fig. 5(a), the degree of polarization of the car window is stronger than that of other objects. In Fig. 5(b), the polarization angle of the car wheels is different from that of other objects. Figures 5(c) and 5(d) show the degree of polarization and the AOP of a house. Because the degree of polarization and the AOP are related to the surface viewing angle, it is evident that the roof and different walls of the house have different states of polarization, which are different from that of the surrounding vegetation. Moreover, because the AOP is sensitive to noise clutter, it has a worse visual effect. Therefore, we used the method of orthogonal difference to extract the polarization parallel features and perpendicular features of the above scenarios.

Figure 6(a) shows the polarization parallel component image of a car in an outdoor area. Compared with the image of the degree of polarization, the image of the polarization parallel component has richer details and information. Figure 6(b) shows the polarization perpendicular component in this scenario. It is apparent that the plant and ground polarization is focused in the perpendicular direction, and the car window contrast is strong. Figures 6(c) and 6(d) show the polarization parallel component and perpendicular component of a house in a forest, respectively. In the parallel direction, the house is highlighted and background vegetation is suppressed. In the perpendicular direction, the vegetation and road have strong polarization. The road, in particular, cannot be observed in the infrared intensity image; however, it is distinct in the perpendicular component image.

Fig. 6

Polarization component images in different scenarios. (a) Parallel component image of a car, (b) perpendicular component image of a car, (c) parallel component image of a house, and (d) perpendicular component image of a house.


Fig. 7

Fusion process using NSST.


The above results indicate that the polarization features in both parallel and perpendicular directions can be effectively extracted using the proposed method of orthogonal difference. Furthermore, the method is helpful in obtaining more information about the target and background. Moreover, noise in the polarization feature image extracted using the method is less than that in the polarization angle image, and the edges are more distinct. Thus, the image extracted by the proposed method can be fused to enhance the target.


Fusion Strategy of Intensity Images and Polarization Images

A polarization image reflects the types of material, surface roughness, and detection angle, whereas the infrared intensity image reflects the radiation capability. Owing to the respective complementary attributes of these two image types, the NSST algorithm is applied to fuse them and thereby integrate their individual merits.21,30,31

As shown in Fig. 7, the classic fused strategy is adopted to fuse the extracted polarization parallel component image, extracted polarization perpendicular component image, and infrared intensity image.32


Low-Frequency Coefficients of Fused Image FIL(i,j)

The decomposed low-frequency components are the approximate parts of the image and reflect the energy distribution. The average rule is used to fuse the low-frequency components:

FIL(i,j) = [IIL(i,j) + IPL(i,j)]/2    (16)

where (i,j) denotes the pixel location, and IIL and IPL are the low-frequency coefficients of the infrared intensity image and the polarization image, respectively.


Fusion Strategy for High-Frequency Components

The high-frequency components reflect the details, textures, and edges of the image. The fusion rule of selecting the larger absolute value is applied to fuse the high-frequency components to simultaneously preserve the details of the infrared intensity and polarization images.

Considering the problem of noise in polarization images, we use an adaptive adjustment coefficient, w(i,j), to optimize the energy distribution and denoise the high-frequency components of the polarization feature images:

WPH(i,j) = w(i,j)·IPH(i,j)    (17)

where IIH and IPH are the high-frequency coefficients of the infrared intensity and polarization images, respectively, and WPH denotes the high-frequency coefficients of the adjusted polarization image.

The high-frequency coefficients of the fused image FIH(i,j) are calculated by selecting the coefficient with the larger absolute value:

FIH(i,j) = IIH(i,j), if abs[IIH(i,j)] ≥ abs[WPH(i,j)]
FIH(i,j) = WPH(i,j), otherwise    (18)
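The two fusion rules (averaging for low frequencies, larger absolute value for adjusted high frequencies) can be sketched independently of the NSST decomposition itself. The coefficient arrays are assumed to come from any multiscale transform, and w is a user-supplied adjustment map; the function names are ours:

```python
import numpy as np

def fuse_low(iil, ipl):
    """Average rule for the low-frequency (approximation) sub-bands."""
    return 0.5 * (iil + ipl)

def fuse_high(iih, iph, w):
    """Larger-absolute-value rule for high-frequency sub-bands, after
    scaling the polarization coefficients by the adaptive map w."""
    wph = w * iph
    return np.where(np.abs(iih) >= np.abs(wph), iih, wph)
```

Setting w below 1 in noisy regions suppresses polarization clutter before the max-absolute selection, which is the intent of the adaptive coefficient.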

Fusion Result with Polarization Features


Analysis of Fusion Result with Polarization Features

Figures 8(a) and 8(c) are the original infrared intensity images, whereas Figs. 8(b) and 8(d) are the images fused with the NSST algorithm. Figure 8(b) shows that the image fused with polarization features has richer details. In addition, it can be observed that the car windows are enhanced and the contrast of the stairs with respect to the ground is stronger than in the original intensity image. Figure 8(d) shows that the house and road targets are significantly enhanced. This is especially the case for the road in front of the house; it is embedded in the vegetation but can nevertheless be clearly viewed in the fused image. After fusing the polarization features, the contrast of the house target with regard to the vegetation background is improved.

Fig. 8

Contrast result of the fused images in different scenarios. (a) Intensity image of a car, (b) fused image of a car, (c) intensity image of a house, and (d) fused image of a house.


Table 1 lists the evaluation indices used to quantitatively evaluate the performance of the fused images: the target contrast with regard to the background (C, the ratio of the average gray value of the car/house to that of the background), the average gradient (AG), and the image entropy (E).33,34

Table 1

Comparison of evaluation indices between infrared images and fused images.

Types of scenarios  | Evaluation index | 0-deg image | 60-deg image | 120-deg image | Intensity image | Fused image
Car in outdoor area | C                | 0.0132      | 0.0165       | 0.0148        | 0.0473          | 0.0587
House in a forest   | C                | 0.0753      | 0.1089       | 0.0896        | 0.1912          | 0.3583

Bold values show the maximum of evaluation index.

The evaluation results demonstrate that, compared with the infrared intensity images, every fused image index is significantly improved. This is because the fused image combines the information of polarization features. Accordingly, the target is highlighted, the respective details in the intensity image and polarization feature images are retained, and the overall visual effect is enhanced.
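For reference, common definitions of the three indices can be computed as follows. This is a sketch under assumptions: the contrast follows the ratio definition given in the text, while the exact AG and E formulas used in the paper may differ slightly; the function names are ours.

```python
import numpy as np

def target_contrast(img, target_mask):
    """C: ratio of the average gray value inside the target mask to the
    average gray value of the background (as described in the text)."""
    return img[target_mask].mean() / img[~target_mask].mean()

def average_gradient(img):
    """AG: mean magnitude of the local gray-level gradient (one common form)."""
    g = img.astype(float)
    gx = g[:-1, 1:] - g[:-1, :-1]   # horizontal differences
    gy = g[1:, :-1] - g[:-1, :-1]   # vertical differences
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))

def image_entropy(img, levels=256):
    """E: Shannon entropy of the gray-level histogram, in bits."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist[hist > 0] / hist.sum()
    return float(-np.sum(p * np.log2(p)))
```

A constant image gives AG = 0 and E = 0, and a target twice as bright as its background gives C = 2, which makes the indices easy to sanity-check before applying them to real fused images.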


Comparison and Analysis Using Existing Methods

To further measure the performance of the fusion method, it is compared with two methods in Refs. 16 and 17. The results of the three methods are shown in Fig. 9 and Table 2.

Fig. 9

Fusion results of different methods. (a) Fusion result of a car by Ref. 16, (b) fusion result of a car by Ref. 17, (c) fusion result of a car by the proposed method, (d) fusion result of a house by Ref. 16, (e) fusion result of a house by Ref. 17, and (f) fusion result of a house by the proposed method.


Table 2

Comparison of evaluation indices using different fused methods.

Types of scenarios  | Evaluation index | Original intensity image | Method in Ref. 16 | Method in Ref. 17 | Proposed method
Car in outdoor area | C                | 0.0473                   | 0.0492            | 0.0485            | 0.0587
House in a forest   | C                | 0.1912                   | 0.2893            | 0.2617            | 0.3583

Bold values show the maximum of evaluation index.

It is clear that the proposed method achieves a better visual effect and higher contrast of the target in relation to the background. Although the fused method in Ref. 16 can enhance the image quality and produce a higher contrast ratio than the original intensity image, the fused images are affected by the random noise of the polarization degree image. The fused method in Ref. 17 can integrate the information of the intensity image and polarization degree image; nevertheless, it fails to improve the contrast ratio. The limitation of these two existing fusion methods is that the polarization difference mechanism is not thoroughly explored, which limits the improvement in contrast and image quality.

Our proposed method, on the other hand, fully explores the infrared polarization principle and polarization imaging mechanisms. It can thereby effectively extract polarization features in both parallel and perpendicular directions. By employing NSST, the respective extracted polarization parallel and perpendicular component images and the infrared intensity image are fused. Experimental results demonstrate that the contrast, average gradient, and image entropy of the fused image by the proposed method are higher than those of the other methods, thus validating its effectiveness in enhancing targets with the polarization features extracted using the orthogonality difference method.



Polarization imaging can reflect the inherent information of an object, such as the type of material and surface roughness, and different objects show different characteristics of polarization. The use of polarization features can enhance the target detection performance. In this paper, an orthogonal difference method based on the infrared reflected radiation and emitted radiation effects was proposed to extract the polarization features in different directions. Experimental results demonstrated that the proposed method effectively extracted respective polarization parallel and perpendicular features, which outperformed the degree of polarization and the AOP. By employing the NSST algorithm, we fused the intensity image, polarization parallel component image, and perpendicular component image. The fusion result demonstrated that every image evaluation index, including the target contrast with respect to the background, average gradient, and image entropy, was remarkably improved. This finding verifies that the proposed orthogonality difference method has better adaptability and usefulness in extracting features of polarization, and the resulting fused polarization image is effective in enhancing the information of target scenarios.


This paper was supported by the Natural Science Foundation of China (61302145) and Preresearch Fund for Weapons and Equipment (9140C800302KG01).


1. F. Snik et al., “Overview of polarimetric sensing techniques and technology with applications to different research fields,” Proc. SPIE 9099, 90990B (2014).PSISDG0277-786X https://doi.org/10.1117/12.2053245 Google Scholar

2. T. Shibata and M. Tanaka, “Versatile visible and near-infrared image fusion based on high visibility area selection,” J. Electron. Imaging 25(1), 013016 (2016).JEIME51017-9909 https://doi.org/10.1117/1.JEI.25.1.013016 Google Scholar

3. T. J. Rogne, “Passive target detection using polarized components of infrared signatures,” Proc. SPIE 1317, 242–251 (1990).PSISDG0277-786X https://doi.org/10.1117/12.22061 Google Scholar

4. K. P. Gurton and A. Dahmani, “Effect of surface roughness and complex indices of refraction on polarized thermal emission,” Appl. Opt. 44(26), 5361–5367 (2005). https://doi.org/10.1364/AO.44.005361 Google Scholar

5. Y. Zhang, Z. Shi and T. Qiu, “Infrared small target detection method based on decomposition of polarization information,” J. Electron. Imaging 26(3), 033004 (2017).JEIME51017-9909 https://doi.org/10.1117/1.JEI.26.3.033004 Google Scholar

6. B. Fougnie et al., “Measurement and computations of the polarized marine reflectance,” Proc. SPIE 4133, 191–201 (2000).PSISDG0277-786X https://doi.org/10.1117/12.406626 Google Scholar

7. M. Dubreuil, A. Alfalou and C. Brosseau, “Secure optical encryption of images using the Stokes-Mueller formalism,” Biophys. J. 102(3), 403 (2012).BIOJAU0006-3495 Google Scholar

8. W. Liu et al., “Scale-adaptive compressive tracking with feature integration,” J. Electron. Imaging 25(3), 033018 (2016).JEIME51017-9909 https://doi.org/10.1117/1.JEI.25.3.033018 Google Scholar

9. A. Alfalou and C. Brosseau, “Recent advances in optical image processing,” Prog. Opt. 60, 119–262 (2015).POPTAN0079-6638 https://doi.org/10.1016/bs.po.2015.02.002 Google Scholar

10. X. Chen et al., “Distinctive local surface descriptor for three-dimensional objects based on bispectrum of spherical harmonics,” J. Electron. Imaging 25(1), 013021 (2016).JEIME51017-9909 https://doi.org/10.1117/1.JEI.25.1.013021 Google Scholar

11. M. Dubreuil et al., “Exploring underwater target detection by imaging polarimetry and correlation techniques,” Appl. Opt. 52(5), 997 (2013).APOPAI0003-6935 https://doi.org/10.1364/AO.52.000997 Google Scholar

12. I. Leonard, A. Alfalou and C. Brosseau, “Sensitive test for sea mine identification based on polarization-aided image processing,” Opt. Express 21(24), 29283–29297 (2013).OPEXFF1094-4087 https://doi.org/10.1364/OE.21.029283 Google Scholar

13. Y. Y. Schechner and N. Karpel, “Recovery of underwater visibility and structure by polarization analysis,” IEEE J. Oceanic Eng. 30(3), 570–587 (2005).IJOEDY0364-9059 https://doi.org/10.1109/JOE.2005.850871 Google Scholar

14. E. Namer and Y. Y. Schechner, “Advanced visibility improvement based on polarization filtered images,” Proc. SPIE 5888, 588805 (2005).PSISDG0277-786X https://doi.org/10.1117/12.617464 Google Scholar

15. E. Namer, S. Shwartz and Y. Y. Schechner, “Skyless polarimetric calibration and visibility enhancement,” Opt. Express 17(2), 472 (2009).OPEXFF1094-4087 https://doi.org/10.1364/OE.17.000472 Google Scholar

16. S. Z. Lin et al., “Fusion of infrared intensity and polarization images using embedded multi-scale transform,” Optik 126(24), 5127–5133 (2015). https://doi.org/10.1016/j.ijleo.2015.09.154 Google Scholar

17. Z. Yue and F.M. Li, “An infrared polarization image fusion algorithm based on oriented Laplacian pyramid,” Proc. SPIE 9142, 914208 (2014).PSISDG0277-786X https://doi.org/10.1117/12.2054074 Google Scholar

18. C.A.W. Lentz et al., “Infrared polarization measurement of ship signature and background contrast,” Proc. SPIE 2223, 301–309 (1994).PSISDG0277-786X https://doi.org/10.1117/12.177924 Google Scholar

19. O. Sandus, “A review of emission polarization,” Appl. Opt. 4(12), 1634–1642 (1965). https://doi.org/10.1364/AO.4.001634 Google Scholar

20. M. I. Smith and J. P. Heather, “A review of image fusion technology in 2005,” Proc. SPIE 5782, 29–46 (2005).PSISDG0277-786X https://doi.org/10.1117/12.597618 Google Scholar

21. D. Labate et al., “Sparse multidimensional representation using Shearlets,” Proc. SPIE 5914, 59140U (2005).PSISDG0277-786X https://doi.org/10.1117/12.613494 Google Scholar

22. F. A. Sadjadi and C. S. L. Chun, “Remote sensing using passive infrared Stokes parameters,” Opt. Eng. 43(10), 2283–2291 (2004). https://doi.org/10.1117/1.1782614 Google Scholar

23. J. D. Howe et al., “Polarization sensing for target acquisition and mine detection,” Proc. SPIE 4133, 202–213 (2000).PSISDG0277-786X https://doi.org/10.1117/12.406627 Google Scholar

24. Y. Aron and Y. Gronau, “Polarization in the LWIR: a method to improve target aquisition,” Proc. SPIE 5783, 653–661 (2005). https://doi.org/10.1117/12.605316 Google Scholar

25. A. Resnick, C. Persons and G. Lindquist, “Polarized emissivity and Kirchhoff’s law,” Appl. Opt. 38(8), 1384–1387 (1999). https://doi.org/10.1364/AO.38.001384 Google Scholar

26. J. S. Tyo et al., “Review of passive imaging polarimetry for remote sensing applications,” Appl. Opt. 45(22), 5453–5469 (2006). https://doi.org/10.1364/AO.45.005453 Google Scholar

27. N. Kong, Y. W. Tai and S. Y. Shin, “High-quality reflection separation using polarized images,” IEEE Trans. Image Process. 20(12), 3393–3405 (2011).IPRSEW https://doi.org/10.1109/TIP.2011.2155080 Google Scholar

28. L. B. Wolff, “Polarization-based material classification from specular reflection,” IEEE Trans. Pattern Anal. Mach. Intell. 12(11), 1059–1071 (1990). https://doi.org/10.1109/34.61705 Google Scholar

29. J. A. Shaw, “Degree of linear polarization in spectral radiances from water-viewing infrared radiometers,” Appl. Opt. 38(15), 3157–3165 (1999).APOPAI0003-6935 https://doi.org/10.1364/AO.38.003157 Google Scholar

30. S. Z. Lin et al., “Fusion of infrared intensity and polarization images using embedded multi-scale transform,” Optik 126(24), 5127–5133 (2015). https://doi.org/10.1016/j.ijleo.2015.09.154 Google Scholar

31. W.W. Kong, “Technique for image fusion based on NSST domain INMF,” Optik 125, 2716–2722 (2014).OTIKAJ0030-4026 https://doi.org/10.1016/j.ijleo.2013.11.025 Google Scholar

32. G. Easly, D. Labate and W. Q. Lim, “Sparse directional image representations using the discrete Shearlet transform,” Appl. Comput. Harmon. Anal. 25(1), 25–46 (2008).ACOHE91063-5203 https://doi.org/10.1016/j.acha.2007.09.003 Google Scholar

33. Y. Zhang and T.-W. Qiu, “Infrared surface target enhancement based on virtual variational polarization,” Syst. Eng. Electron. 37(5), 992–997 (2015). Google Scholar

34. U. M. Braganeto and J. I. Goutsias, “Automatic target detection and tracking in forward-looking infrared image sequences using morphological connected operators,” J. Electron. Imaging 13(4), 802 (2004).JEIME51017-9909 https://doi.org/10.1117/1.1789982 Google Scholar


Jing-Hua Zhang received his BE degree in electrical engineering and its automation from Huazhong University of Science and Technology, in 2016. Currently, he is an ME student in the ATR Laboratory, NUDT. His research interests are infrared image processing and automatic target recognition.

Yan Zhang is a professor at the ATR Key Laboratory, National University of Defense Technology (NUDT). She received her BS and MS degrees in physics from NUDT in 1997 and 2003, respectively, and her PhD in 2008. She is the author of more than 30 journal papers. Her current research interests include optical image processing. She is a member of SPIE.

ZhiGuang Shi received his BE degree in automatic control from Shijiazhuang Mechanical Engineering College, Shijiazhuang, China, in 1996 and his ME and PhD degrees in information and telecommunication systems from NUDT, Changsha, China, in 2002 and 2007, respectively. His research interest includes ladar and infrared image processing, radar clutter modeling, and statistical analysis.

© The Authors. Published by SPIE under a Creative Commons Attribution 3.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Jing-Hua Zhang, Yan Zhang, and Zhiguang Shi, "Long-wave infrared polarization feature extraction and image fusion based on the orthogonality difference method," Journal of Electronic Imaging 27(2), 023021 (10 April 2018). https://doi.org/10.1117/1.JEI.27.2.023021
Received: 18 December 2017; Accepted: 19 March 2018; Published: 10 April 2018

