Infrared imaging technology is widely applied in military and civilian fields, such as remote sensing, military surveillance, and guidance, owing to its advantages of passive detection, strong antidisturbance ability, all-weather and full-time operation, and significant robustness.1 Traditional infrared intensity imaging exploits the temperature difference between the target and the background.1,2 However, if the background is complex or the temperature difference is small, traditional intensity-based imaging is limited. Infrared polarization imaging technology has developed rapidly in recent years.1 Compared with infrared intensity-based imaging, infrared polarization-based technology can provide more valuable information for the detection and discrimination of targets, including type of material and surface roughness.3 Different objects have different polarization characteristics. Human-made targets tend to have stronger polarization because their surfaces are smooth, whereas natural objects tend to be mostly unpolarized or to have polarization characteristics different from those of human-made ones.4–6 Therefore, infrared polarization technology has great potential in target detection applications.5
In recent years, optical processing of images has received considerable attention. Experimental breakthroughs have been achieved mainly in deep learning, three-dimensional holography, pattern recognition, image compression, image encryption, etc.7–10 As a new kind of image processing technology, polarization imaging can be used to enhance image quality.3 Polarization studies recently conducted on visible-light and color images have achieved improved effectiveness. Dubreuil et al.11 and Leonard et al.12 conducted experiments on ocean optics and underwater targets based on polarization-aided image processing, and impressive results were achieved in the detection and identification of sea mines. Schechner et al.13–15 exploited imaging technology in poor visibility conditions and found that polarization imaging can be used to enhance visibility. Their polarization experiments under poor atmospheric and underwater conditions proved the effectiveness of polarization imaging in visibility improvement. In addition to visible-light polarization, infrared polarization has also been developed for multiple applications, particularly in the fusion of intensity imaging and polarization imaging. Lin et al.16 investigated the enhancement of an infrared image using polarization information. They proposed a fusion method for infrared polarization and intensity images using an embedded multiscale transform. Experimental results showed that their method could add the polarization characteristics to the original intensity image and increase the image contrast and clarity. Yue and Li17 presented a method based on oriented Laplacian pyramid fusion of the infrared radiation intensity image and the degree-of-polarization image, and the results showed that the method could increase the amount of information in the intensity image.
Despite the above advancements, these existing fusion methods directly employ the degree of polarization (DOLP) and angle of polarization (AOP) to enhance the target. The enhancement effect is limited because the DOLP and AOP are sensitive to noise clutter and to the detection angle.4,5 Furthermore, images of the DOLP and AOP alone cannot provide comprehensive information about the given scenarios.
A method of orthogonality difference based on the decomposition of partial infrared polarization is proposed to extract polarization features in different directions. The direction of polarization generated by infrared emitted radiation is predominantly parallel, whereas that generated by infrared reflected radiation is mainly perpendicular.18,19 The respective parallel and perpendicular polarization components express different characteristics of the scenario. Extracting the parallel and perpendicular polarization feature images thus yields valuable information about the scenario.
The nonsubsampled shearlet transform (NSST) is an optimal multiscale decomposition method.20 It can effectively express image details, such as lines, edges, and contours.21 Infrared intensity imaging and polarization imaging provide different yet complementary information about objects. By employing NSST, the extracted parallel and perpendicular polarization component images and the infrared intensity images are fused, so that the respective merits of the infrared intensity and polarization images are effectively combined. Experimental results demonstrate that the target is significantly enhanced in the fused image, which shows stronger contrast, more details, and an overall better visual effect.
Polarization Theory and Stokes Vector
A light wave can be decomposed into two components orthogonal to the propagation direction. The difference between these two orthogonal components results in the polarization of light. The state of polarization can be described by the Stokes vector, which comprises four parameters: I, Q, U, and V.22 Generally, most polarization phenomena in nature are linearly polarized, and the circular component V is so small that it is usually ignored. The first three Stokes parameters can be derived from intensity measurements in the polarization directions 0 deg, 60 deg, and 120 deg, as shown in Eq. (1).23
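Since the body of Eq. (1) did not survive in this copy, the derivation can be illustrated with the standard three-angle relations for analyzer orientations of 0 deg, 60 deg, and 120 deg. The following is a minimal sketch (function names are illustrative, not from the paper):

```python
import numpy as np

def stokes_from_three(i0, i60, i120):
    """Recover the first three Stokes parameters (I, Q, U) from
    intensity images taken behind a linear polarizer oriented at
    0, 60, and 120 deg (standard three-angle relations)."""
    I = 2.0 / 3.0 * (i0 + i60 + i120)
    Q = 2.0 / 3.0 * (2.0 * i0 - i60 - i120)
    U = 2.0 / np.sqrt(3.0) * (i60 - i120)
    return I, Q, U

def dolp_aop(I, Q, U):
    """Degree of linear polarization and angle of polarization."""
    dolp = np.sqrt(Q**2 + U**2) / np.maximum(I, 1e-12)
    aop = 0.5 * np.arctan2(U, Q)  # radians, in (-pi/2, pi/2]
    return dolp, aop
```

The DOLP and AOP quantities referred to later in the text follow directly from these parameters as sqrt(Q^2 + U^2)/I and arctan(U/Q)/2.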
Extraction of Polarization Features Based on Orthogonal Difference
Algorithm of Orthogonal Difference
The infrared polarization generated by the reflected and emitted effects is partially polarized. It can be decomposed into a natural-light (unpolarized) component and a completely polarized component. Compared with the unpolarized component, the latter can provide more valuable information with respect to the detection and discrimination of the target.
The image of orthogonality difference is defined by
As shown in Fig. 1(a), the infrared radiation of the target and background is partially polarized; the degree of polarization and AOP are 0.33 and 30 deg for the target and 0.25 and 90 deg for the background, respectively. The two distributions show a large overlap area, and it is difficult to separate the target from the background. In Fig. 1(b), after the partially polarized light is decomposed, the completely polarized component is obtained, from which the polarization characteristics of the objects can be derived. Figure 1(c) shows the orthogonality difference result. The target and background are distinctly separated, and it is easy to extract the respective polarization features of the target and background in this situation.
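Since the defining equation of the orthogonality difference is not reproduced above, the sketch below shows one plausible numerical reading under standard assumptions (an ideal analyzer and Malus's law); it is illustrative, not necessarily the paper's exact formulation:

```python
import numpy as np

def orthogonality_difference(I, dolp, aop, theta):
    """Partially polarized radiance splits into an unpolarized part
    (1 - dolp) * I and a fully polarized part Ip = dolp * I. Behind an
    ideal analyzer at angle theta, the polarized part obeys Malus's law,
    Ip * cos^2(theta - aop); the unpolarized part contributes equally at
    any analyzer angle and cancels when two orthogonal angles are
    subtracted, leaving Ip * cos(2 * (theta - aop))."""
    Ip = dolp * I
    t_par = Ip * np.cos(theta - aop) ** 2                # analyzer at theta
    t_perp = Ip * np.cos(theta + np.pi / 2 - aop) ** 2   # orthogonal analyzer
    return t_par - t_perp
```

With the values quoted for Fig. 1 and the analyzer set to 30 deg, the target (DOLP 0.33, AOP 30 deg) yields +0.33·I while the background (0.25, 90 deg) yields −0.125·I; the opposite signs are consistent with the clean separation shown in Fig. 1(c).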
Method of Polarization Feature Extraction
Using Eq. (7), we can obtain polarization features in only a single direction. Because the distribution of the surface orientations of the target and background is random and irregular in nature, it is difficult to obtain all valuable information using the polarization feature in only one direction. To obtain the polarization features in all directions, a weighted-addition algorithm is proposed as follows:
Extraction of Polarization Features in Different Directions
According to the energy conservation law and Kirchhoff's law, when the transmission is negligible, the sum of the reflection rate and the emission rate is equal to one.19
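This relation can be checked numerically: for a smooth opaque surface, the perpendicular (s) and parallel (p) reflectances follow from the Fresnel equations, and the corresponding directional emissivities from Kirchhoff's law. The complex refractive index used below is an assumed illustrative value, not measured steel data:

```python
import numpy as np

def fresnel_reflectance(n, theta_i):
    """Power reflectances R_s (perpendicular) and R_p (parallel) of a
    smooth surface with complex refractive index n at incidence angle
    theta_i (radians), from the Fresnel equations."""
    cos_i = np.cos(theta_i)
    root = np.sqrt(n**2 - np.sin(theta_i) ** 2 + 0j)  # complex branch for metals
    r_s = (cos_i - root) / (cos_i + root)
    r_p = (n**2 * cos_i - root) / (n**2 * cos_i + root)
    return abs(r_s) ** 2, abs(r_p) ** 2

def directional_emissivity(n, theta_i):
    """Kirchhoff's law with negligible transmission: emissivity = 1 - reflectance."""
    R_s, R_p = fresnel_reflectance(n, theta_i)
    return 1.0 - R_s, 1.0 - R_p  # (perpendicular, parallel) emissivities
```

For any such surface, R_s exceeds R_p away from normal incidence, so reflected radiation is perpendicularly polarized, while the emissivities obey the opposite ordering and emitted radiation is parallel polarized, consistent with the simulation described next.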
According to the above analysis, we simulated reflection and emission polarizations of ideal metal steel plates, respectively. The parallel and perpendicular polarization states are, respectively, perpendicular and parallel to the plane of incidence defined by the propagation vector of the detected radiance and the surface normal.19,27–29
The simulation results in Fig. 2 show that, in the case of reflected radiation, the perpendicular reflection rate is greater than the parallel reflection rate, so the reflected radiation is perpendicularly polarized. For emitted radiation, the parallel emission rate is greater than the perpendicular emission rate, so the polarization direction is predominantly parallel. Therefore, the respective parallel and perpendicular polarization components express different characteristics of the scenario. By setting the weighted coefficient, we extract the parallel and perpendicular polarization feature images, respectively.
1. Extraction of polarization parallel component
The image of the polarization parallel component can be extracted by setting the weighted coefficient appropriately, with an absolute-value operation applied to the orthogonality difference,
2. Extraction of polarization perpendicular component
The image of the polarization perpendicular component can be extracted analogously by setting the weighted coefficient,
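The weighted-coefficient equations themselves did not survive extraction here, so the following is only a plausible sketch of the idea: where the orthogonality difference is positive, the radiance is polarized along the analyzed direction (parallel feature), and where it is negative, it is polarized orthogonally to it (perpendicular feature):

```python
import numpy as np

def split_polarization_features(od):
    """Split an orthogonality-difference image into a parallel feature
    image (its positive part) and a perpendicular feature image (the
    magnitude of its negative part). This sign-based split is an
    illustrative stand-in for the paper's weighted-coefficient scheme."""
    parallel = np.maximum(od, 0.0)
    perpendicular = np.maximum(-od, 0.0)
    return parallel, perpendicular
```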
Experiment on Extraction of Polarization Features
To validate the enhancement effect of extracting polarization features by the orthogonality difference method, two typical scenarios were recorded. The target shown in Fig. 3 is a car in an outdoor area; it was selected because the car target has rich details. The target in Fig. 4 is a house in a forest; it was selected because the vegetation background is complex.
With the use of a long-wave infrared polarization detection system, we obtained infrared intensity images in the directions of 0 deg, 60 deg, and 120 deg. Owing to the blocking effect of the polarizer, the contrast of the target with respect to the background at the different polarizing angles decreased slightly in the polarization images compared with the original infrared intensity image. According to Eqs. (1) and (2), the degree of polarization (DOLP) images and the AOP images can be obtained, as shown in Fig. 5.
As shown in Fig. 5(a), the degree of polarization of the car window is stronger than that of other objects. In Fig. 5(b), the polarization angle of the car wheels is different from that of other objects. Figures 5(c) and 5(d) show the degree of polarization and the AOP of a house. Because the degree of polarization and the AOP are related to the surface viewing angle, it is evident that the roof and different walls of the house have different states of polarization, which differ from those of the surrounding vegetation. Moreover, because the AOP is sensitive to noise clutter, the AOP image has a poorer visual effect. Therefore, we used the method of orthogonal difference to extract the parallel and perpendicular polarization features of the above scenarios.
Figure 6(a) shows the polarization parallel component image of a car in an outdoor area. Compared with the image of the degree of polarization, the image of the polarization parallel component has richer details and information. Figure 6(b) shows the polarization perpendicular component in this scenario. It is apparent that the plant and ground polarization is focused in the perpendicular direction, and the car window contrast is strong. Figures 6(c) and 6(d) show the polarization parallel component and perpendicular component of a house in a forest, respectively. In the parallel direction, the house is highlighted and background vegetation is suppressed. In the perpendicular direction, the vegetation and road have strong polarization. The road, in particular, cannot be observed in the infrared intensity image; however, it is distinct in the perpendicular component image.
The above results indicate that the polarization features in both parallel and perpendicular directions can be effectively extracted using the proposed method of orthogonal difference. Furthermore, the method is helpful in obtaining more information about the target and background. Moreover, noise in the polarization feature image extracted using the method is less than that in the polarization angle image, and the edges are more distinct. Thus, the image extracted by the proposed method can be fused to enhance the target.
Fusion Strategy of Intensity Images and Polarization Images
A polarization image reflects the types of material, surface roughness, and detection angle, whereas the infrared intensity image reflects the radiation capability. Owing to the respective complementary attributes of these two image types, the NSST algorithm is applied to fuse them and thereby integrate their individual merits.21,30,31
As shown in Fig. 7, the classic fused strategy is adopted to fuse the extracted polarization parallel component image, extracted polarization perpendicular component image, and infrared intensity image.32
Low-Frequency Coefficients of Fused Image
The decomposed low-frequency components approximate the image and reflect its energy distribution. The averaging rule is used to fuse the low-frequency components.
Fusion Strategy for High-Frequency Components
The high-frequency components reflect the details, textures, and edges of the image. The fusion rule of selecting the coefficient with the larger absolute value is applied to the high-frequency components to simultaneously preserve the details of the infrared intensity and polarization images.
Considering the problem of noise in polarization images, we use an adaptive adjustment coefficient to optimize the energy distribution and denoise the high-frequency components of the polarization feature images.
The high-frequency coefficients of the fused image are calculated by
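The two fusion rules can be sketched with a simple blur-based two-band split standing in for the NSST decomposition (a real implementation would use nonsubsampled shearlet filter banks and include the adaptive denoising coefficient, both omitted here):

```python
import numpy as np

def blur(img, k=5):
    """Separable box blur used as a crude low-pass stand-in for the
    NSST low-frequency band."""
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, out)

def fuse(images):
    """Fuse images with the two rules described above: average the
    low-frequency bands, and keep the high-frequency coefficient with
    the larger absolute value at each pixel."""
    lows = [blur(im) for im in images]
    highs = [im - lo for im, lo in zip(images, lows)]
    low_f = np.mean(lows, axis=0)             # averaging rule (low frequency)
    stack = np.stack(highs)
    idx = np.argmax(np.abs(stack), axis=0)    # max-absolute rule (high frequency)
    high_f = np.take_along_axis(stack, idx[None], axis=0)[0]
    return low_f + high_f
```

Fusing an image with itself returns the image unchanged, which is a quick consistency check on the rule pair.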
Fusion Result with Polarization Features
Analysis of Fusion Result with Polarization Features
Figures 8(a) and 8(c) are the original infrared intensity images, whereas Figs. 8(b) and 8(d) are the images fused with the NSST algorithm. Figure 8(b) shows that the image fused with polarization features has richer details. In addition, it can be observed that the car windows are enhanced and the contrast of the stairs with respect to the ground is stronger than in the original intensity image. Figure 8(d) shows that the house and road targets are significantly enhanced. This is especially the case for the road in front of the house; it is embedded in the vegetation but can nevertheless be clearly viewed in the fused image. After fusing the polarization features, the contrast of the house target with regard to the vegetation background is improved.
The evaluation indices used to quantitatively evaluate the performance of the fused images are the target contrast (C) with regard to the background (the ratio of the average gray value of the car/house to that of the background), the average gradient (AG), and the image entropy;33,34 the results are shown in Table 1.
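For reference, the three indices can be computed as sketched below; the contrast follows the ratio definition quoted above, although the exact normalization behind the small C values in Table 1 is not recoverable from this copy:

```python
import numpy as np

def contrast(img, target_mask):
    """Target-to-background contrast: ratio of mean gray levels.
    The paper's exact normalization may differ."""
    return img[target_mask].mean() / img[~target_mask].mean()

def average_gradient(img):
    """Mean root-mean-square of horizontal and vertical differences."""
    gx = np.diff(img, axis=1)[:-1, :]
    gy = np.diff(img, axis=0)[:, :-1]
    return np.mean(np.sqrt((gx**2 + gy**2) / 2.0))

def entropy(img, bins=256):
    """Shannon entropy of the gray-level histogram, in bits."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))
```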
Table 1. Comparison of evaluation indices between infrared images and fused images.

| Type of scenario | Evaluation index | 0-deg image | 60-deg image | 120-deg image | Intensity image | Fused image |
| --- | --- | --- | --- | --- | --- | --- |
| Car in outdoor area | C | 0.0132 | 0.0165 | 0.0148 | 0.0473 | **0.0587** |
| House in a forest | C | 0.0753 | 0.1089 | 0.0896 | 0.1912 | **0.3583** |

Bold values show the maximum of each evaluation index.
The evaluation results demonstrate that, compared with the infrared intensity images, every fused image index is significantly improved. This is because the fused image combines the information of polarization features. Accordingly, the target is highlighted, the respective details in the intensity image and polarization feature images are retained, and the overall visual effect is enhanced.
Comparison and Analysis Using Existing Methods
Comparison of evaluation indices using different fusion methods.

| Type of scenario | Evaluation index | Original intensity image | Method in Ref. 16 | Method in Ref. 17 | Proposed method |
| --- | --- | --- | --- | --- | --- |
| Car in outdoor area | C | 0.0473 | 0.0492 | 0.0485 | **0.0587** |
| House in a forest | C | 0.1912 | 0.2893 | 0.2617 | **0.3583** |

Bold values show the maximum of each evaluation index.
It is clear that the proposed method achieves a better visual effect and higher contrast of the target in relation to the background. Although the fusion method in Ref. 16 can enhance the image quality and produce a higher contrast ratio than the original intensity image, its fused images are affected by the random noise of the polarization degree image. The fusion method in Ref. 17 can integrate the information of the intensity image and the polarization degree image; nevertheless, it fails to improve the contrast ratio. The limitation of these two existing fusion methods is that the polarization difference mechanism is not thoroughly explored, which limits the improvement in contrast and image quality.
Our proposed method, on the other hand, fully explores the infrared polarization principle and polarization imaging mechanisms. It can thereby effectively extract polarization features in both parallel and perpendicular directions. By employing NSST, the respective extracted polarization parallel and perpendicular component images and the infrared intensity image are fused. Experimental results demonstrate that the contrast, average gradient, and image entropy of the fused image by the proposed method are higher than those of the other methods, thus validating its effectiveness in enhancing targets with the polarization features extracted using the orthogonality difference method.
Polarization imaging can reflect the inherent information of an object, such as the type of material and surface roughness, and different objects show different characteristics of polarization. The use of polarization features can enhance the target detection performance. In this paper, an orthogonal difference method based on the infrared reflected radiation and emitted radiation effects was proposed to extract the polarization features in different directions. Experimental results demonstrated that the proposed method effectively extracted respective polarization parallel and perpendicular features, which outperformed the degree of polarization and the AOP. By employing the NSST algorithm, we fused the intensity image, polarization parallel component image, and perpendicular component image. The fusion result demonstrated that every image evaluation index, including the target contrast with respect to the background, average gradient, and image entropy, was remarkably improved. This finding verifies that the proposed orthogonality difference method has better adaptability and usefulness in extracting features of polarization, and the resulting fused polarization image is effective in enhancing the information of target scenarios.
This paper was supported by the Natural Science Foundation of China (61302145) and Preresearch Fund for Weapons and Equipment (9140C800302KG01).
JingHua Zhang received his BE degree in electrical engineering and its automation from Huazhong University of Science and Technology, in 2016. Currently, he is an ME student in the ATR Laboratory, NUDT. His research interests are infrared image processing and automatic target recognition.
Yan Zhang is a professor at the ATR Key Laboratory, National University of Defense Technology (NUDT). She received her BS and MS degrees in physics from NUDT in 1997 and 2003, respectively, and her PhD in 2008. She is the author of more than 30 journal papers. Her current research interests include optical image processing. She is a member of SPIE.
ZhiGuang Shi received his BE degree in automatic control from Shijiazhuang Mechanical Engineering College, Shijiazhuang, China, in 1996 and his ME and PhD degrees in information and telecommunication systems from NUDT, Changsha, China, in 2002 and 2007, respectively. His research interests include ladar and infrared image processing, radar clutter modeling, and statistical analysis.