(lowest mean square error = 0.51 and highest peak signal-to-noise ratio = 117.59) to the images captured by the thermal imager. Therefore, the proposed model can effectively simulate infrared image textures at different distances.

## 1.

## Introduction

Infrared texture is an important feature for identifying scenery and has been used in various applications such as target detection, precision guidance, and three-dimensional scene simulation.^{1–3} Infrared texture generation has been studied for decades, but because of security considerations, progress on the topic has seldom been reported in the public literature.

The few published papers on infrared texture reveal two methods for generating infrared image texture: infrared texture simulation based on visible light texture^{4–6} and the random field model.^{7–10} The former uses Planck’s equation to calculate the infrared radiation energy of each object in the scene, and the energy value is then mapped to a specific gray level. The deviation of this gray level is computed from the gray-level variations in the visible image. The final infrared image texture is obtained from the specific gray level and its deviation. This simulation method can be adapted for a large-scale scene that needs only a low amount of detail, but it is not suitable for a scene that requires a high amount of detail, because infrared and visible textures have different principles of formation. The other simulation method, based on a random field model, e.g., the long correlation model^{7} or the Markov random field model,^{8–10} can also generate infrared image texture. However, it requires a large number of model parameter tests to determine the proper parameters, and it is highly complex and has low fidelity.

To simulate infrared image texture at different distances, the simulated image is transformed by zooming in and out. The two simulation methods mentioned above do not account for the attenuation of high frequencies or the variation in the temperature difference of scenery details caused by atmospheric transmission at different distances. Consequently, the transformation of the simulated images obtained by these two methods is not reliable when the distance changes. Based on the image multiresolution pyramid principle, we propose an infrared image texture generation model based on scenery spatial frequency to generate infrared image texture at different distances. First, we calculate the scenery spatial frequency at a specific distance using the Nyquist frequency of the detector, and we then use the calculated scenery spatial frequency as the cut-off frequency to build a distance-based filter model. We use this filter to process the “zero”-distance infrared image texture captured by the thermal imager and downsample the filtered image. Second, given that the actual temperature difference corresponding to different scenery texture details changes with distance because of the atmospheric transmission effect, we compare the changed temperature difference with the minimum resolvable temperature difference (MRTD) of the thermal imaging system. The comparative results are used to build a filter based on MRTD to decide whether each frequency should be retained. Finally, after performing these two filtering steps, we obtain the final image texture.

Section 2 introduces the infrared image texture model based on scenery spatial frequency, Sec. 3 presents the experimental results and discussion, and Sec. 4 gives the conclusion of the paper.

## 2.

## Thermal Infrared Image Texture Generation Model Based on Scenery Spatial Frequency

## 2.1.

### Frequency Pyramid Principle of Imaging

An image pyramid is a series of images arranged in a pyramidal structure, which is effective for multiresolution image representation (Fig. 1). The size and resolution of the images gradually decrease from the bottom image to the top image of the pyramid. The size of the base layer $J$ (the original image) is $N\times N$ or ${2}^{J}\times {2}^{J}$, where $J={\mathrm{log}}_{2}N$. The size of peak layer 0 is $1\times 1$, i.e., a single pixel. The size of layer $j$ is ${2}^{j}\times {2}^{j}$, where $0\le j\le J$. Therefore, a multiresolution pyramid is formed by starting with the $N\times N$ original image and halving the image size at each successively smaller layer.

In a photoelectric imaging system with a fixed number of pixels, when the distances change, the imaging process becomes a series of multiresolution displays. Therefore, the generation of infrared image textures at different distances is equivalent to the formation of an image pyramid: the “zero”-distance infrared image is the bottom image in the pyramid and has the highest resolution. The effects of distance and atmospheric transmission on the scenery infrared textures are equivalent to low-pass filtering and the process of downsampling in the image pyramid. A series of infrared image textures of different sizes and resolutions can be obtained by repeated filtering and downsampling. The filters are based on distance and MRTD. All of the filtering processes act on the “zero”-distance infrared image.
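The repeated filter-and-downsample process described above can be sketched as a simple pyramid builder. This is a minimal NumPy illustration that assumes a 2×2 box filter as the low-pass step; the actual model uses the distance- and MRTD-based filters developed in the following subsections.

```python
import numpy as np

def build_pyramid(image, levels):
    """Build a multiresolution pyramid: each level is a 2x2 box-filtered,
    2x-downsampled copy of the previous one (base layer J down to layer 0)."""
    pyramid = [image]
    for _ in range(levels):
        img = pyramid[-1]
        h, w = img.shape
        img = img[:h - h % 2, :w - w % 2]  # trim odd edges if any
        # Box filter + downsample in one step: average non-overlapping 2x2 blocks
        smaller = img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(smaller)
    return pyramid

base = np.arange(64, dtype=float).reshape(8, 8)  # "zero"-distance image, J = 3
pyr = build_pyramid(base, 3)
print([p.shape for p in pyr])  # [(8, 8), (4, 4), (2, 2), (1, 1)]
```

Each level halves the linear size, matching the ${2}^{j}\times {2}^{j}$ layer structure of the pyramid.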

## 2.2.

### Spatial Frequency Filter Based on Distance

The results of the scenery imaging on the detector are shown in Fig. 2, where $h$ and $w$ are the height and width of the scenery, $O$ is the optical center, ${f}_{0}$ is the focal length of the infrared imaging system, and ${p}_{h}\times {p}_{w}$ and ${p}_{{h}^{\prime}}\times {p}_{{w}^{\prime}}$ are the image sizes at distances ${L}_{0}$ and $L$, respectively.
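The geometry of Fig. 2 can be sketched with the standard pinhole model, under which the imaging size on the detector scales as ${f}_{0}/L$ and the detector sample frequency is the reciprocal of the pixel pitch. The pixel pitch, focal length, and scenery dimensions below are assumed example values, not the paper's experimental parameters.

```python
def detector_sample_freqs(d_w, d_h):
    """Horizontal/vertical sample frequencies f_w, f_h of the detector
    (cycles per unit length), the reciprocal of the pixel pitch."""
    return 1.0 / d_w, 1.0 / d_h

def imaging_size(h, w, f0, L):
    """Image height/width on the detector for scenery of size h x w
    at distance L (pinhole model: p = f0 * s / L)."""
    return f0 * h / L, f0 * w / L

# Assumed example: 25 um pixels, 50 mm focal length,
# scenery 1.8 m tall and 0.6 m wide at 10 m
f_w, f_h = detector_sample_freqs(25e-6, 25e-6)
p_h, p_w = imaging_size(1.8, 0.6, 0.05, 10.0)
print(p_h, p_w)  # image size on the detector (m); about 9 mm x 3 mm
```

As $L$ grows, the image occupies fewer detector pixels, which is why the resolvable cut-off frequency of the scenery decreases with distance.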

## 2.2.1.

#### Frequency filter model based on distance

For an infrared imaging system with a fixed number of pixels, the ability to distinguish scenery details decreases with increasing distance. The cut-off frequency ${D}_{L}$ is the highest frequency that the detector can distinguish at distance $L$; it determines the level of detail of the scenery at $L$ and is calculated from $L$. A filter model based on the cut-off frequency is then built and used to process the “zero”-distance image. We call this filter a spatial frequency filter, denoted by ${H}_{S}$, and define it as

$${H}_{S}=\begin{cases}1,& {f}_{L}\le {D}_{L}\\ 0,& {f}_{L}>{D}_{L}\end{cases},$$

where ${f}_{L}$ is the spatial frequency of the “zero”-distance image and ${D}_{L}$ is the cut-off frequency of the image at distance $L$. We apply the Fourier transform $F(u,v)$ to the “zero”-distance image of size $M\times N$:

$$F(u,v)=\sum _{x=0}^{M-1}\sum _{y=0}^{N-1}f(x,y){e}^{-j2\pi (ux/M+vy/N)},$$

where $f(x,y)$ is the gray value at $(x,y)$ in the “zero”-distance image. Then, the filtered image $G(u,v)$ in the frequency domain is calculated as

$$G(u,v)={H}_{S}(u,v)\cdot F(u,v).$$

The spatial-domain image ${g}_{p}(x,y)$ is obtained by applying the inverse Fourier transform to $G(u,v)$:

where ${\mathcal{F}}^{-1}$ is the inverse Fourier transform. The image size (${p}_{{h}^{\prime}}\times {p}_{{w}^{\prime}}$) of the scenery at distance $L$ is determined by the relationship between the location of the scenery and the detector, as shown in Fig. 2. Then ${g}_{p}(x,y)$ is filtered again using the downsampling window:
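The chain above — forward transform, ideal low-pass mask at the cut-off frequency, inverse transform — can be sketched in NumPy. The test image and cut-off value are placeholders; this is an illustration of the filtering principle, not the authors' implementation.

```python
import numpy as np

def spatial_frequency_filter(image, cutoff):
    """Apply an ideal low-pass filter H_S in the frequency domain:
    H_S = 1 where the radial spatial frequency f_L <= cutoff D_L, else 0."""
    F = np.fft.fftshift(np.fft.fft2(image))  # F(u, v), zero frequency centered
    M, N = image.shape
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    f_L = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)  # radial frequency grid
    H_S = (f_L <= cutoff).astype(float)               # ideal low-pass mask
    G = H_S * F                                       # G(u, v) = H_S(u, v) * F(u, v)
    g_p = np.real(np.fft.ifft2(np.fft.ifftshift(G)))  # back to the spatial domain
    return g_p

img = np.random.default_rng(0).random((64, 64))
filtered = spatial_frequency_filter(img, cutoff=8.0)
```

Lowering `cutoff` discards more high-frequency detail, mimicking the loss of resolvable texture at greater distances.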

## 2.2.2.

#### Image cut-off spatial frequency based on distance

The horizontal and vertical sample frequencies, ${f}_{w}$ and ${f}_{h}$, respectively, of the detector are expressed as

where ${d}_{w}$ and ${d}_{h}$ are the width and height of a detector pixel. The imaging height (${p}_{{h}^{\prime}}$) and width (${p}_{{w}^{\prime}}$) on the detector at distance $L$ are expressed as

where ${f}_{0}$ is the focal length of the infrared imaging system and $h$ and $w$ are the height and width of the scenery. The cut-off spatial frequencies of the image at $L$ are determined by the relationship between the scenery and the detector and are defined as

where ${D}_{Lh}$ and ${D}_{Lw}$ are the vertical and horizontal cut-off spatial frequencies of the image at $L$, and ${h}_{\text{dect}}$ and ${w}_{\text{dect}}$ are the height and width of the image on the detector plane.

## 2.3.

### Thermal Infrared Image Texture Filter Based on MRTD

## 2.3.1.

#### Infrared image texture filter model based on MRTD

For scenery with a single spatial frequency $f$, such as a bar target, the atmospheric transmission affects the temperature difference between the target and the background. If the actual temperature difference is still greater than the $\mathrm{MRTD}(f)$ of the thermal imaging system after considering the atmospheric transmission, the thermal imaging system can distinguish the details of the frequency $f$. Otherwise, the details of $f$ will not be distinguished and the image will become blurry. This yields the following formula:^{11}

In reality, the scenery contains different levels of detail, and the spatial frequency of the infrared image is a frequency range, not one fixed value. Therefore, it is necessary to calculate the actual temperature differences for the different spatial frequencies of the image at distance $L$. Comparing the actual temperature differences of different spatial frequencies with $\mathrm{MRTD}(f)$ determines whether the details of the image with frequency $f$ can be discriminated. If the thermal imaging system can distinguish the scenery details with a frequency $f$ at distance $L$, the following condition must be met:

## (12)

$$\Delta T(f)\cdot \tau (L)\ge \mathrm{MRTD}(f),$$

where $\Delta T(f)$ is the mean temperature difference for frequency $f$ in the image, and $\tau (L)$ is the atmospheric transmissivity at distance $L$, which can be calculated using the program MODTRAN.^{11} A temperature filter model ${H}_{t}$ based on the MRTD, according to Eq. (12), is defined as

## (13)

$${H}_{t}=\begin{cases}1,& \Delta T(f)\cdot \tau (L)\ge \mathrm{MRTD}(f)\\ 0,& \Delta T(f)\cdot \tau (L)<\mathrm{MRTD}(f)\end{cases}.$$

We denote the Fourier transform of the filtered result ${g}_{p}({x}^{\prime},{y}^{\prime})$ as ${G}^{\prime}(u,v)$ and apply the filter based on MRTD to it to obtain the final filtered image $R(u,v)$ in the frequency domain:

## (14)

$$R(u,v)={H}_{t}(u,v)\cdot {G}^{\prime}(u,v).$$

To obtain the filtered image in the spatial domain, the inverse Fourier transform is applied to $R(u,v)$:

## (15)

$${R}_{p}(x,y)={\mathcal{F}}^{-1}[R(u,v)],$$

where ${R}_{p}(x,y)$ is the final simulated image of the thermal infrared texture at distance $L$.

## 2.3.2.

#### Model of relationship between frequency distribution and temperature difference of scenery

For the “zero”-distance infrared image ($L={L}_{0}$), we can determine the temperature range (${T}_{\mathrm{min}}$, ${T}_{\mathrm{max}}$) and can calculate the gray level range (${G}_{\mathrm{min}}$, ${G}_{\mathrm{max}}$). The relationship between temperature and the gray values can be approximated by a linear relationship in a particular temperature range.^{12} Therefore, the temperature $T$ in the “zero”-distance infrared image is defined as

## (16)

$$T=\frac{{T}_{\mathrm{max}}-{T}_{\mathrm{min}}}{{G}_{\mathrm{max}}-{G}_{\mathrm{min}}}\cdot G+{T}_{\mathrm{min}},$$

## (18)

$$\Delta {T}_{\mathrm{avg}}({f}_{1})=\frac{{\sum }_{i,j=1}^{m,n}\Delta {T}_{ij}}{mn},$$

## (19)

$$T(i,j)=[T(2i-1,2j-1)+T(2i-1,2j)+T(2i,2j-1)+T(2i,2j)]/4.$$

The average temperature difference of the image corresponding to the highest spatial frequency ${f}_{2}$ is

## (20)

$$\Delta {T}_{\mathrm{avg}}({f}_{2})=\frac{{\sum }_{i,j=1}^{m/2,n/2}\Delta {T}_{ij}}{mn/4}.$$

Similarly, we can calculate the average temperature differences corresponding to the other highest spatial frequencies ${f}_{3}$, ${f}_{4},\dots $, and then draw the fitting curve for $\Delta {T}_{\mathrm{avg}}({f}_{i})$ versus ${f}_{i}$ using the discrete values of the highest spatial frequencies and the average temperature differences. In this work, we used an exponential function to model the relationship between $\Delta {T}_{\mathrm{avg}}({f}_{i})$ and ${f}_{i}$:

where $a$, $b$, $c$, and $d$ are coefficients obtained by fitting the curve of the relationship between the frequency distribution and the temperature difference of the scenery in the experimental step. Different “zero”-distance images have different coefficients.

## 2.3.3.

#### MRTD of the thermal imaging system

The MRTD^{13} of the thermal imaging system is expressed as

## (22)

$$\mathrm{MRTD}(f)=\frac{{\pi }^{2}}{4\sqrt{14}}\,\frac{\mathrm{NETD}\cdot {\mathrm{SNR}}_{T}}{\mathrm{MTF}(f)}{\left(\frac{\alpha \beta }{{\tau }_{d}{t}_{e}{f}_{p}\Delta f}\right)}^{1/2},$$

where $\mathrm{MTF}(f)$ is the modulation transfer function of the thermal imaging system^{13} and is defined as

$$\mathrm{MTF}(f)={\mathrm{MTF}}_{o}(f)\cdot {\mathrm{MTF}}_{e}(f)\cdot {\mathrm{MTF}}_{d}(f),$$

where ${\mathrm{MTF}}_{o}$, ${\mathrm{MTF}}_{e}$, and ${\mathrm{MTF}}_{d}$ are the modulation transfer functions of the optical system, the electronic circuit, and the detector in the thermal imaging system, respectively. More details about the modulation transfer functions are given in Ref. 13.

## 3.

## Experimental Results and Discussion

We simulated the infrared image texture of scenery at different distances based on the “zero”-distance image. The “zero”-distance image was captured by a VarioCAM long-wave thermal imaging system (InfraTec GmbH, Dresden, Germany). The parameters were as follows: resolution $=240\times 320$ pixels, waveband $=7.5$ to $14\ \mu \mathrm{m}$, temperature detection range $=-40$ to 1200°C, $\mathrm{NETD}=0.08$°C, ${\mathrm{SNR}}_{T}=2.8$, ${f}_{p}=50\ \mathrm{Hz}$, ${t}_{e}=0.2\ \mathrm{s}$, ${\tau }_{d}=1/{f}_{p}$, and $\Delta f=\pi /(4{\tau }_{d})$.

Two “zero”-distance images were collected on October 18, 2013, and are shown in Figs. 3(a) and 3(b). They were taken at 40 deg north latitude under a cloudy sky. In addition, haze limited visibility to 0.5 km, and the atmospheric transmissivity was $<0.7$. Using the model of the relationship between the frequency distribution and the temperature difference of the scenery, we calculated five typical points of frequency and their corresponding average temperature differences. The fitting curves of the relationship are shown in Figs. 3(c) and 3(d). The coefficients of Eq. (21) for the curves in Figs. 3(c) and 3(d) are as follows: ${a}_{1}=0.1766$, ${b}_{1}=-0.8286$, ${c}_{1}=0.176$, ${d}_{1}=-0.03884$; ${a}_{2}=1.816$, ${b}_{2}=-0.6038$, ${c}_{2}=1.118$, ${d}_{2}=0.03606$.
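Assuming Eq. (21) has the common two-term exponential form $a\,{e}^{bf}+c\,{e}^{df}$ (the equation itself is not reproduced in this extract), the fitted curve of Fig. 3(c) can be evaluated with the reported coefficients:

```python
import numpy as np

# Coefficients reported for the curve in Fig. 3(c); the functional form
# a*exp(b*f) + c*exp(d*f) is an assumption about Eq. (21).
a1, b1, c1, d1 = 0.1766, -0.8286, 0.176, -0.03884

def delta_t_avg(f, a, b, c, d):
    """Fitted relation between the highest spatial frequency f and the
    average temperature difference Delta T_avg(f)."""
    return a * np.exp(b * f) + c * np.exp(d * f)

f = np.linspace(0.0, 10.0, 11)
dT = delta_t_avg(f, a1, b1, c1, d1)
# With these coefficients the fitted temperature difference decays
# monotonically as the spatial frequency increases
```

This decay is consistent with the physical picture: finer texture details carry smaller average temperature differences and are the first to fall below the MRTD.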

The infrared image textures shown in Fig. 4 were simulated as follows. First, we determined the distance of the simulated image; we assumed that it was 5 m. Second, we applied the spatial frequency filter based on distance and downsampled the “zero”-distance infrared image [Fig. 4(a)] using Eq. (3); the experimental results are shown in Figs. 4(b) and 4(c) in the frequency and spatial domains, respectively. Finally, we used the infrared texture image filter based on MRTD from Eq. (14) to process the filtered image shown in Fig. 4(c); the result is shown in Figs. 4(d) and 4(e).

We found that the image in Fig. 4(c) is fuzzier and smaller than that in Fig. 4(a), and the image in Fig. 4(e) is fuzzier than that in Fig. 4(c). Some details are attenuated because of the atmospheric transmission effect.

Figure 5 compares the simulated image with the infrared image captured by the thermal imager (real infrared image) when the subject was 5 m from the imager. To compare the two images directly and analyze the simulation, the simulated image was extended to the whole field of view. The two images [Figs. 5(a) and 5(b)] are quite similar from a subjective point of view. The slight discrepancy between them [Fig. 5(c)] is caused mainly by the nonconformity of the scenery locations in the two images: the object in a captured infrared image is not always exactly centered in the field of view, which produces the location nonconformity in the simulated image.

Figure 6 shows the histograms^{14} of the infrared image captured by the thermal imager [Fig. 5(a)] and the simulated image [Fig. 5(b)]. Figures 6(a) and 6(b) are the whole histograms and Figs. 6(c) and 6(d) are the histograms in the gray-level range of 0 to 100 for the infrared image and the simulated image, respectively. The histograms in Figs. 6(a) and 6(b) have a peak value between 0 and 255 gray levels. The histograms in Figs. 6(c) and 6(d) show that the infrared image and the simulated image have similar distributions of gray levels.

The simulated images and real infrared images at 10, 15, and 20 m are presented in Fig. 7. The details of the simulated images and the real infrared images decrease with increasing imaging distance. The simulated image has a texture similar to that of the real infrared image when the imaging distances of the two images are the same.

Figure 8 presents the real infrared image [Fig. 3(b)] and the simulated images of the grass at different distances. The “zero”-distance captured image [Fig. 8(a)] is of a patch of grass 0.6 m wide and 0.45 m high. We used the proposed filter model to process the “zero”-distance infrared image at different distances to obtain the simulated infrared texture images. The simulated images should cover the entire field of view, so texture-matching technology based on the sample plot was applied to each simulated image; the simulated results are shown in Figs. 8(b)–8(f). The figures show that as the distance increases, the details gradually become blurrier. These changes reflect the variations in the details of the scenery infrared texture at different distances.

Mean square error (MSE) and peak signal-to-noise ratio (PSNR) are often used as the evaluation indices^{15} to compare the similarity of two images. In general, if the $\mathrm{PSNR}>20$, there is a strong similarity between the two images.^{15} The similarity indices of the captured images and simulated images at different distances are presented in Table 1.

## Table 1

Indices used to evaluate the similarity of the captured images and the simulated images at different distances.

| Distance (m) | Mean square error (MSE) | Peak signal-to-noise ratio (PSNR) |
|---|---|---|
| 5 | 53.0952 | 71.1044 |
| 10 | 9.8621 | 87.9383 |
| 15 | 3.3287 | 98.7994 |
| 20 | 0.5087 | 117.5848 |

The results in Table 1 show that when the distance increases, the MSE decreases and the PSNR increases, indicating that the similarities increase as the distance increases. All the PSNR values in this study were greater than 20, so the captured image and simulated images are very similar when the distance is between 5 and 20 m. The small MSE values and the large PSNR values in Table 1 suggest that the proposed filter model has high fidelity and is valid.
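The MSE and PSNR indices in Table 1 follow their standard definitions, which can be sketched as follows; the test images here are synthetic placeholders, not the experimental data.

```python
import numpy as np

def mse(a, b):
    """Mean square error between two equal-size images."""
    return np.mean((a.astype(float) - b.astype(float)) ** 2)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    e = mse(a, b)
    return float("inf") if e == 0 else 10.0 * np.log10(peak ** 2 / e)

# Synthetic placeholder images: a random 240 x 320 frame and a noisy copy
img = np.random.default_rng(2).integers(0, 256, (240, 320)).astype(float)
noisy = np.clip(img + np.random.default_rng(3).normal(0, 2, img.shape), 0, 255)
print(mse(img, noisy), psnr(img, noisy))
```

Lower MSE and higher PSNR indicate a closer match, which is why the 20 m row of Table 1 (MSE 0.5087, PSNR 117.5848) shows the strongest similarity.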

This study has one limitation: the proposed model was tested on only two thermal images, one of a person and one of grass. We limited the number of images for three reasons. First, the performance of the model in simulating scenery depends on the imaging distance and viewing direction, not on the object in the scenery. Second, the experimental images of the person and the grass show the degradation of the image and the variation in texture detail that occur when the imaging distance changes. With these two images, we verified that the proposed model is valid when the “zero”-distance infrared image is captured perpendicular to the scenery surface (the grass was shot from above, whereas the person was shot horizontally), but it has not been validated for scenery simulated from other viewing directions; therefore, we did not use additional perpendicular-view images to test the model. Third, capturing additional thermal infrared images at different viewing directions and distances would have required more complex experimental conditions and more equipment, e.g., unmanned drones. We will therefore consider capturing more images in future work when the experimental conditions permit.

## 4.

## Conclusion

Based on the principle of the multiresolution image pyramid, we proposed a new thermal infrared image texture generation model based on scenery spatial frequency. The model was built on a “zero”-distance infrared image. Two typical sceneries were simulated using the model, and the simulations were compared with the infrared image textures captured by a thermal imager. The experimental results validated the proposed model by showing that it can reflect the features of infrared image texture and the imaging principle at different distances. In conclusion, the proposed model can effectively simulate infrared image textures of large-scale backgrounds and can meet some requirements of qualitative analysis. In the future, we will capture and simulate sceneries from different directions and distances and use them to improve the robustness of the proposed model.

## Acknowledgments

This research was supported by the National Ministries Pre-research Project under grant No. 110010202.

## References

M. S. Allili, N. Baaziz and M. Mejri, “Texture modeling using contourlets and finite mixtures of generalized Gaussian distributions and applications,” IEEE Trans. Multimedia 16(3), 772–784 (2014).ITMUF81520-9210http://dx.doi.org/10.1109/TMM.2014.2298832Google Scholar

A. Klein et al., “Incorporation of thermal shadows into real-time infrared three-dimensional image generation,” Opt. Eng. 53(5), 053113 (2014).OPEGAR0091-3286http://dx.doi.org/10.1117/1.OE.53.5.053113Google Scholar

X. Zhang, T. Z. Bai and F. Shang, “Scene classification of infrared images based on texture feature,” Proc. SPIE 7156, 715626 (2009).PSISDG0277-786Xhttp://dx.doi.org/10.1117/12.806945Google Scholar

X. P. Shao, J. Q. Zhang and J. Xu, “Study of modeling natural infrared textures,” J. Xi’an Univ. 30(5), 612–617 (2003).http://dx.doi.org/10.3969/j.issn.1001-2400.2003.05.010Google Scholar

S. Chen and J. Y. Sun, “IR scene simulation based on visual image,” Infrared Laser Eng. 38(1), 23–30 (2009).1007-2276http://dx.doi.org/10.3969/j.issn.1007-2276.2009.01.005Google Scholar

S. Chen et al., “A new infrared texture generation method,” J. Dalian Marit. Univ. 36(4), 103–106 (2010).Google Scholar

J. Bennett and A. Khotanzad, “Modeling texture images using generalized long correlation models,” IEEE Trans. Pattern Anal. Mach. Intell. 20(12), 1365–1370 (1998).ITPIDJ0162-8828http://dx.doi.org/10.1109/34.735810Google Scholar

R. Chellappa, S. Chatterjee and R. Bagdazian, “Texture synthesis and compression using Gaussian-Markov random field models,” IEEE Trans. Syst. Man Cybern. SMC-15(2), 298–303 (1985).ITSHFX1083-4427http://dx.doi.org/10.1109/TSMC.1985.6313361Google Scholar

X. P. Shao et al., “Infrared texture simulation using Gaussian-Markov random fields,” Int. J. Infrared Millimeter Waves 25(11), 1699–1710 (2004).IJIWDO0195-9271http://dx.doi.org/10.1023/B:IJIM.0000047459.74083.fdGoogle Scholar

X. P. Shao, C. M. Gong and J. Xu, “Infrared texture simulation using non-parametric random field model,” Proc. SPIE 6787, 67871C (2007).PSISDG0277-786Xhttp://dx.doi.org/10.1117/12.749501Google Scholar

T. Z. Bai and W. Q. Jin, Principle and Technology of Optoelectronic Imaging System, pp. 509–518, Beijing Institute of Technology Press, Beijing (2006).Google Scholar

L. S. Zhang et al., “A radiometric calibration method of low temperature measurement about thermal infrared imager,” Chinese Patent No. 102818636 (2012).Google Scholar

T. Z. Bai, “Study of simulation and analogy of electro-optical imaging systems,” PhD Thesis, pp. 57–75, Beijing Institute of Technology, Beijing (2001).Google Scholar

J. H. Chang, K. C. Fan and Y. L. Chang, “Multi-modal gray-level histogram modelling and decomposition,” Image Vision Comput. 20, 203–216 (2002).IVCODK0262-8856http://dx.doi.org/10.1016/S0262-8856(01)00095-6Google Scholar

H. N. Li, “The study of digital 3D scene infrared imaging modeling and realization technology,” PhD Thesis, pp. 93–130, Beijing Institute of Technology, Beijing (2010).Google Scholar

## Biography

**Hai-He Hu** received her BS and MS degrees at the Electronic & Information Engineering School of the Henan University of Science and Technology in 2004 and 2007, respectively. She is now a PhD candidate in optical engineering at Beijing Institute of Technology. Her technical interests include infrared scene simulation, computer graphics, and image processing.

**Ting-Zhu Bai** received his PhD in 2001 and is currently a professor at the School of Optoelectronics at Beijing Institute of Technology. His major research interests include infrared scene simulation and thermal imaging technology. He is a fellow of SPIE.

**Xiao-Xia Qu** received her BS degree in optical information science and technology from Wuhan University of Technology in 2009. She is a PhD candidate in optical engineering at Beijing Institute of Technology. From September 2012 to September 2014, she studied at Ghent University (Belgium) as a participant in the joint PhD program between Beijing Institute of Technology and Ghent University. Her research interests include infrared image processing and medical image processing.