8 December 2016 Research on fusion technology based on low-light visible image and infrared image
Optical Engineering, 55(12), 123104 (2016). doi:10.1117/1.OE.55.12.123104
Abstract
Image fusion technology combines information from multiple images of the same scene into a single image, so that the fused image is often more informative than any source image. Considering the characteristics of low-light visible images, this study presents an image fusion technology to improve the contrast of low-light images. An adaptive threshold-based fusion rule is proposed, in which the threshold is related to the brightness distribution of the original images and determines the fusion of the low-frequency coefficients. A pulse-coupled neural network (PCNN)-based fusion rule is proposed for the fusion of the high-frequency coefficients: the firing times of the PCNN reflect the amount of detail information, so the high-frequency coefficient corresponding to the maximum firing times is chosen as the fused coefficient. Experimental results demonstrate that the proposed method obtains high-contrast images and outperforms traditional fusion approaches in image quality.
Liu, Piao, and Tahir: Research on fusion technology based on low-light visible image and infrared image

1.

Introduction

Image fusion technology aims at obtaining one high-quality image from two or more images of the same scene.1 It has been applied in various fields, such as remote sensing,2 medical diagnosis,3,4 face recognition,5 target detection,6,7 and art analysis.8 Fusion of infrared and visible images is an important branch of the image fusion field. An infrared image can reveal objects camouflaged under trees or smog, but its contrast is low, making it difficult for the human visual system to observe or recognize targets. On the contrary, a visible image usually contains much more texture and color information about the scene, but low-light visible images have many dark regions in which objects are fuzzy and difficult to distinguish. A fused image can effectively combine the advantages of both images, extending and enhancing the image information. Thus, it is necessary to study fusion technology for infrared and low-light visible images.

In the past few years, a number of techniques and software algorithms for image fusion have been developed.9,10 Generally, image fusion methods fall into pixel-level fusion, feature-level fusion, and decision-level fusion. Compared to the others, pixel-level fusion better preserves original image information. In most pulse-coupled neural network (PCNN)-based fusion algorithms, only the single pixel value is used to motivate a PCNN neuron, which is not effective enough because human eyes are usually sensitive to features. Qu et al.11 proposed an orientation-information-motivated PCNN algorithm (OI-PCNN), in which orientation information is considered as a feature to motivate the PCNN. This algorithm preserves the spatial characteristics of the source images well, but only grayscale source images are considered. Kong et al.12 proposed an adaptive fusion technique based on the nonsubsampled contourlet transform and the intensity-hue-saturation (IHS) transform, in which the pseudocolor principle is used in the fusion step so that fused images obtain the color information of reference images. The results show good performance, but the algorithm is time-consuming, and pseudocolor information of targets may lead to erroneous interpretation. Li et al.13 proposed a guided filtering-based fusion method (GFF): source images are first decomposed into a base layer containing large-scale variations in intensity and a detail layer capturing small-scale details, and then a weighted average technique based on guided filtering is used to fuse the base and detail layers. Zhou and Tan14 combined infrared and visible images using the wavelet transform (WT). Li15 proposed an infrared and visible image fusion algorithm based on target segmentation: a segmentation algorithm first extracts the target of an infrared image, and then the target is fused with a visible image. This method preserves the target information of the infrared image effectively, but the other information of the infrared image is lost.

A fused image retains the most desirable information and characteristics of the infrared and visible images. Traditional methods mostly use grayscale visible images captured in daytime, and the registration problem between infrared and visible images is not considered. Therefore, traditional fusion methods are not suitable for fusion of infrared and low-light visible images. To overcome these problems, a fusion technology is presented in this paper. It can be widely applied in various fields, such as target recognition, medical diagnosis, intelligent transportation, and video surveillance.

Necessary background information is provided in Sec. 2. In Sec. 3, the proposed image fusion method is described. Moreover, improved fusion rules are introduced. Experimental results are presented and discussed in Sec. 4. Finally, the conclusion of this study is presented in Sec. 5.

2.

Background

Image fusion algorithms mainly fall into three categories: pixel-level fusion, feature-level fusion, and decision-level fusion. Compared with feature-level and decision-level fusion, pixel-level fusion directly combines the pixels of the original images and is therefore more widely applied.

Multiscale transform methods have been demonstrated to be very useful in pixel-level image fusion. In previous research, pyramid methods and the WT are most commonly used, such as the Laplacian pyramid16,17 and the discrete wavelet transform (DWT).18,19,20 In addition, other multiscale transforms have been proposed, such as the dual-tree discrete WT (DT-DWT),21 the curvelet transform,22 the contourlet transform,23 and so on. The DWT and DT-DWT suffer from limited directionality and cannot represent image edges well. Moreover, the traditional DWT-based fusion method can introduce a blocking effect into the fused image.

To represent images more accurately, Candès and Donoho24 proposed the curvelet transform. Unlike the curvelet transform, which is first developed in the continuous domain and then discretized for sampled data, the contourlet transform starts directly with a discrete-domain construction.25 The contourlet transform is composed of a Laplacian pyramid and a directional filter bank (DFB). Images are decomposed into multiple scales by the Laplacian pyramid, and DFBs are then applied to obtain directional components. The low-frequency part is obtained by Laplacian pyramid decomposition, and the high-frequency part is obtained by subtracting the upsampled low-frequency part from the original image. The Laplacian pyramid decomposition process is as follows:

(1)

$$LP_k = \begin{cases} G_k - G^*_{k+1}, & 0 \le k \le N-1 \\ G_k, & k = N, \end{cases}$$

(2)

$$G_k(m,n) = \sum_{i=-2}^{2} \sum_{j=-2}^{2} w(i,j)\, G_{k-1}(2m+i,\, 2n+j),$$

(3)

$$G^*_k(m,n) = 4 \sum_{i=-2}^{2} \sum_{j=-2}^{2} w(i,j)\, G_k\!\left(\frac{m-i}{2},\, \frac{n-j}{2}\right),$$

where $k$ is the decomposition level, $(m,n)$ is the pixel position in the $k$'th level image, and $w(i,j)$ is the low-pass filter. In Eq. (3), only the terms for which $(m-i)/2$ and $(n-j)/2$ are integers are included; the remaining terms are taken as zero.

The LP decomposition at each level generates a downsampled low-pass version of the original and the difference between the original and the prediction, resulting in a bandpass image.25 Each bandpass image is further decomposed by an $l_j$-level DFB into $2^{l_j}$ bandpass directional images. In the traditional contourlet transform-based fusion method, low-frequency coefficients are fused by the average rule and high-frequency coefficients are fused by the maximum-absolute-value rule. In general, it is difficult to obtain high-contrast and high-definition fused images this way.
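As a concrete illustration, the REDUCE and EXPAND steps of Eqs. (1)–(3) can be sketched in Python with NumPy. This is a minimal sketch under stated assumptions, not the authors' implementation: the 5-tap kernel weights, the even-size requirement, and the function names are my own choices, and the DFB stage of the contourlet transform is omitted.

```python
import numpy as np

# 5-tap separable kernel often used for w(i, j); the exact weights are an assumption.
_w1d = np.array([0.05, 0.25, 0.4, 0.25, 0.05])
W = np.outer(_w1d, _w1d)

def reduce_level(G):
    """REDUCE step of Eq. (2): low-pass filter with w, then downsample by 2."""
    padded = np.pad(G, 2, mode="reflect")
    out = np.zeros((G.shape[0] // 2, G.shape[1] // 2))
    for m in range(out.shape[0]):
        for n in range(out.shape[1]):
            out[m, n] = np.sum(W * padded[2 * m:2 * m + 5, 2 * n:2 * n + 5])
    return out

def expand_level(G, shape):
    """EXPAND step of Eq. (3): upsample by 2 (zeros in between), filter, scale by 4.

    Assumes `shape` has even dimensions with shape[i] == 2 * G.shape[i].
    """
    up = np.zeros(shape)
    up[::2, ::2] = G
    padded = np.pad(up, 2, mode="reflect")
    out = np.zeros(shape)
    for m in range(shape[0]):
        for n in range(shape[1]):
            out[m, n] = 4 * np.sum(W * padded[m:m + 5, n:n + 5])
    return out

def laplacian_pyramid(image, levels):
    """Eq. (1): band-pass layers LP_0..LP_{N-1} plus the coarsest low-pass LP_N."""
    G = np.asarray(image, dtype=float)
    pyramid = []
    for _ in range(levels):
        G_next = reduce_level(G)
        pyramid.append(G - expand_level(G_next, G.shape))  # LP_k = G_k - G*_{k+1}
        G = G_next
    pyramid.append(G)  # LP_N = G_N
    return pyramid
```

Because each band-pass layer stores the exact difference against the expanded coarser level, the pyramid is perfectly invertible: adding each layer back to the expanded reconstruction of the next level recovers the original image.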

3.

Proposed Method

Most image fusion algorithms assume that the original images are strictly registered, yet registration is an important problem in the image fusion field. In addition, a low-light visible image has many dark regions, which reduce the quality of the fused image. Furthermore, it is difficult to observe objects in directly captured infrared images. The low-light visible image should also be transformed into an intensity image in the fusion step.

To overcome these problems, the proposed fusion method mainly includes preprocessing (such as the IHS transform, original image registration, and image inversion), the contourlet transform, coefficient fusion, and the inverse contourlet transform. Figure 1 shows the structure of the proposed image fusion method.

Fig. 1

Structure of proposed image fusion method.


According to the structure of the proposed image fusion method, the main steps are as follows:

  • 1. Source images should be matched first. After that, the matched images are enhanced to improve contrast. Then, the luminance component $I_v$ of the low-light visible image is extracted using the IHS transform.

  • 2. After preprocessing, the infrared and luminance images ($I_{IR}$ and $I_v$) are decomposed into low-frequency coefficients $(I_{IR}^J, I_v^J)$ and high-frequency coefficients $(I_{IR}^{d,k}, I_v^{d,k})$ using the contourlet transform. $J$ is the number of decomposition levels, and $k$ is the directional index at the $d$'th scale ($J \ge d \ge 1$). Generally, more decomposition levels cost more computation time; therefore, two decomposition levels are chosen in this study. More directional subbands mean more high-frequency information, but to reduce time consumption, two directional subbands are used in the first decomposition level and 16 in the second.

  • 3. Considering different characteristics of low- and high-frequency coefficients, different fusion rules are chosen. Low-frequency coefficients are fused by the adaptive threshold-based fusion rule. High-frequency coefficients are fused by the PCNN-based rule.

  • 4. The fused low-frequency and high-frequency coefficients $(I_F^{low}, I_F^{high})$ are transformed into the fused luminance image $I_F$ using the inverse contourlet transform.

  • 5. Considering the characteristics of low-light visible images, the final fused image is obtained by combining the fused luminance image and low-light visible image.

3.1.

Low-Frequency Coefficients Fusion

Low-frequency coefficients mainly contain the approximate characteristics of the infrared and low-light visible images. First, the threshold value is adjusted based on the maximum difference between the luminance and infrared low-frequency coefficients. Then, the current coefficient of the luminance image is used as the fused coefficient if the difference exceeds the threshold; otherwise, the average rule is used for low-frequency coefficients fusion.

Threshold adjustment is a key step in low-frequency coefficients fusion. In low-light visible images, few pixels have high intensity values; these pixels mostly come from background light sources, such as car lights, lamps, and other light-emitting devices. After two levels of decomposition, the low-frequency coefficients $(I_v^J, I_{IR}^J)$ are obtained. Through experiments, the top 0.13% largest values of the coefficient difference are considered background light components. Therefore, the coefficient difference is sorted first, and the top 0.13% largest values are chosen to determine the threshold. In this paper, $w_{th} = 0.75$:

(4)

$$I_{th} = w_{th} \times \max\left(I_v^J - I_{IR}^J\right),$$

(5)

$$I_F^{low} = \begin{cases} I_v^J, & I_v^J - I_{IR}^J \ge I_{th} \\ \left(I_{IR}^J + I_v^J\right)/2, & \text{otherwise,} \end{cases}$$

where $I_{th}$ is the threshold and $w_{th}$ is the threshold weight. $I_{IR}^J$ and $I_v^J$ are the corresponding low-frequency coefficients.
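The adaptive threshold rule of Eqs. (4) and (5) can be sketched as follows. This is a minimal sketch: the function name is hypothetical, and the paper's top-0.13% sorting step is folded into the simple maximum of Eq. (4).

```python
import numpy as np

def fuse_low_frequency(Iv_low, Iir_low, w_th=0.75):
    """Adaptive threshold fusion of low-frequency coefficients (Eqs. (4) and (5)).

    Keeps the visible-light coefficient where the visible/infrared difference
    exceeds the threshold (likely background light); averages elsewhere.
    """
    diff = Iv_low - Iir_low
    I_th = w_th * np.max(diff)            # Eq. (4): threshold from max difference
    fused = (Iv_low + Iir_low) / 2.0      # default: average rule
    mask = diff >= I_th
    fused[mask] = Iv_low[mask]            # Eq. (5): keep bright visible coefficients
    return fused
```

Coefficients where the luminance image is much brighter than the infrared image (e.g., lamps or car lights) survive unchanged, while the rest are averaged.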

3.2.

High-Frequency Coefficients Fusion

High-frequency coefficients mainly contain the detail characteristics of the infrared and luminance images; they represent texture and edge information of the original images. The maximum-absolute-value rule is chosen in most traditional fusion methods, but it often yields poor fused image quality.

PCNN is a simplified neural network model composed of interconnected neurons. In image processing, each pixel corresponds to one neuron, and each neuron mainly includes a dendritic branch, a connector, and a pulse generator. For high-frequency coefficients fusion, the PCNN-based mathematical model can be described as follows. High-frequency coefficients are used to motivate the PCNN:

(6)

$$F_{i,j}^{d,k}(n) = I_{i,j}^{d,k},$$
where $F_{i,j}^{d,k}$ is the output of the feeding part, $I_{i,j}^{d,k}$ is the input to the feeding part, $n$ is the iteration index, $k$ is the decomposition level, $d$ represents the direction at the $k$'th level, and $(i,j)$ is the pixel location.

Linking part is described as

(7)

$$L_{i,j}^{d,k}(n) = \exp(-a_L)\, L_{i,j}^{d,k}(n-1) + V_L \sum_{m,n} W_{ij,mn}\, Y_{mn}^{d,k}(n-1),$$
where $L_{i,j}^{d,k}(n)$ is the output of the linking part, the sum runs over the neurons $(m,n)$ connected to $(i,j)$, $V_L$ is a normalization coefficient, and $W_{ij,mn}$ is the weight connecting neuron $(i,j)$ to its neighbors.

(8)

$$U_{i,j}^{d,k}(n) = F_{i,j}^{d,k}(n)\left[1 + \beta L_{i,j}^{d,k}(n)\right],$$

(9)

$$\theta_{i,j}^{d,k}(n) = \exp(-a_\theta)\, \theta_{i,j}^{d,k}(n-1) + V_\theta\, Y_{i,j}^{d,k}(n-1),$$
where $U_{i,j}^{d,k}(n)$ is the internal state, $\theta_{i,j}^{d,k}(n)$ is the threshold, and $\beta$, $a_\theta$, and $V_\theta$ are constant parameters. In each iteration, the output and the cumulative firing times are calculated as26

(10)

$$Y_{i,j}^{d,k}(n) = \begin{cases} 1, & U_{i,j}^{d,k}(n) > \theta_{i,j}^{d,k}(n) \\ 0, & \text{otherwise,} \end{cases}$$

(11)

$$T_{X,ij}^{d,k} = \sum_{n=1}^{N} Y_{i,j}^{d,k}(n), \qquad X = v \ \text{or} \ IR,$$
where X represents the low-light visible or infrared image and N is the total number of iteration times.

After N iterations, it is easy to obtain firing times of high-frequency coefficients. Then, fused high-frequency coefficients are obtained as

(12)

$$I_F^{high} = \begin{cases} I_v^{d,k}, & T_{v,ij}^{d,k} \ge T_{IR,ij}^{d,k} \\ I_{IR}^{d,k}, & \text{otherwise.} \end{cases}$$
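A minimal sketch of the PCNN-based high-frequency rule of Eqs. (6)–(12) is shown below. The parameter values, the 3 × 3 linking weight matrix `W`, and the function names are illustrative assumptions; the paper does not specify them here.

```python
import numpy as np

def pcnn_firing_times(coeffs, n_iter=200, beta=0.2, a_L=1.0, a_theta=0.2,
                      V_L=1.0, V_theta=20.0):
    """Firing times of a simplified PCNN driven by high-frequency coefficients
    (Eqs. (6)-(11)). All parameter values here are illustrative assumptions."""
    F = np.abs(np.asarray(coeffs, dtype=float))      # Eq. (6): feeding input
    L = np.zeros_like(F)
    theta = np.ones_like(F)
    Y = np.zeros_like(F)
    T = np.zeros_like(F)
    # 3x3 linking weights W (an assumption; the paper does not list them)
    W = np.array([[0.5, 1.0, 0.5], [1.0, 0.0, 1.0], [0.5, 1.0, 0.5]])
    for _ in range(n_iter):
        Yp = np.pad(Y, 1)                            # zero-padded previous outputs
        link = sum(W[a, b] * Yp[a:a + F.shape[0], b:b + F.shape[1]]
                   for a in range(3) for b in range(3))
        L = np.exp(-a_L) * L + V_L * link            # Eq. (7): linking part
        U = F * (1.0 + beta * L)                     # Eq. (8): internal state
        theta = np.exp(-a_theta) * theta + V_theta * Y   # Eq. (9), uses Y(n-1)
        Y = (U > theta).astype(float)                # Eq. (10): pulse output
        T += Y                                       # Eq. (11): accumulate firings
    return T

def fuse_high_frequency(Iv_high, Iir_high, **kwargs):
    """Eq. (12): keep, per position, the coefficient whose neuron fires more."""
    Tv = pcnn_firing_times(Iv_high, **kwargs)
    Tir = pcnn_firing_times(Iir_high, **kwargs)
    return np.where(Tv >= Tir, Iv_high, Iir_high)
```

Coefficients with larger magnitude (more detail) drive their neurons to fire earlier and more often, so the comparison of firing counts favors the subband that carries more texture and edge information.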

4.

Experimental Results and Discussion

To verify the practical value of the proposed method, fused image quality and the fusion effect are analyzed in this section.

4.1.

Evaluation Criteria

The subjective criterion is based on human visual observation, and it is often hard to evaluate fused image quality this way. Objective criteria are based on measurable characteristics of fused images and are widely applied to evaluate image quality. To verify the validity of the proposed method, both subjective and objective criteria are analyzed in this section.

Mean value reflects the average brightness of the entire image. It is usually hard for human eyes to distinguish objects in low-luminance regions, so the mean value is useful for indirectly evaluating the dynamic range of low-light fused images: for the human visual system, a high dynamic range of brightness generally indicates high definition, and a larger mean value often indicates a better fusion effect.

Standard deviation reflects the gray-level distribution of a fused image; it describes the dispersion of pixel values around the mean and is often used to evaluate the contrast of fused images. The definition is:

(13)

$$\sigma_{fused} = \sqrt{\frac{1}{m \times n} \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} \left[p(i,j) - \mu\right]^2},$$
where $\sigma_{fused}$ is the standard deviation of the fused image, $m \times n$ is the size of the fused image, $p(i,j)$ is the pixel value at position $(i,j)$, and $\mu$ is the mean value of the fused image.

Entropy describes the average information carried by a fused image: a large entropy indicates that the fused image contains much information. The definition is as follows:

(14)

$$E_{fused} = -\sum_{j=0}^{L-1} p_j \log_2 p_j,$$
where $E_{fused}$ is the entropy of the fused image, $p_j$ is the probability of pixel value $j$, and $L$ is the number of gray levels (generally 256).

Average gradient is also called image clarity. Generally, a larger value depicts better image quality. Equation (15) shows the definition of average gradient27

(15)

$$g_{fused} = \frac{1}{(m-1)(n-1)} \sum_{i=1}^{m-1} \sum_{j=1}^{n-1} \sqrt{\frac{\left[I(i+1,j)-I(i,j)\right]^2 + \left[I(i,j+1)-I(i,j)\right]^2}{2}},$$
where $g_{fused}$ is the average gradient of the fused image and $I(i,j)$ is the pixel value at $(i,j)$. The size of the fused image is $m \times n$.
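The four objective criteria above can be computed directly from Eqs. (13)–(15). The function names below are mine, and the histogram is assumed to use 256 gray levels; this is a sketch of the standard definitions, not the authors' evaluation code.

```python
import numpy as np

def mean_value(img):
    """Average brightness of the fused image."""
    return float(np.mean(img))

def standard_deviation(img):
    """Eq. (13): spread of pixel values around the mean (contrast indicator)."""
    return float(np.sqrt(np.mean((img - np.mean(img)) ** 2)))

def entropy(img, levels=256):
    """Eq. (14): Shannon entropy of the gray-level histogram."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]                      # 0 * log2(0) is taken as 0
    return float(-np.sum(p * np.log2(p)))

def average_gradient(img):
    """Eq. (15): mean local gradient magnitude (image clarity)."""
    img = np.asarray(img, dtype=float)
    dx = img[1:, :-1] - img[:-1, :-1]   # I(i+1, j) - I(i, j)
    dy = img[:-1, 1:] - img[:-1, :-1]   # I(i, j+1) - I(i, j)
    return float(np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0)))
```

For a perfectly uniform image, all four measures except the mean are zero, which matches the intuition that such an image carries no contrast, information, or detail.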

4.2.

Grayscale Images Fusion Analysis

To evaluate the proposed method, different fusion algorithms are chosen for comparison. A fused image is mainly the fusion of an intensity image and an enhanced infrared image. Subjective and objective criteria are then used to analyze image quality. The first group of experiment images is obtained from a website (Ref. 28). Figure 2 shows the original images with resolution 640 × 480. In the upper red rectangle, clear words can be seen in Fig. 2(a) but nothing in Fig. 2(b); in the left red rectangle, a hidden person appears in Fig. 2(b) but not in Fig. 2(a).

Fig. 2

Original images: (a) original low-light visible image and (b) original infrared image.


There is different information in the original low-light visible image and infrared image observed in Fig. 2. To improve image quality, various methods are used to combine original images. Figure 3 shows fused images using different methods. Figure 3(a) represents the fused result using WT with two levels. Figure 3(b) shows a fused image using WT with four levels. Figure 3(c) is the fused result using traditional contourlet transform. Figure 3(d) shows a fused image of the OI-PCNN algorithm.11 The fusion result of the GFF algorithm13 is shown in Fig. 3(e). Figure 3(f) shows the fused result of the proposed method.

Fig. 3

Fused grayscale images: (a) WT with two levels, (b) WT with four levels, (c) traditional contourlet transform, (d) OI-PCNN algorithm, (e) GFF algorithm, and (f) proposed method.


From Fig. 3, we can see an obvious blocking effect in Figs. 3(a) and 3(b) and low contrast in Fig. 3(c), which makes it difficult to observe objects. Figure 3(d) shows a number of fusion errors in many areas, such as the billboard and car lights. Figure 3(f) retains more information of the original images than Fig. 3(e). In this comparison, the proposed method outperforms the OI-PCNN and GFF algorithms.

To analyze fused image quality more accurately, this study compares the objective criteria of the fused images in Table 1. The fused image of the OI-PCNN algorithm has the largest mean value and entropy in Table 1, but from Fig. 3(d) we can clearly see that there are many fusion errors, which seriously degrade subjective perception and make it difficult for the human visual system to observe objects. The fused image of the proposed method has a larger mean value and entropy than all the others except the OI-PCNN algorithm, and it has the largest standard deviation and average gradient. Its average running time is large but less than that of OI-PCNN. Considering all these characteristics, the fused image produced by the proposed method contains the most detail information and the highest contrast.

Table 1

Objective criteria of grayscale fused images.

Method | Mean | Standard deviation | Entropy | Average gradient | Average running time
WT with two levels | 52.0438 | 22.7295 | 6.0219 | 3.0981 | 0.2473
WT with four levels | 52.0872 | 28.4594 | 6.3381 | 3.9704 | 0.3150
Traditional contourlet transform | 51.9011 | 21.8508 | 5.9670 | 2.7671 | 0.8137
OI-PCNN algorithm11 | 82.3657 | 35.7272 | 6.8181 | 3.3874 | 5.9449
GFF algorithm13 | 70.8443 | 32.4660 | 6.5351 | 3.2244 | 0.9360
Proposed method | 74.0720 | 35.7955 | 6.7109 | 3.5111 | 5.7434

Note: Bold values are used to show the best quality of objective criterion.

4.3.

Color-Scale Images Fusion Analysis

In this section, a color low-light image and an infrared image are used for analysis. The original low-light visible image is captured by a Nikon camera, and the original infrared image is captured by a Xenics Bobcat-640 camera. Two groups of images were obtained south of the first school building at Changchun University of Science and Technology. The size of the source images is 640 × 480.

From Fig. 4(a), we can clearly see that there are many dark areas. In contrast, many targets are clear in the infrared image, as shown in Fig. 4(b). In the proposed image fusion method, the source images are matched first. The matched images are then enhanced to improve contrast based on the human visual system. In addition, the matched visible images are transformed using the IHS transform to obtain intensity images.

Fig. 4

Original images: (a) original low-light visible image and (b) original infrared image.


Figure 5 shows the fused color images obtained using different algorithms. In Fig. 5(a), the brightness of the fused color image is low, making it hard to observe scene information. A blocking effect reduces fused image quality in Fig. 5(b). In Fig. 5(c), the image contrast still needs to be enhanced. In Fig. 5(d), there are obvious fusion errors in the sky area. In Fig. 5(e), there is a black edge around the lamps, and the brightness still needs to be improved. Figure 5(f) has subjectively better image quality than the others; for example, the contours of the cars are clearer, especially the car wheels. In addition, more edge information of the distant buildings and trees is retained in Fig. 5(f).

Fig. 5

Fused color image: (a) WT with two levels, (b) WT with four levels, (c) traditional contourlet transform, (d) OI-PCNN algorithm, (e) GFF algorithm, and (f) proposed method.


Table 2 shows the objective criteria of the various methods. By comparison, the proposed method has a better mean value, standard deviation, entropy, and average gradient than the others; generally, larger objective criteria reflect better fusion results. In addition, its average running time is large but less than that of OI-PCNN. From Table 2, we can see that the proposed method achieves better fusion results on color images than the others, and the objective and subjective evaluations reach the same conclusion: the proposed method outperforms the OI-PCNN and GFF algorithms.

Table 2

Objective criteria of fused color images (experiment #1).

Method | Mean | Standard deviation | Entropy | Average gradient | Average running time
Original visible image | 37.7766 | 70.3813 | 12.2569 | 3.1906 | —
WT with two levels | 99.7091 | 118.1735 | 18.3448 | 6.3606 | 0.3171
WT with four levels | 100.3938 | 120.8256 | 18.4325 | 7.0295 | 0.3818
Traditional contourlet transform | 99.5396 | 117.8236 | 18.3726 | 6.2623 | 0.9145
OI-PCNN algorithm11 | 126.9540 | 127.5001 | 19.6796 | 7.4368 | 8.0859
GFF algorithm13 | 117.8568 | 117.0916 | 19.3722 | 6.6016 | 1.1351
Proposed method | 179.2643 | 158.0639 | 19.9771 | 9.0700 | 6.9468

Note: Bold values are used to show the best quality of objective criterion.

Figure 6 shows another group of source images; both the low-light image and the infrared image are 640 × 480. In Fig. 6(a), many targets (such as people or trees) are not clear in the low-light image; however, they are clear in the infrared image, as shown in Fig. 6(b).

Fig. 6

Original images: (a) original low-light visible image and (b) original infrared image.


Figure 7 shows the fused results using different methods. Figures 7(a) and 7(b) show low contrast, and Fig. 7(c) has low brightness. From Fig. 7(d), we can see many fusion errors. In Fig. 7(e), there are black pixels around the lamps, and the brightness is also low. Compared to the others, the proposed method obtains high contrast and brightness, as shown in Fig. 7(f).

Fig. 7

Fused color image: (a) WT with two levels, (b) WT with four levels, (c) traditional contourlet transform, (d) OI-PCNN algorithm, (e) GFF algorithm, and (f) proposed method.


From Table 3, we can see that the proposed method has the largest mean value, standard deviation, and entropy. It also has a larger average gradient than the GFF algorithm, and its average running time is less than that of the OI-PCNN algorithm. Therefore, by comparing objective criteria, the proposed method achieves better fusion results than the others. Experimental results show that the proposed method outperforms the OI-PCNN and GFF algorithms.

Table 3

Objective criteria of fused color images (experiment #2).

Method | Mean | Standard deviation | Entropy | Average gradient | Average running time
Original visible image | 78.9970 | 137.7899 | 15.6267 | 8.4931 | —
WT with two levels | 159.7776 | 177.9980 | 20.1652 | 13.1642 | 0.2925
WT with four levels | 160.0983 | 180.9529 | 20.2653 | 13.9969 | 0.3323
Traditional contourlet transform | 159.7050 | 177.6528 | 20.1716 | 13.0353 | 0.9918
OI-PCNN algorithm11 | 187.5629 | 183.5756 | 21.1463 | 13.8270 | 6.2495
GFF algorithm13 | 171.3346 | 172.2097 | 20.8379 | 12.9782 | 0.8938
Proposed method | 296.6510 | 186.4172 | 21.4505 | 13.6713 | 6.0357

Note: Bold values are used to show the best quality of objective criterion.

5.

Conclusion

In this paper, a fusion method of infrared and low-light visible images is proposed. First, original infrared and visible images are preprocessed. In addition, different fusion rules are chosen based on characteristics of low- and high-frequency information. The low-frequency component is fused using the adaptive threshold-based rule and the high-frequency component is fused based on PCNN. Moreover, the proposed method is also applicable for fusion of color visible images and infrared images. Finally, experimental results show that this method improves fused image quality through subjective and objective evaluations. Though the proposed method effectively improves image quality, some objects (such as people) are not clear enough. In the future, target extraction and enhancement can be introduced into the image fusion algorithm to improve fused image quality further.

Acknowledgments

This work was supported in part by grants from Project of Ministry of Science and Technology of People’s Republic of China (No. 2015DFR10670), Project of Jilin Province Development and Reform Commission (No. 2014Y109), and Projects of Jilin Province Science and Technology Department (Nos. 20140204045GX and 20140203014GX).

References

1. C. L. Yao et al., "Research of multi-sensor images based on color fusion methods," J. Networks 8(11), 2635–2641 (2013). doi:10.4304/jnw.8.11.2635-2641

2. B. Jin, G. Kim and N. I. Cho, "Wavelet-domain satellite image fusion based on a generalized fusion equation," J. Appl. Remote Sens. 8(1), 080599 (2014). doi:10.1117/1.JRS.8.080599

3. G. Bhatnagar, Q. M. J. Wu and Z. Liu, "Directive contrast based multimodal medical image fusion in NSCT domain," IEEE Trans. Multimedia 15(5), 1014–1024 (2013). doi:10.1109/TMM.2013.2244870

4. S. S. Bedi, J. Agarwal and P. Agarwal, "Image fusion techniques and quality assessment parameters for clinical diagnosis: a review," Int. J. Adv. Res. Comput. Commun. Eng. 2(2), 1153–1157 (2013).

5. G. Bebis and I. Pavlidis, "Infrared and visible image fusion for face recognition," Proc. SPIE 5404, 585–596 (2004). doi:10.1117/12.543549

6. F. Ouallouche et al., "Infrared and microwave image fusion for rainfall detection over northern Algeria," Int. J. Image Graphics Signal Process. 6(6), 11–18 (2014). doi:10.5815/ijigsp

7. M. A. Smeelen et al., "Semi-hidden target recognition in gated viewer images fused with thermal IR images," Inf. Fusion 18, 131–147 (2014). doi:10.1016/j.inffus.2013.08.001

8. J. Coddington, "Image fusion for art analysis," Proc. SPIE 5404, 585–596 (2011). doi:10.1117/12.543549

9. S. Daneshvar and H. Ghassemian, "MRI and PET image fusion by combining IHS and retina-inspired models," Inf. Fusion 11(2), 114–123 (2010). doi:10.1016/j.inffus.2009.05.003

10. D. K. Sahu and M. Parsai, "Different image fusion techniques-a critical review," Int. J. Mod. Eng. Res. 2(5), 4298–4301 (2012).

11. X. Qu, C. Hu and J. Yan, "Image fusion algorithm based on orientation information motivated pulse coupled neural networks," in 7th World Congress on Intelligent Control and Automation (WCICA'08), pp. 2437–2441 (2008). doi:10.1109/WCICA.2008.4593305

12. W. Kong, Y. Lei and X. Ni, "Fusion technique for grey-scale visible light and infrared images based on non-subsampled contourlet transform and intensity-hue-saturation transform," IET Signal Proc. 5(1), 75–80 (2011). doi:10.1049/iet-spr.2009.0263

13. S. Li, X. Kang and J. Hu, "Image fusion with guided filtering," IEEE Trans. Image Process. 22(7), 2864–2875 (2013). doi:10.1109/TIP.2013.2244222

14. Z. H. Zhou and M. Tan, "Infrared image and visible image fusion based on wavelet transform," Adv. Mater. Res. 756–759(2), 2850–2856 (2014). doi:10.4028/www.scientific.net/AMR.756-759.2850

15. W. L. Li, "Infrared and visible image fusion algorithm based on the target segmentation," Comput. Simul. 31(11), 358–361 (2014).

16. U. S. Kumar, B. Vikram and P. J. Patil, "Enhanced image fusion algorithm using Laplacian pyramid," Int. J. Eng. Sci. Res. 4(7), 525–532 (2014).

17. A. Sahu et al., "Medical image fusion with Laplacian pyramids," in Proc. of IEEE Int. Conf. on Medical Imaging, M-Health & Emerging Communication Systems, pp. 448–453 (2014). doi:10.1109/MedCom.2014.7006050

18. A. Krishn, V. Bhateja and A. Sahu, "Medical image fusion using combination of PCA and wavelet analysis," in Int. Conf. on Advances in Computing, Communications and Informatics (ICACCI'14), pp. 986–991 (2014). doi:10.1109/ICACCI.2014.6968636

19. Y. Yang et al., "Multi-focus image fusion using an effective discrete wavelet transform based algorithm," Meas. Sci. Rev. 14(2), 102–108 (2014). doi:10.2478/msr-2014-0014

20. H. Lin et al., "Remotely sensing image fusion based on wavelet transform and human vision system," Int. J. Signal Process. Image Process. Pattern Recognit. 8(7), 291–298 (2015). doi:10.14257/ijsip

21. J. Saeedi and K. Faez, "Infrared and visible image fusion using fuzzy logic and population-based optimization," Appl. Soft Comput. 12(3), 1041–1054 (2012). doi:10.1016/j.asoc.2011.11.020

22. G. M. Taher, M. E. Wahed and G. E. Taweal, "New approach for image fusion based on curvelet approach," Int. J. Adv. Comput. Sci. Appl. 5(7), 67–73 (2014).

23. C. Zihong et al., "Visual and infrared image fusion based on contourlet transform," in Proc. of Int. Industrial Informatics and Computer Engineering Conf. (IIICEC'15), pp. 123–126 (2015).

24. E. J. Candès and D. L. Donoho, "New tight frames of curvelets and optimal representations of objects with piecewise C2 singularities," Commun. Pure Appl. Math. 57(2), 219–266 (2004). doi:10.1002/cpa.v57:2

25. M. N. Do and M. Vetterli, "The contourlet transform: an efficient directional multiresolution image representation," IEEE Trans. Image Process. 14(12), 2091–2106 (2005). doi:10.1109/TIP.2005.859376

26. G. S. El-Taweel and A. K. Helmy, "Image fusion scheme based on modified dual pulse coupled neural network," IET Image Proc. 7(5), 407–414 (2013). doi:10.1049/iet-ipr.2013.0045

27. Y. Na and K. M. Deng, "Fusion of multi-focus images based on a combination of the weighting fractal features and average gradient," Electron. Sci. Technol. 28(6), 68–71 (2015).

28. Y. Liu and Z. Wang, "Simultaneous image fusion and denoising with adaptive sparse representation," IET Image Process. 9(5), 347–357 (2015). doi:10.1049/iet-ipr.2014.0311

Biography

Shuo Liu is currently pursuing his PhD in information and communications engineering at Changchun University of Science and Technology, Changchun, China. His research interests include digital signal processing, image processing, and field programmable gate array technology.

Yan Piao received her PhD in digital signal processing from Chinese Academy of Sciences, Changchun Institute of Optics, Changchun, China, in 2000. She is a professor at the School of Electronics and Information Engineering, Changchun University of Science and Technology, Changchun, China. Her major research interests include digital signal processing, image processing, and three-dimensional imaging technology.

Muhammad Tahir received his BS degree in telecommunication engineering and his MS degree in electrical engineering from Government College University Faisalabad and COMSATS Institute of Information Technology Islamabad in 2008 and 2013, respectively. His major research interests are RF propagation and energy optimization in wireless sensor networks, wireless body area networks, and underwater wireless sensor networks. Currently, he is pursuing his PhD in telecommunication engineering from the School of Electronics and Information Engineering, Changchun University of Science and Technology, Changchun, China.

Keywords: Image fusion; Infrared imaging; Infrared radiation; Visible radiation; Image quality; Image enhancement; Optical engineering
