Review on short-wavelength infrared laser gated-viewing at Fraunhofer IOSB
27 September 2016
Abstract
This paper reviews the work done at Fraunhofer IOSB (and its predecessor institutes) over the past ten years in the area of laser gated-viewing (GV) in the short-wavelength infrared (SWIR) band. Experimental system demonstrators in various configurations have been built to show the potential for different applications and to investigate specific topics. The wavelength of the pulsed illumination laser is 1.57 μm and lies in the invisible, retina-safe region, which permits much higher pulse energies with respect to eye safety than wavelengths in the visible or near-infrared band. All systems consist of gated Intevac LIVAR® cameras based on EBCCD/EBCMOS detectors sensitive in the SWIR band. This review comprises military and civilian applications in the maritime and land domains—in particular vision enhancement in poor visibility, long-range applications, silhouette imaging, 3-D imaging by the sliding gates and slope methods, bistatic GV imaging, and looking through windows. In addition, theoretical studies that were conducted—e.g., estimating 3-D accuracy or modeling range performance—are presented. Finally, an outlook on future work in the area of SWIR laser GV at Fraunhofer IOSB is given.

1. Introduction

A gated-viewing (GV) system consists of a pulsed laser illuminator and a synchronizable GV camera. After the laser pulse is emitted, the GV camera waits a predefined delay time before the detector elements integrate all photons that arrive within a very short integration time. Only laser photons returning from the corresponding range gate are collected; the fore- and background are suppressed. The camera delay time determines the gate position and the integration time determines the gate length. The result is a range-gated image with a high target/background contrast, as can be seen in the right image of Fig. 1 for a vehicle at a distance of 480 m and a laser wavelength of 1.57 μm. For comparison, a nongated, passive image of the vehicle in the wavelength region between 950 and 1650 nm in the short-wavelength infrared (SWIR: 1 to 3 μm) is shown on the left.
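
The timing relations described above translate directly into gate geometry via the round-trip speed of light: the delay time sets the gate start and the integration time sets the gate depth. A minimal sketch of this conversion is given below; the numerical values are illustrative and not taken from a specific IOSB system.

```python
C = 299_792_458.0  # speed of light in vacuum (m/s)

def gate_from_timing(delay_s: float, integration_s: float) -> tuple[float, float]:
    """Return (gate start range, gate depth) in meters for a given camera
    delay and integration time; the factor 2 accounts for the round trip."""
    return C * delay_s / 2.0, C * integration_s / 2.0

def timing_from_gate(gate_start_m: float, gate_depth_m: float) -> tuple[float, float]:
    """Inverse mapping: delay and integration time for a desired range gate."""
    return 2.0 * gate_start_m / C, 2.0 * gate_depth_m / C

# Example: a gate starting at 480 m with a depth of 30 m (illustrative values)
delay, integration = timing_from_gate(480.0, 30.0)
print(f"delay = {delay * 1e6:.2f} us, integration time = {integration * 1e9:.0f} ns")
```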

Fig. 1

Passive SWIR (a) and active SWIR GV image (b) of a vehicle at a distance of 480 m (M400).

OE_56_3_031203_f001.png

In Sec. 2, a brief historical overview of GV experiments conducted at Fraunhofer IOSB and its predecessor research institutes over the past 40 years is given. The main Sec. 3 is divided into 11 subsections giving a broad overview of the SWIR laser GV activities performed.

2. Historical Overview

The very first experiments in the area of laser GV at the Fraunhofer IOSB predecessor research institute for optics (FGAN-FfO, Tübingen) were conducted in the mid-1970s. In 1976, a GEN-2 image intensifier was combined with a laser diode array emitting in the near infrared (NIR: 750 nm to 1 μm) at a wavelength of λ=852.5 nm. At the photocathode of the image intensifier, the incoming NIR photons are converted into electrons, which are accelerated by a high voltage through a microchannel plate, where they are multiplied, and onto a phosphor screen. By exact control of the high-voltage timing, only photons from a predefined distance range—the so-called "gate"—reach the screen and a range-gated image is obtained. In Fig. 2, two NIR GV images captured with this first demonstrator system are shown. In the left image, the gate was set at the same range as a vehicle and a bar target in the scene; thus, an NIR image of these objects is produced. In the right image, the gate was set behind the vehicle and the bar target, at the background, so a silhouette image is obtained.

Fig. 2

Range-gated images of a scene with a vehicle and a bar target obtained by the GV system demonstrator operating in the NIR spectral range with different gating parameters. (a) Gate at the target range showing the vehicle. (b) Gate behind the target range at the background showing the silhouette of the vehicle.

OE_56_3_031203_f002.png

In 1985, the midwavelength infrared (MWIR: 3 to 5 μm) spectral band was also studied for active imaging. For these investigations, only spectral filtering was applied instead of range gating because available MWIR cameras offered no gating capability. A deuterium fluoride (DF) laser with a wavelength of λ=3.8 μm was used as the illumination source. In Fig. 3, images of the hot flame of a camping stove with a text board behind it, captured with an indium antimonide (InSb) camera, are shown. The left one is a passive image without spectral filtering; only the hot flame can be seen. In the right image, the DF laser illuminates the text board through the flame and a very narrow band-pass filter with high transmission at the laser wavelength is mounted in front of the detector. Due to the active laser illumination and the spectral filtering, the text can be recognized through the hot flame.

Fig. 3

MWIR images of an InSb camera showing a scene with a very hot flame of a camping stove and a text board behind. (a) Without laser illumination and without spectral filtering. (b) With active illumination using a DF laser with a wavelength of λ=3.8  μm and with narrow band-pass filtering at the laser wavelength.

OE_56_3_031203_f003.png

In the mid-1990s, the U.S. company Intevac Photonics opened the SWIR spectral region for imaging by developing the laser illuminated viewing and ranging (LIVAR®) systems based on indium gallium arsenide (InGaAs)/indium phosphide (InP) transferred electron (TE) photocathodes and electron bombarded charge-coupled devices (EBCCD) or, later, electron bombarded complementary metal-oxide semiconductors (EBCMOS). These InGaAs/InP photocathodes are sensitive in the SWIR spectral band between 950 and 1650 nm with a quantum efficiency >20% at 1550 nm. Concerning eye safety, this development was an important breakthrough in active imaging because it allows the scene to be illuminated with lasers emitting in the so-called retina-safe wavelength region beyond 1400 nm.

In Fig. 4, the nominal ocular hazard distance (NOHD), which is the safety distance for ocular exposure, and the extended NOHD, which is the safety distance for viewing through optical aids such as binoculars, are plotted versus the laser wavelength for a pulsed laser with a pulse energy Ep of 50 mJ, a beam diameter db of 8 mm, a full-angle beam divergence φb of 4.4 mrad, a pulse repetition frequency PRF of 10 Hz, and a pulse duration τp of 10 ns.

Fig. 4

Safety distance versus laser wavelength for the laser parameter in the plot title. Blue solid line: NOHD. Red dashed line: extended NOHD. (Calculated with LaserSafe PC Professional Edition Version 4.00).

OE_56_3_031203_f004.png

The advantage of the SWIR compared to the NIR spectral band with regard to eye safety can be clearly seen in Fig. 4. At a wavelength of 1500 nm, the NOHD and the extended NOHD are reduced by factors of 338 and 207, respectively, compared to a wavelength of 800 nm, for which NOHD=10153 m and extended NOHD=726035 m.
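
How such safety distances are obtained can be sketched with the common far-field estimate, in which the beam diameter grows linearly with range and the NOHD is the range where the per-pulse radiant exposure falls to the maximum permissible exposure (MPE). The MPE value below is a purely illustrative placeholder; real values depend on wavelength, pulse duration, and repetition rate (IEC 60825-1/ANSI Z136.1), which is why the figure above was calculated with dedicated software.

```python
import math

def nohd_m(pulse_energy_j: float, beam_diameter_m: float,
           divergence_rad: float, mpe_j_per_m2: float) -> float:
    """Range at which the per-pulse radiant exposure of a diverging beam
    drops to the MPE; beam diameter at range R approximated as d_b + phi_b * R."""
    d_at_mpe = math.sqrt(4.0 * pulse_energy_j / (math.pi * mpe_j_per_m2))
    return max(0.0, (d_at_mpe - beam_diameter_m) / divergence_rad)

# Pulse parameters from the text; the MPE below is a placeholder only.
# In the retina-safe band beyond 1400 nm the MPE is several orders of
# magnitude larger than in the NIR, which is what shrinks the NOHD there.
print(f"NOHD ~ {nohd_m(50e-3, 8e-3, 4.4e-3, mpe_j_per_m2=5e-3):.0f} m")
```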

Since the year 2000, active imaging in the SWIR spectral band has become an important research area at the Fraunhofer IOSB predecessor research institute FGAN-FOM due to its military potential and the commercial availability of the first LIVAR® camera, model 120. During the first years of the new millennium, a SWIR laser GV system demonstrator was built using this LIVAR® M120 camera for GV imaging and a Raman-shifted Nd:YAG laser with a wavelength of 1.54 μm for illumination. Since then, the improved successor cameras LIVAR® M400 and LIVAR® M506 have also been used at Fraunhofer IOSB for range-gated SWIR imaging in combination with an OPO-shifted Nd:YAG laser with a tunable SWIR wavelength or a fixed wavelength of 1.57 μm. The most relevant parameters of these GV cameras are listed in Table 1.

Table 1

Most relevant parameters of the three GV cameras LIVAR® M120, LIVAR® M400, and LIVAR® M506, which were used at Fraunhofer IOSB for range-gated SWIR imaging.

Parameter | LIVAR® M120 | LIVAR® M400 | LIVAR® M506
Type | EBCCD | EBCMOS | EBCMOS
Photocathode | InGaAs/InP TE | InGaAs/InP TE | InGaAs/InP TE
Spectral response | 950 to 1650 nm | 950 to 1650 nm | 950 to 1650 nm
Quantum efficiency | 20% at λ=1.55 μm | 20% at λ=1.55 μm | 20% at λ=1.55 μm
Dark current | <1 electron/(pixel·μs) at room temperature | <1 electron/(pixel·μs) at room temperature | <1 electron/(pixel·μs) at room temperature
Number of detector elements | 512×512 | 640×480 | 1280×1024
Typical image resolution | 512×512 | 640×480 | 640×480 (2×2 binning mode)
Size of detector elements | 13 μm×13 μm | 12 μm×12 μm | 6.7 μm×6.7 μm
Limiting resolution (lp/mm) | 32 | 28 | 28
Maximal frame rate (Hz) | 17 | 28.5 | 30
Digital video output depth | 10 bit | 10 bit | 10 bit
Dynamic range | 48 dB | 48 dB | 48 dB
Gate delay step size | 5 ns | 5 ns | 5 ns
Minimal gate width (ns) | 150 | 70 | 70

The findings and results of the different SWIR GV experiments conducted at Fraunhofer IOSB during the last decade are reviewed in the 11 subsections of the following section. The advantages and drawbacks compared to other sensors are shown. The camera model used for the GV images shown is given in the corresponding figure caption in brackets (M120/M400/M506).

3. Review of SWIR Laser GV Activities

3.1. Improving Vision in Poor Visibility

The characteristic feature of GV imaging is the possibility to suppress the fore- and background of an object of interest. On the one hand, by suppressing the background of an object, a much higher target/background contrast is obtained than for a nongated image (compare Fig. 1). On the other hand, backscattered photons from particles in the atmosphere between sensor and object are not captured, yielding a great potential for GV imaging in poor visibility. The left image in Fig. 5 shows an urban scene during dusk and natural haze in the visible spectral band (VIS: 380 to 750 nm). The middle image is a magnification of the left one and shows a house 2 km away on the opposite hill with a very low contrast. The right image is a GV image with a gate from 1800 to 2050 m. It was captured synchronously with the visual image.

Fig. 5

Urban scene in poor visibility due to dusk and natural haze. (a) Visual image. (b) Magnification of left image showing a house at the opposite hill with very low contrast. (c) Synchronously captured GV image (M506) of this house with a much higher contrast due to laser illumination and range-gating.

OE_56_3_031203_f005.png

The house can be recognized much more easily in the SWIR GV image than in the visual image due to the significantly higher contrast resulting from laser illumination and range gating. Compared to the SWIR range, a larger fraction of the visible photons from the house is absorbed or scattered by water droplets in the atmosphere and a smaller amount reaches the VIS camera. For the SWIR GV image, all laser photons backscattered from the atmosphere up to 1800 m are suppressed and only the laser photons backscattered from the range gate are collected by the GV camera.

From a military point of view, the improvement of vision in poor visibility due to artificial smoke is also of strong interest. In the scenario in Fig. 6, smoke was produced by a military smoke grenade at a distance of 1000 m. This smoke grenade was specifically designed to be effective in the visible spectral range. A vehicle and a bar target were positioned at a distance of 1640 m. The range gate of the GV system was set to a length of 24 m and centered on the target range (1628 to 1652 m).

Fig. 6

Scenario with smoke grenade at 1000 m and targets at 1640 m. (a) Visual image. (b) Three GV images (M120) at different times with a gate from 1628 to 1652 m.

OE_56_3_031203_f006.png

The targets can only be partially seen in the GV images if the density of the smoke is low enough. For dense smoke, all laser photons are scattered and absorbed within the smoke cloud and no laser photons from the range gate can reach the detector. For passive MWIR or LWIR images, there is nearly no impact by the smoke [MWIR, see Fig. 7(b)] due to the significantly higher transmittance at the longer wavelengths compared to the SWIR. Thus, passive thermal IR cameras should be the preferred sensors here. However, there are also smoke grenades that are explicitly designed for the thermal infrared region. In a static scenario, the GV image quality can be clearly improved by averaging consecutive frames of a GV image sequence [Fig. 7(a)].

Fig. 7

(a) Frame average of 20 consecutive frames of a GV image sequence in the smoke grenade scenario (M120). (b) MWIR image of the same scenario showing much less degradation by the smoke.

OE_56_3_031203_f007.png
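
The frame averaging applied for the static scenarios [Fig. 7(a) and later Fig. 10] is a plain mean over consecutive frames, which suppresses temporally uncorrelated noise roughly by the square root of the number of frames. A minimal sketch, assuming the GV sequence is available as a 3-D array:

```python
import numpy as np

def average_frames(sequence: np.ndarray, n_frames: int = 20) -> np.ndarray:
    """Average the first n_frames of a GV sequence (shape: frames x H x W)."""
    return sequence[:n_frames].mean(axis=0)

# Synthetic stand-in for a 20-frame GV sequence of a static scene
rng = np.random.default_rng(3)
scene = rng.integers(0, 1024, size=(120, 160)).astype(float)
frames = scene + rng.normal(0.0, 25.0, size=(20, 120, 160))
print(f"noise std before: {np.std(frames[0] - scene):.1f} DN, "
      f"after averaging: {np.std(average_frames(frames) - scene):.1f} DN")
```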

In addition to this situation with smoke from a typical military smoke grenade, we have also studied the potential of GV in a scenario with a person standing in the midst of artificial fog from a fog machine at a distance of 117 m. The fog fluid is a mixture of polyhydric alcohols and water. In Fig. 8, a visual image of the person in the fog is shown (a) together with the synchronously captured GV image (b) with a gate from 96 to 126 m.

Fig. 8

Person standing in the midst of fog from a fog machine. (a) In the visual image, only the top part of the person’s head can be seen; the rest is completely obscured by the fog. (b) In the synchronously captured GV image (M506), the person can be clearly recognized.

OE_56_3_031203_f008.png

The person in the fog can be clearly seen in the GV image due to the high reflectance of the clothes in the SWIR band. The skin has a very high absorption at λ=1.57 μm and thus appears nearly black (compare Sec. 3.7). If the density of the fog gets too high, as in the center of the GV image, too many laser photons are backscattered from the fog itself. So, in contrast to the situation of Fig. 6, the fog in front of the person is imaged, resulting in a low contrast of the person. For passive MWIR or LWIR images, there is nearly no impact by the fog due to the significantly higher transmittance at the longer wavelengths compared to the SWIR. Thus, again, passive thermal IR cameras should be the preferred sensors here.

In a further scenario, three oil tanks with burning diesel fuel were deployed at a distance of 150 m and the target vehicle was positioned at a distance of 450 m [Fig. 9(a)]. The range gate of the GV system was set to a length of 27 m and centered on the target range (437 to 464 m).

Fig. 9

Scenario with burning diesel fuel at 150 m and a target at 450 m. (a) Visual image. (b) Three GV images (M120) at different times with a gate from 437 to 464 m.

OE_56_3_031203_f009.png

The burning diesel produced very hot flames and dark smoke containing carbon particles. On the one hand, these carbon particles strongly attenuate the VIS and SWIR radiation. Therefore, no laser photons are transmitted to the target and none can reach the sensor from the range gate. This attenuation can be observed in the leftmost GV image in Fig. 9(b). On the other hand, the very hot flames of the burning diesel fuel cause severe turbulence due to significant thermal gradients. In the middle and right GV images of Fig. 9(b), the resulting scintillation and defocusing effects can be observed, respectively. Passive thermal cameras are no alternative in this scenario because MWIR and LWIR images are partially saturated and dazzled by the hot flames and gases. Again, in a static scenario, the GV image quality can be clearly improved by averaging consecutive frames of a GV image sequence (Fig. 10).

Fig. 10

Frame average of 20 consecutive frames of a GV image sequence in the burning diesel fuel scenario (M120).

OE_56_3_031203_f010.png

There can also be poor visibility due to pyrotechnics such as those that are illegally burned in soccer stadiums by "ultras" (fans prone to extreme behavior). Especially for police forces and security guards in stadiums, it is extremely important to maintain situational awareness. They have to know what potential troublemakers are doing behind pyrotechnic dazzle and smoke to prevent injuries, e.g., due to the very high temperatures of burning Bengal lights. Different pyrotechnics producing bright light effects and colored smoke have been investigated in terms of their influence on different sensors, including GV. As an example, one of these investigations is shown in the following. At a distance of 450 m, two persons were burning Bengal lights and swinging them back and forth, as is typical in the fan stands of a stadium. This experiment was conducted at night. Figure 11 shows the resulting images captured synchronously with a visual camera, a MWIR camera, and a GV system.

Fig. 11

Synchronously captured images of a scene with two persons burning Bengal lights at a distance of 450 m. (a) Visual image. (b) MWIR image. (c) GV image (M506).

OE_56_3_031203_f011.png

In the visual image of the scene (a), the center of the Bengal lights appears completely white because the sensor pixels are saturated by the dazzling light. Additionally, this bright light illuminates the surrounding smoke and effectively obstructs the view behind the Bengal lights. The saturation effect also occurs in the MWIR image of the scene (b) due to the very high temperature of the flares. Furthermore, the saturated area around the left Bengal light is even larger than in the visual image due to hot gases. Because the line-of-sight of the MWIR camera is slightly different from that of the visual camera, the right Bengal light is behind the person and the saturated area appears smaller. For the GV image of the scene (c), a large range gate between 30 and 930 m was used to get an impression of the whole scene. Despite this relatively large range gate, the exposure time of 6 μs is still much shorter than that of the other sensors, which need exposure times of several tens of milliseconds to obtain a sufficient signal. Due to this very short exposure time of the GV camera and the very narrow spectral filtering between 1524 and 1600 nm, the Bengal lights are almost completely suppressed in the GV image and the whole scene can be clearly observed. By shortening the range gate to a length of only 60 m, the Bengal lights are completely suppressed. So, in this scenario, visual and passive thermal IR cameras are inadequate, and a GV system is the preferred sensor.

In conclusion, GV can help to improve vision in poor visibility if the transmittance for the laser photons is sufficiently high. In a static scenario, frame averaging clearly enhances the image quality. A sensor mix consisting of visual, passive thermal, and active GV cameras offers a high probability of providing an image with the required quality.

3.2. Long-Range Application

Another classical GV application is long-range imaging for target observation and identification, especially using very small camera fields-of-view and a matched laser beam divergence of a few milliradians. In order to show the GV capability and the potential for long-range imaging, we have conducted several field trials in maritime and urban environments and collected a wide variety of GV images of different types of targets at distances up to 27 km. In Fig. 12, some sample GV images in the maritime domain are shown.

Fig. 12

GV imagery (M506) captured in maritime domain. The targets are civilian (top) and military (bottom) with distances between 3 and 10 km.

OE_56_3_031203_f012.png

The focal length of the optics was 500 mm, resulting in a camera field-of-view of 17.15 mrad×12.86 mrad. The illumination laser had a maximal pulse energy of 65 mJ. It was equipped with a beam shaping unit,1 which converts a Gaussian beam profile into a homogeneous top-hat. This beam shaping unit provided a speckle-reduced laser illumination with a beam divergence exactly matched to the camera field-of-view. The distances of the targets in Fig. 12 were 3 km for the civilian ships (top) and 5 and 10 km for the military vessels (bottom), for which frame averaging was applied to enhance image quality by noise reduction. In the lower right image of Fig. 12, the target can hardly be recognized, indicating that the maximal range of this GV system under the prevailing weather conditions is around 10 km, limited by the laser energy rather than by the image resolution. By narrowing the laser beam and illuminating only the central part of the camera field-of-view, a higher laser intensity on the target can be achieved, as for the GV image in Fig. 13.
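
The quoted fields-of-view follow from the detector format and the focal length. The sketch below reproduces the 17.15 mrad × 12.86 mrad value, assuming the M506 in 2×2 binning mode, i.e., 640×480 effective pixels with a 13.4 μm effective pitch derived from Table 1, behind a 500 mm lens; the second focal length corresponds to the 2032 mm optics mentioned further below.

```python
import math

def fov_rad(n_pixels: int, pixel_pitch_m: float, focal_length_m: float) -> float:
    """Full field-of-view along one detector axis behind a lens of given focal length."""
    return 2.0 * math.atan(n_pixels * pixel_pitch_m / (2.0 * focal_length_m))

pitch = 2 * 6.7e-6  # M506 in 2x2 binning mode (assumed effective pitch)
for f in (0.500, 2.032):  # focal lengths used in the text (m)
    h = fov_rad(640, pitch, f) * 1e3
    v = fov_rad(480, pitch, f) * 1e3
    print(f"f = {f * 1000:.0f} mm: {h:.2f} mrad x {v:.2f} mrad")
```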

Fig. 13

GV image (M120) captured in a land scenario. The target is a military vehicle at a distance of 7 km.

OE_56_3_031203_f013.png

The focal length of the optics was 1000 mm, resulting in a camera field-of-view of 6.66 mrad×6.66 mrad. The beam shaping unit used for the images in Fig. 12 was not applied here. The distance of the military vehicle in Fig. 13 was 7 km and three GV frames were averaged. Due to the narrowed laser beam, the laser intensity is quite high, but for greater distances the image resolution will matter. Therefore, the GV cameras were equipped with optics with a long focal length of 2032 mm, resulting in very narrow camera fields-of-view of 3.28 mrad×3.28 mrad (M120) and 4.22 mrad×3.17 mrad (M506). Some sample long-range GV images in an urban environment captured with these GV systems are shown in Fig. 14.

Fig. 14

GV imagery (left three: M506, right: M120) captured in urban scenarios. The targets are buildings and structures with distances between 6 and 27 km.

OE_56_3_031203_f014.png

For the left three GV images in Fig. 14, the beam shaping unit was again used, providing a speckle-reduced, homogeneous scene illumination with a laser beam divergence exactly matched to the narrow camera field-of-view. For the rightmost GV image in Fig. 14, a Gaussian laser beam illuminated only the center part of the camera field-of-view to obtain a higher laser intensity on the target. Again, to reduce noise, frame averaging of 5 or 10 GV images was applied. The GV images in Fig. 14 show buildings and structures at distances between 6 and 27 km; the latter is the record for long-range GV imaging at Fraunhofer IOSB.

In conclusion, GV shows a great potential for long-range applications including target observation and identification with ranges up to several tens of kilometers.

3.3. Speckle Reduction

If laser light is incident on a surface that is rough with respect to the laser wavelength, constructive and destructive interference of the different backscattered parts of the laser beam occurs due to its coherence. The result is a laser-illuminated image of the observed scene overlaid with the well-known speckle pattern. One approach to reduce this speckle effect is based on the summation of uncorrelated speckle patterns, which can be obtained by temporal, spatial, polarization, or wavelength diversity.2 We have studied the latter by using two different laser illuminators, one with a fixed wavelength of 1.54 μm and a spectral linewidth of 2.5 nm, the other with a wavelength tunable between 1.1 and 2.2 μm and a spectral linewidth of 9 nm based on an optical parametric oscillator.3,4 The theoretical speckle contrast can be expressed by5

Eq. (1)

C_{\mathrm{speckle}} = \frac{1}{\sqrt[4]{1+\left(4\sqrt{2}\,\pi\,\frac{\Delta\lambda}{\lambda^{2}}\,\sigma_{h}\right)^{2}}},
where Δλ and λ are the spectral linewidth and the center wavelength of the illumination laser and σh is the surface roughness, i.e., the root mean square of the surface height variations. Equation (1) predicts a reduction of the speckle contrast when using a spectrally broader illumination laser. In order to study this, a stone wall of a church at a distance of 445 m was used as the target. In Figs. 15 and 16 and Tables 2 and 3, some results of these studies are shown.

Fig. 15

Average images of 50 GV images (M400, loan of the French–German Research Institute of Saint-Louis, ISL) for different laser illuminations. (a) Fixed laser wavelength of 1.54 μm and narrow spectral linewidth of 2.5 nm. (b) Fixed laser wavelength of 1.53 μm and broader spectral linewidth of 9 nm. (c) Ten different laser wavelengths between 1.45 and 1.63 μm with a step size of 20 nm, with 5 GV images for each wavelength.

OE_56_3_031203_f015.png

Fig. 16

Average of 550 GV images (M400) for different laser illuminations. (a) Fixed laser wavelength of 1.54 μm and narrow spectral linewidth of 2.5 nm. (b) Eleven different laser wavelengths between 1.45 and 1.65 μm with a step size of 20 nm, with 50 GV images for each wavelength.

OE_56_3_031203_f016.png

Table 2

Speckle contrast for the three GV images in Fig. 15 calculated from the ROI marked with a white rectangle according to Eq. (2).

Image in Fig. 15 | (a) | (b) | (c)
Mean value in ROI (DN) | 298.989 | 505.030 | 502.290
Standard deviation in ROI (DN) | 28.257 | 22.432 | 18.962
Speckle contrast in ROI | 0.095 | 0.044 | 0.038

Table 3

Speckle contrast for the two GV images in Fig. 16 calculated from the ROI marked with a white rectangle according to Eq. (2).

Image in Fig. 16 | (a) | (b)
Mean value in ROI (DN) | 313.798 | 499.372
Standard deviation in ROI (DN) | 27.493 | 15.108
Speckle contrast in ROI | 0.088 | 0.030

For all GV images in Fig. 15, 50 frames were averaged. In Fig. 15(a), the narrow-band illumination laser was used. A considerable speckle pattern is present. Frame averaging does not result in any image enhancement because all speckle patterns are identical in this static scenario. In the middle GV image of Fig. 15, the spectrally broader illumination laser was used. The speckle pattern is clearly reduced. The remaining speckle pattern can be further reduced by tuning the center wavelength of the illumination laser over a large spectral range and capturing several GV images at discrete wavelengths for averaging. The speckle patterns obtained at the different wavelengths are less correlated with each other and thus better suited for frame averaging than those for a fixed wavelength. For the right GV image of Fig. 15, wavelengths between 1.45 and 1.63 μm with a step size of 20 nm were used and 5 GV images were captured for each wavelength. All 10×5=50 GV images were averaged. The resulting image shows a small further speckle reduction. This visual impression can be confirmed by comparing the calculated speckle contrast values in Table 2 according to

Eq. (2)

C_{\mathrm{speckle}} = \frac{\sigma_{I}}{\bar{I}},
where σI and Ī are the standard deviation and the mean value, respectively, of the intensity I within a region of interest (ROI) of the GV image in which the pixel gray values would be quite homogeneous if the speckle effect were neglected.
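
As an illustration of how Eqs. (1) and (2) are evaluated, the following sketch computes the theoretical contrast for a given linewidth and surface roughness and the measured contrast from an ROI; the parameter values and the synthetic ROI are illustrative and not taken from the experiments.

```python
import numpy as np

def theoretical_speckle_contrast(linewidth_m: float, wavelength_m: float,
                                 roughness_m: float) -> float:
    """Eq. (1): contrast reduction for broadband illumination of a rough surface."""
    x = 4.0 * np.sqrt(2.0) * np.pi * linewidth_m / wavelength_m**2 * roughness_m
    return float((1.0 + x**2) ** -0.25)

def measured_speckle_contrast(roi: np.ndarray) -> float:
    """Eq. (2): standard deviation over mean gray value in a homogeneous ROI."""
    return float(np.std(roi) / np.mean(roi))

# Illustrative: 9 nm linewidth at 1.54 um on a surface with 0.5 mm rms roughness (assumed)
print(theoretical_speckle_contrast(9e-9, 1.54e-6, 0.5e-3))

# Synthetic stand-in for the white-rectangle ROI of a GV image
roi = np.random.default_rng(0).gamma(shape=25.0, scale=20.0, size=(40, 60))
print(measured_speckle_contrast(roi))
```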

There is a reduction of the speckle contrast by a factor >2 when using the spectrally broader illumination laser. When additionally using 10 different wavelengths of the broader laser, a further reduction of the speckle contrast by 14% can be observed in Table 2. In total, a reduction of the speckle contrast of 60% was achieved. By averaging all 550 captured GV images in Fig. 16, the reduction of the speckle pattern is even more obvious.

For both GV images in Fig. 16, 550 frames were averaged. In Fig. 16(a), the narrow-band illumination laser was again used. The considerable speckle pattern is still present. For Fig. 16(b), wavelengths between 1.45 and 1.65 μm with a step size of 20 nm were used and 50 GV images were captured for each wavelength. All 11×50=550 GV images were averaged. The resulting image is nearly speckle-free. This visual impression can again be confirmed by comparing the calculated speckle contrast values in Table 3 according to Eq. (2).

In Table 3, a reduction of the speckle contrast by 66% is observable when using 11 different wavelengths of the spectrally broad illumination laser instead of one wavelength of the spectrally narrow illumination laser. Figure 17 shows some more sample results of this speckle reduction technique by wavelength diversity.

Fig. 17

GV images (M400) showing some sample results of the above speckle reduction technique by wavelength diversity. (a)–(d) A stone wall at a distance of 445 m, a clock at a distance of 665 m, a vehicle at a distance of 2405 m, and a cabin at a distance of 2455 m.

OE_56_3_031203_f017.png

For practical applications of this speckle reduction technique, a simultaneous combination of different wavelengths into one illumination system is an interesting and—because of the above results—a promising approach.

3.4. Measurement of Laser Reflectance Signatures

For range performance prediction of a GV system (compare Sec. 3.11), for image simulation, or for the assessment of target signature management, it is essential to measure the laser reflectance signature of the target. To this end, reference targets with a predetermined homogeneous reflectance at the considered laser wavelength are positioned at the target distance (in situ). Thus, the laser radiation backscattered from the reference objects and from the target itself is attenuated by the same atmosphere and the received intensities can be compared. In Fig. 18, a GV image of five reference targets made of Spectralon, which are used at Fraunhofer IOSB for SWIR GV, is shown.

Fig. 18

GV image (M506) of five diffuse reflectance targets consisting of Spectralon and mounted on a tripod for field application. The edge length of each square board is 20 cm.

OE_56_3_031203_f018.png

The laser reflectance values of the reference targets in Fig. 18 are 98%, 90%, 50%, 20%, and 5% (from left to right) for λ=1.57 μm. By plotting these reflectance values against the corresponding average pixel gray values in the GV image and fitting an appropriate function to these data points, a mapping between any target gray value and its corresponding reflectance value is obtained. With this mapping and for a spatially homogeneous laser beam profile, accurate measurements of the laser reflectance signature of a target, as in Fig. 19, can be conducted.
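
A minimal sketch of this calibration step is given below: the five reference reflectances are related to their average gray values by a fitted mapping, which is then applied to arbitrary target pixels. A first-order polynomial is assumed here, as the text only speaks of an appropriate function, and the gray values are invented for illustration.

```python
import numpy as np

# Reference reflectances at 1.57 um (from the text) and their measured
# average gray values in the GV image (illustrative numbers).
ref_reflectance = np.array([0.98, 0.90, 0.50, 0.20, 0.05])
ref_gray_dn = np.array([930.0, 860.0, 470.0, 205.0, 70.0])  # assumed

# Fit gray value -> reflectance (linear mapping assumed).
gray_to_reflectance = np.poly1d(np.polyfit(ref_gray_dn, ref_reflectance, deg=1))

# Apply the mapping to a target ROI (synthetic stand-in).
target_roi = np.full((50, 80), 390.0)  # DN
print(f"estimated target reflectance: {gray_to_reflectance(target_roi).mean():.2f}")
```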

Fig. 19

Measurement of the laser reflectance signature of a civilian target for different target aspect and sensor elevation angles. Top row: sensor elevation of 0 deg. Bottom row: sensor elevation of 20 deg. Columns: target aspect angles of 0, 45, 90, 135, 180, 225, 270, and 315 deg (from left to right).

OE_56_3_031203_f019.png

The images in Fig. 19 were captured at the Bundeswehr Technical Center for Weapons and Ammunition (WTD 91) in Meppen. There, a large, worldwide unique semicircular arc with a radius of 40 m and a movable carriage for mounting several electro-optical and radar sensors has been built. The sensor elevation angle can be raised from 0 to 90 deg. The center point of the arc is a turntable for the target, so any target aspect angle between 0 and 360 deg is available. The images in Fig. 19 were captured with a nongated high-resolution SWIR camera from Sensors Unlimited. The target was illuminated with a fiber-based, continuous-wave laser diode array with a wavelength of 1.55 μm equipped with a top-hat beam shaper. The beam divergence was smaller than the camera field-of-view, so the square laser spot can be clearly seen in the images in Fig. 19. Due to the low coherence of the laser illumination, the images are nearly speckle-free and the target signatures can also be used as pristine inputs for GV image simulation. In the bottom row of Fig. 19, the sensor elevation angle was 20 deg; these images are therefore relevant for modeling and simulation in airborne scenarios. The entire data collection comprises passive and active signatures in several spectral bands of a large variety of different civilian and military targets with elevation angles up to 60 deg.

In addition to this close-range situation, long-range laser reflectance measurements were also conducted in the maritime domain at the Surendorf site of the Bundeswehr Technical Center for Ships and Naval Weapons (WTD 71). In Fig. 20, some sample GV images of a cooperative midsized ship traveling in a small circle at a distance of 2 km are shown. The range gate had a length of 150 m and its position was updated according to the current target range measured by a laser range finder triggered by the illumination laser pulse.

Fig. 20

GV images (M506) of a midsized ship at a distance of 2 km for measurement of its laser reflectance signature at different aspect angles indicated by the arrows in the middle.

OE_56_3_031203_f020.png

In the rightmost GV image of Fig. 20, two square reference targets mounted on the ship can be seen; they have an edge length of 1 m. The ship has nearly the same reflectance as the left reference target. All GV images in Fig. 20 are raw images without any target segmentation. However, the fore- and background are completely black due to the specular behavior of the sea surface for a laser beam with a wavelength of 1.57 μm. Thus, SWIR GV provides a clear advantage in the maritime domain concerning target segmentation (compare Sec. 3.8).

3.5. Silhouette Imaging

If a target is located in front of a background with a sufficiently large reflectance at the illumination laser wavelength, it is possible to create a silhouette image of the target by setting the range gate of the GV system behind the target to image the background. In certain situations, the silhouette of a target can provide more information than a GV image showing the target with or without the background. In Fig. 21, three situations are shown in which the silhouette images of persons (bottom row) clearly offer more information than the conventional GV images of the persons (top row).

Fig. 21

GV images (M506) of persons (top row) and corresponding silhouette images (bottom row). The distance of the person in the left and middle images is 485 m. The distances of the two persons in the right images are 900 and 1000 m, respectively.

OE_56_3_031203_f021.png

For the upper left and upper middle GV images of Fig. 21, short range gates of 50.25 and 21 m were applied, respectively, so only the persons without background are imaged. In the corresponding silhouette images below, the shape of the hand-held objects can be recognized more easily, e.g., the two ends of the manpad in the middle column. In the right images of Fig. 21, a long range gate of 300 m was applied, so in the upper right GV image not only the persons but also the background is imaged. Due to the similar reflectance of the persons and the background at the illumination laser wavelength and due to the very high atmospheric turbulence (Cn2≈2.2·10−13 m−2/3) and thus strong scintillations at that time in the afternoon, the two persons merge with the background in the upper GV image. One person can barely be seen and the other one cannot be recognized at all. In the lower image, both persons' silhouettes can be clearly seen.

In Fig. 22, the comparisons between conventional GV image and silhouette image of two vehicles and a cabin are shown.

Fig. 22

GV images (left and middle: M506, right: M120) of two vehicles and a cabin (top row) and corresponding silhouette images (bottom row). The distance of the vehicle in the left images is 215 m; the distance of the cabin in the middle images is 2455 m; and the distance of the vehicle in the right images is 7200 m.

OE_56_3_031203_f022.png

In the upper left and upper middle GV images of Fig. 22, one can see that the vehicle and the cabin have a very low laser reflectance at the top half and the roof, respectively. The same is true for the wheels of the vehicle in the upper right image of Fig. 22. Thus, the silhouette images below significantly simplify target segmentation here.

In a bistatic system configuration (compare Sec. 3.9), i.e., with the illumination laser and the GV camera considerably spatially separated from each other, not only the conventional silhouette but also the target shadow on the background is visible in the silhouette image. For a planar background, this results in two geometrically identical target silhouettes that are only translated with respect to each other. The GV images in Fig. 23 were captured in a bistatic configuration with a baseline—i.e., the distance between laser and camera—of 2 m.

Fig. 23

(a) GV image (M506) of two persons at a distance of 225 m and (b) corresponding silhouette image. Due to the bistatic system configuration, two silhouettes are visible for each person.

OE_56_3_031203_f023.png

While two persons can be seen in Fig. 23(a), four silhouettes are present in the corresponding silhouette image on the right—two conventional silhouettes and two geometrically similar shadows. This effect of bistatic system configurations on silhouette images has to be kept in mind when interpreting silhouette images, e.g., when counting persons.

In conclusion, for a well-reflecting background, silhouette imaging can simplify target segmentation if the target has parts with low laser reflectance to indicate the target shape. In bistatic system configurations, this is more complicated due to the additional, geometrically similar shadows present in the silhouette images.

3.6. Three-Dimensional Imaging

By suitable processing of several GV images with different gate positions, a 3-D reconstruction of the observed scene can be achieved. There are mainly two methods for 3-D imaging with GV images. The first one is the so-called sliding gates or tomography technique.3,4,6–11 By successively increasing the position of the range gate and capturing at least one GV image for each position, the scene is sampled in depth. The fastest way of determining a range image from this sliding gates sequence is to assign to each pixel, as its range value, the gate position for which its gray value is maximal. Two examples of this fast 3-D reconstruction of a large- and a small-scale scene are shown in Figs. 24 and 25.
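
A minimal sketch of the fast reconstruction is shown below: given a stack of GV images, one per gate position, each pixel is assigned the gate position of its maximum gray value. The stack and gate parameters are synthetic placeholders.

```python
import numpy as np

def fast_range_image(stack: np.ndarray, gate_start_m: float,
                     gate_step_m: float) -> np.ndarray:
    """Fast sliding-gates reconstruction: per pixel, the gate position (m)
    at which the gray value is maximal. stack shape: (n_gates, H, W)."""
    return gate_start_m + np.argmax(stack, axis=0) * gate_step_m

# Synthetic sliding-gates sequence: 20 gate positions with a 20 m step
stack = np.random.default_rng(1).random((20, 120, 160))
range_image = fast_range_image(stack, gate_start_m=400.0, gate_step_m=20.0)
print(range_image.shape, float(range_image.min()), float(range_image.max()))
```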

Fig. 24

Fast 3-D reconstruction of an urban scene. (a) 16 GV images (M120) out of a sliding gates sequence consisting of 20 GV images with a gate shift step size of 20 m and a gate length of 100 m. (b) Result of the fast 3-D reconstruction. The camera viewing angle was horizontally and vertically changed and the scene was textured with the pixel gray values.

OE_56_3_031203_f024.png

Fig. 25

Fast 3-D reconstruction of a ship. (a) 10 GV images (M506) out of a sliding gates sequence consisting of 150 GV images with a gate shift step size of 0.75 m and a gate length of 20.25 m. (b) Color-coded range image as result of the fast 3-D reconstruction.

OE_56_3_031203_f025.png

In Fig. 24(b), the fast 3-D reconstruction of an urban scene from the sliding gates sequence on the left is shown. Due to the changed camera viewing angle, the scene can be interpreted fairly well in terms of distance estimation for situational awareness and mission planning.

In Fig. 25(b), the fast 3-D reconstruction of a ship from the sliding gates sequence on the left is shown. By comparing the colors with the corresponding range values, a ship length of approximately 50 m can be estimated.

A more precise way of determining a range image from the sliding gates sequence is to fit an appropriate continuous function to the pixel gray values as a function of gate position [compare Fig. 27(b)] and to define the gate position of a certain point of this fit—e.g., the center point—as the range value. A sample result of this precise 3-D reconstruction of a vehicle is shown in Fig. 26.
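
One possible realization of this per-pixel fit is sketched below. The paper does not specify the fit function; a Gaussian-shaped gate response is assumed here purely for illustration, and the fitted center is taken as the range value.

```python
import numpy as np
from scipy.optimize import curve_fit

def gate_response(z, amplitude, center, width, offset):
    """Assumed smooth model of a pixel's gray value versus gate position."""
    return offset + amplitude * np.exp(-0.5 * ((z - center) / width) ** 2)

def precise_range(gate_positions_m, gray_values):
    """Fit the model and return the fitted center as the pixel's range."""
    p0 = (gray_values.max() - gray_values.min(),
          gate_positions_m[np.argmax(gray_values)], 5.0, gray_values.min())
    popt, _ = curve_fit(gate_response, gate_positions_m, gray_values, p0=p0)
    return popt[1]

# Synthetic profile for a 14-step sequence with a 1.5 m gate shift (as in Fig. 26)
z = 2395.0 + 1.5 * np.arange(14)
g = gate_response(z, 600.0, 2404.2, 4.0, 50.0) + np.random.default_rng(2).normal(0, 5, 14)
print(f"fitted range: {precise_range(z, g):.2f} m")
```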

Fig. 26

Precise 3-D reconstruction of a vehicle. (a) 10 GV images (M400) out of a sliding gates sequence consisting of 14 GV images with a gate shift step size of 1.5 m and a gate length of 9.75 m. For each image, the speckle reduction technique by wavelength diversity (compare Sec. 3.3) was applied. (b) Color-coded range image as result of the precise 3-D reconstruction.

OE_56_3_031203_f026.png

Fig. 27

Measurement of the pixel gray value profiles for different intensities. (a) GV image (M506) of the reference targets with well-defined laser reflectance values at a distance of 95 m and indicated ROIs. (b) Corresponding average pixel gray values (data points) within the ROIs versus the position of the range gate for a sliding gates sequence with gate shift step size of 0.75 m and a gate length of 23.5 m. The solid lines are fitted piece-wise polynomial curves that represent the pixel gray value profiles. The vertical lines indicate the slope/plateau transitions of the corresponding fits.

OE_56_3_031203_f027.png

For each GV image of the sliding gates sequence in Fig. 26(a), the speckle reduction technique by wavelength diversity (compare Sec. 3.3) with eight different wavelengths was applied. By comparing the colors in the range image of Fig. 26(b) with the corresponding range values, a vehicle length of approximately 5 m can be estimated. In a previous work, it has been shown that high range accuracy was achieved for this result, with a range error of only 8 cm.3 Also in previous papers, the range error of the sliding gates method was extensively investigated as a function of the number of averaged GV images for each gate position8 and as a function of the gate shift step size.6 Furthermore, the range error was compared to the results of a 3-D flash LADAR system.9

The 3-D point clouds of the ship and the vehicle in Figs. 25 and 26 can serve as inputs of automatic target recognition (ATR) algorithms.12

A more sophisticated technique for 3-D imaging with GV images is the so-called slope method.13 In the GV images of the sliding gates sequence in Fig. 26, one can see at the side of the vehicle that the range gate does not rise and fall abruptly but gradually, mainly due to the gate rise and fall times of the EBCMOS GV sensor. By comparing the pixel gray values in this slope region with the maximal possible pixel gray values in the plateau region of the range gate, a relative position within the slope region can be obtained. In Fig. 27, five pixel gray value profiles were determined for different intensities by capturing a sliding gates sequence of well-defined reference targets (compare Fig. 18) at a distance of 95 m with a gate shift step size of 0.75 m and a gate length of 23.5 m.

The slope/plateau transitions in Fig. 27(b) are nearly independent of the intensity. For the five reference targets, one obtains rising slope lengths of 4.3, 4.3, 4.2, 4.2, and 5.5 m; plateau lengths of 10.7, 10.6, 9.8, 9.8, and 9.2 m; and falling slope lengths of 8.4, 8.5, 9.3, 9.2, and 9.6 m, respectively. On average, the rising slope has a length of 4.5 m, the plateau a length of 10 m, and the falling slope a length of 9 m. In order to cover a larger distance range, it is more convenient to process the falling slope and the plateau instead of the rising slope and the plateau. So, 3-D reconstruction over a distance range of 9 m can be achieved. By capturing slope and plateau GV images with a gate position difference of 9 m, the relative position within the slope can be calculated for each pixel by

Eq. (3)

x_{\mathrm{slope}} = 9\cdot\cos\left[\frac{1}{3}\cdot\arccos\left(2\cdot\frac{g_{\mathrm{slope}}-g_{0}}{g_{\mathrm{plateau}}-g_{0}}-1\right)+\frac{\pi}{3}\right]+4.5,
where gslope and gplateau are the pixel gray values in the slope and plateau image, respectively, and g0 is the dark pixel gray value, i.e., the noise level without laser illumination. Equation (3) is obtained by applying Cardano's formulas to invert the third-degree polynomial describing the falling slope. In Fig. 28, 3-D reconstruction of the scene with the reference targets is performed by this slope method.
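
A per-pixel evaluation of Eq. (3) can be sketched as follows; the inputs are a plateau image, a slope image, and the dark level, and the output is the relative position within the 9 m slope region. The image contents are placeholders.

```python
import numpy as np

def slope_position_m(g_slope: np.ndarray, g_plateau: np.ndarray,
                     g_dark: float) -> np.ndarray:
    """Eq. (3): relative position (0 to 9 m) within the falling gate slope."""
    ratio = 2.0 * (g_slope - g_dark) / (g_plateau - g_dark) - 1.0
    ratio = np.clip(ratio, -1.0, 1.0)  # guard against noise outside the valid range
    return 9.0 * np.cos(np.arccos(ratio) / 3.0 + np.pi / 3.0) + 4.5

# Placeholder plateau/slope image pair (captured 9 m apart in gate position)
plateau = np.full((4, 4), 500.0)
slope = np.linspace(100.0, 500.0, 16).reshape(4, 4)
print(np.round(slope_position_m(slope, plateau, g_dark=100.0), 2))
```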

Fig. 28

GV images (M506) of the reference targets in the (a) gate plateau and (b) gate slope. (c) 3-D reconstruction of the scene based on Eq. (3).

OE_56_3_031203_f028.png

In the color-coded range image in Fig. 28(c), one can see that the slope method for 3-D reconstruction provides the same range values for all reference targets independently of the received intensity and is, therefore, a very robust technique. Even for the very low intensity at the right reference target with a laser reflectance of only 5%, the range value is correct. We have implemented this slope method in the GV acquisition software to provide a live 3-D visualization. Slope and plateau GV images are captured alternately with a gate position difference of 9 m and a gate length of 23.5 m. The maximal frame rate of the GV system is limited by the laser pulse repetition rate of 20 Hz. Thus, alternately processing slope and plateau GV images results in a maximal 3-D frame rate of 20 Hz×1/2=10 Hz. In Figs. 29 and 30, 3-D reconstructions by the slope method of a dynamic scene with a cabin and two persons at a distance of 2455 m are shown.

Fig. 29

(a) and (b) Top row: two pairs (at two different times) of plateau and slope GV images (M506) of a scene with a cabin and two persons at a distance of 2455 m. The plateau and slope GV images of each pair were consecutively captured by the GV system with alternating gate positions. Bottom row: corresponding 3-D reconstructions based on Eq. (3).

OE_56_3_031203_f029.png

Fig. 30

(a) and (b) Two further pairs (at two different times) of plateau and slope GV images (M506, top row) as in Fig. 29 with corresponding 3-D reconstructions (bottom row).

OE_56_3_031203_f030.png

The dynamic scene in Figs. 29 and 30 reveals the great advantage of the slope method for 3-D reconstruction. Despite the fact that the person outside the cabin is moving and jumping into the cabin [Fig. 30(a)], the 3-D reconstructions were successful, and one gets a pretty good understanding of what is going on. By looking at the color assignment in the range images, a clear distinction of the position of the moving person can be made: at the same distance as the cabin but outside it [orange, Fig. 29(a)]; in front of the cabin [red, Fig. 29(b)]; just jumping into the cabin [orange, Fig. 30(a)]; and standing within the cabin [turquoise, Fig. 30(b)]. Additionally, in Fig. 29, a second person behind the cabin (blue) can be faintly recognized.

In conclusion, by the sliding gates or slope method, a GV system has a high potential to provide 3-D information from as few as two GV images, both on a large scale for situational awareness (distance estimation) and on a small scale for 3-D target reconstruction (input for ATR algorithms).

3.7. GV Imaging of Persons at λ = 1.57 μm

Compared to conventional images in the visual spectral band, persons look very different in SWIR images14 and hence also in GV images based on laser illumination at a wavelength of 1.57 μm. On the one hand, human skin has a very low reflectance of about 15% at λ=1.57 μm, independently of the ethnicity of the person.15 Even reflectance values of 10%16 and 3% to 5%17 have been reported. The reason for this low reflectance is the absorption peak at λ=1.45 μm of the water within the skin. Therefore, human skin appears nearly black in GV images at λ=1.57 μm. On the other hand, human hair—head hair as well as beards and eyebrows—always appears bright due to its high reflectance at λ=1.57 μm, independent of the hair color. In Fig. 31, some sample GV images of head hair and foreheads under laser illumination at a wavelength of 1.57 μm are shown.

Fig. 31

GV images (M506) of head hair and foreheads illustrating the characteristics of laser illumination at a wavelength of 1.57 μm.

OE_56_3_031203_f031.png

In the GV images of Fig. 31, the above characteristics of laser illumination at a wavelength of 1.57 μm—dark skin and bright hair—are confirmed. The hands and neck also appear dark at λ=1.57 μm in Fig. 32(b). A visual image (a) and a MWIR image (c) of the person are also shown for comparison.

Fig. 32

(a) Comparison of visual image, (b) GV image (M506) at λ=1.57  μm, and (c) MWIR image of a person.

OE_56_3_031203_f032.png

In the MWIR image of the person in Fig. 32(c), it can be seen that the visible skin parts are—in contrast to the GV image—very bright due to the emission of thermal radiation. In comparison with the visual image in Fig. 32(a), the high reflectance of the clothes in the GV image (b) is noticeable. We have measured the laser reflectance at λ=1.57 μm of a large variety of different clothes—civilian clothes as well as military uniforms.18 In Fig. 33(a), a sample GV image from these measurements with reference targets on the left (compare Sec. 3.4) and three persons with different clothes on the right is depicted.

Fig. 33

(a) GV image (M506) of four reference targets and three persons wearing different clothes for the purpose of measuring the laser reflectance of the clothes at λ=1.57  μm. (b) Visual and GV image (M506) of persons with high reflecting clothes except one very low reflecting pair of trousers.

OE_56_3_031203_f033.png

The laser reflectance of clothes strongly depends on the material of which they are made—cotton, polyester, etc.—and on the weave—denim, twill, corduroy, etc. In Fig. 33(b), the two leftmost pairs of trousers should be pointed out. Both are black in the visual image (a) but differ significantly in the GV image (b) under laser illumination at a wavelength of 1.57 μm. The left pair has a very low laser reflectance and appears completely black. If the jacket of the left person were made of the same material and weave, the person would appear nearly black and might be completely missed at larger distances. Here, a silhouette image of this person would be beneficial (compare Sec. 3.5). The right pair of the two has a very high laser reflectance and appears completely white. Thus, predicting the laser reflectance of unknown clothes is quite difficult.

By using optics with a long focal length, a small field-of-view of the GV camera and thus a large number of pixels on the person can be achieved. By additionally applying a small range gate and thus suppressing the fore- and background of the person, a GV image with a high contrast is obtained. In Fig. 34, a collection of GV images of persons at a distance of 600 m performing several actions is shown.

Fig. 34

GV images (M506) of persons at a distance of 600 m performing several actions.

OE_56_3_031203_f034.png

In Fig. 34, many details of the persons can be recognized and the performed action can be determined. Hence, a GV system can clearly support police and military forces in observation and reconnaissance missions.

3.8. Maritime and Aerial Targets

In sea and air scenarios, maritime and aerial targets are surrounded by water and air, respectively. The water surface has a specular reflection behavior at the laser wavelength of an SWIR GV system; thus, no laser intensity from the water surface reaches the GV camera directly, as can be seen in Fig. 35(a). Here, the sailing boat at a distance of 850 m appears nearly segmented from the background. However, there are two effects that result in a real or apparent reflection from the water surface, which have to be taken into account when interpreting maritime GV images. The first effect is the specular reflection at the water surface of laser photons backscattered from the target. This results in a reflection in the water below the target, as can be seen for the boat at a distance of 600 m in Fig. 35(b). It occurs for calm sea states with low wave heights, i.e., only for stationary or slowly moving targets. Mirage is the second effect and can be seen for the boat at a distance of 3600 m in Fig. 35(c). A vertically flipped image below the target itself is produced; the axis of reflection in the GV image is indicated by a yellow dashed line. This effect occurs at longer distances and depends on the sensor elevation angle, the height-dependent air temperature gradient, and the temperature difference between the water surface and the air directly above it.

Fig. 35

(a) GV images (M506) of maritime targets showing the capability for target segmentation, (b) indirect specular reflections, and (c) mirage effect at longer distances.

OE_56_3_031203_f035.png

In an aerial scenario as well, mostly only laser photons from the target reach the GV system, as can be seen in Fig. 36. Thus, sea and air scenarios are predestined for the use of a laser range finder detector to measure the target range via the time-of-flight of the illumination laser pulse and to set the position of the range gate to this distance. Especially in the scenarios in Fig. 36, the helicopters and the grenade are flying very fast and a manual setting of the range gate position is no longer possible.

Fig. 36

GV images (M506) of aerial targets at distances of (a) 635 m, (b) 1800 m, and (c) 3300 m. Different optics were used resulting in field-of-views of 2.46  deg×1.84  deg, 0.24  deg×0.18  deg, and 0.98  deg×0.74  deg, respectively. The lengths of the range gate were (a) and (b) 30 m and (c) 100 m. The position of the range gate was automatically tracked by the use of a laser range finder.

OE_56_3_031203_f036.png

A great advantage of an active imaging system in sea scenarios is that the hull of a ship and its number or name often have different laser reflectance values. Thus, the vessel identification number can usually be recognized in GV images. In Figs. 37 and 38, GV images of a vessel at different distances are shown together with magnifications of its vessel number. The field-of-view of the GV camera is 0.98 deg×0.74 deg in Fig. 37 and 0.24 deg×0.18 deg in Fig. 38. The beam divergence of the illumination laser is always matched to these fields-of-view.

Fig. 37

GV images (M506) of a vessel at distances of (a) 2 km, (b) 3 km, and (c) 4 km. The field-of-view of the GV camera is 0.98  deg×0.74  deg and the beam divergence is matched to this field-of-view. In the upper left corner of each GV image, a magnification of the vessel number is shown (yellow rectangles).

OE_56_3_031203_f037.png

Fig. 38

GV images (M506) of a vessel at distances of (a) 5 km, (b) 6 km, and (c) 7 km. The field-of-view of the GV camera is 0.24  deg×0.18  deg and the beam divergence is matched to this field-of-view. In the upper left corner of each GV image, a magnification of the vessel number is shown (yellow rectangles).

OE_56_3_031203_f038.png

In Fig. 37, it can be seen that the maximal range for recognition of the vessel number is 3000 m for a GV camera field-of-view of 0.98 deg×0.74 deg. Using the narrower field-of-view of 0.24 deg×0.18 deg in Fig. 38, the maximal range can be increased to 6000 m.

To sum up, a GV system has a high potential for target recognition and identification at long ranges in sea and air scenarios (maritime/aerial domain awareness and fine tracking in C-RAM applications). Target segmentation is simplified by the low background signal, with the drawback that no silhouette imaging is possible.

3.9. Bistatic GV Imaging

In a bistatic GV system configuration, the illumination laser and the GV camera are considerably spatially separated from each other (compare the end of Sec. 3.5). From a military point of view, a bistatic configuration is of high interest because the detectability of the illumination laser by a hostile laser warning system is always a drawback of an active system. With a laser warning system, the GV camera itself cannot be located, e.g., for dazzling. The angle between the laser beam and the GV camera line-of-sight is called the bistatic angle. The larger the bistatic angle, the more of the target shadow on the background is present in the GV image. In Fig. 39, three GV images of persons walking at a distance of 250 m captured by a bistatic GV system are shown.

Fig. 39

Bistatic GV images (M506) of persons walking at a distance of 250 m with a bistatic angle of 0.5 deg. The range gates were (a) and (b) 230 to 280 m and (c) 230 to 430 m. The middle GV image is a cropped magnification to see more details.

OE_56_3_031203_f039.png

In Fig. 39(a), no differences compared to a monostatic GV image can be seen due to the short range gate of 50 m. When the persons cross each other, as in Fig. 39(b), an important difference shows up: the shadow line along the shape of the person in the foreground simplifies the separation from the person in the background. In a monostatic configuration, separation would be more difficult due to the very similar laser reflectance values of the persons' clothes. In Fig. 39(c), the range gate was increased to 200 m. Hence, the background is also imaged and the shadows of the persons on the background can be recognized, yielding a pretty good 3-D impression of the scene. The bistatic angle in this configuration was 0.5 deg.
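
For a target range that is large compared to the laser-to-camera baseline, the bistatic angle follows from simple triangle geometry; the sketch below reproduces the roughly 0.5 deg quoted here for a baseline of about 2 m (the exact trial geometry is not given, so this is only an approximation).

```python
import math

def bistatic_angle_deg(baseline_m: float, target_range_m: float) -> float:
    """Approximate bistatic angle between laser beam and camera line-of-sight
    for a distant target (small-angle triangle geometry)."""
    return math.degrees(math.atan2(baseline_m, target_range_m))

print(f"{bistatic_angle_deg(2.0, 250.0):.2f} deg")  # scenario of Fig. 39
print(f"{bistatic_angle_deg(2.0, 225.0):.2f} deg")  # scenario of Fig. 23
```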

In addition to this shadow effect, retroreflections can be significantly suppressed by a bistatic configuration.4,19 In Fig. 40, GV images of two vehicles at a distance of 600 m, captured simultaneously by a monostatic and a bistatic GV system, are compared. The monostatic GV image was captured by the Swedish Defence Research Agency FOI within the framework of a cooperation with the IOSB predecessor institute FGAN-FOM.

Fig. 40

Monostatic (a, M400) and bistatic (b, M120) GV images of two vehicles at a distance of 600 m. The bistatic angle in the right GV image was 27 deg. The range gates were 590 to 640 m and 750 to 770 m, respectively.

OE_56_3_031203_f040.png

In Fig. 40(a), strong retroreflections can be seen at the front lights of the left vehicle. In Fig. 40(b), these retroreflections are completely suppressed. Of course, the received laser intensity is much lower because fewer laser photons are reflected toward the GV camera under the large bistatic angle of 27 deg. In Ref. 4, some more scenarios are shown in which a bistatic configuration offers advantages compared to a monostatic configuration. A disadvantage of a bistatic configuration with a large bistatic angle is that not all target parts can be imaged. This can be seen in Fig. 40(b): the side of the left vehicle is not illuminated by the laser due to shadowing by the vehicle front.

Nevertheless, GV systems in a bistatic configuration provide several advantages over a monostatic configuration, including the military aspect that, even if the illumination laser is detected by a laser warning system, the position of the GV camera, and hence the location of the operator, remains unknown to a foreign reconnaissance system.

3.10.

Looking Through Windows

With a GV system operating in the SWIR spectral band, it is possible to look through windows, e.g., into car interiors. The laser transmission depends on the type of window, e.g., multilayer or insulated glass. In Fig. 41, a GV image of a person sitting in a vehicle with a partially opened window is shown.

Fig. 41

GV image (M506) of a person sitting in a vehicle with a partially opened window. The two rectangles indicate the ROIs used for estimating the laser transmission.

OE_56_3_031203_f041.png

The person in the vehicle can be recognized through the car glass despite the low laser reflectance of the skin and the jacket. Because the laser light passes through the glass twice, on illumination and on return, the measured gray-value ratio corresponds to the two-way transmission of the window. By calculating the mean gray values g_0 (upper red rectangle) and g_attenuated (lower yellow rectangle), the one-way laser transmission of the window can therefore be estimated by

Eq. (4)

\tau = \sqrt{g_\text{attenuated} / g_0} \approx \sqrt{200\ \text{DN} / 500\ \text{DN}} \approx 0.63.
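
A minimal Python sketch of this ROI-based estimate is given below. The image values and ROI coordinates are hypothetical, and the square root reflects the reading of Eq. (4) in which the measured gray-value ratio corresponds to the two-way (double-pass) transmission of the glass.

import numpy as np

def window_transmission(img: np.ndarray, roi_ref, roi_att) -> float:
    # Estimate the one-way laser transmission of a window from two ROIs:
    # roi_ref over a surface seen without the glass (open part of the window),
    # roi_att over a comparable surface seen through the glass.
    # ROIs are (row_min, row_max, col_min, col_max) in pixels.
    g0 = img[roi_ref[0]:roi_ref[1], roi_ref[2]:roi_ref[3]].mean()
    g_att = img[roi_att[0]:roi_att[1], roi_att[2]:roi_att[3]].mean()
    # Square root because the laser light passes through the glass twice.
    return float(np.sqrt(g_att / g0))

# Hypothetical example reproducing the numbers of Eq. (4):
img = np.full((480, 640), 500.0)      # ~500 DN in the unattenuated ROI
img[300:400, 200:300] = 200.0         # ~200 DN in the ROI behind the glass
tau = window_transmission(img, (100, 200, 200, 300), (300, 400, 200, 300))
print(f"estimated one-way window transmission: {tau:.2f}")   # ~0.63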

In the active SWIR images in Fig. 42, it can be clearly discriminated whether or not a person is sitting at the wheel of the vehicle, independently of the target aspect angle and the sensor elevation angle.

Fig. 42

Cropped, active SWIR images at λ=1.55  μm of a vehicle with person (top row) and without person (bottom row) at the wheel. The sensor elevation angle is 0 deg (first and second column) and 20 deg (third and fourth column). The target aspect angle is 0 deg (first and third column) and 90 deg (second and fourth column).

OE_56_3_031203_f042.png

The active SWIR images at λ=1.55  μm in Fig. 42 were captured with a nongated high-resolution SWIR camera from Sensors Unlimited. The target was illuminated with a fiber-based, continuous-wave laser diode array with a wavelength of 1.55  μm equipped with a top-hat beam shaper (compare Sec. 3.4).

Thus, SWIR GV systems can strongly support police and military forces in observation missions by enabling a view into interiors behind glass windows, e.g., car interiors or buildings, and thereby provide hints about what is happening inside.

3.11.

Analytical Range Performance Prediction

In order to predict the performance of a GV system for different tasks, e.g., target detection, recognition, and identification, as a function of the target distance, an analytical range performance model based on the target task performance (TTP) metric was developed by the U.S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate.20 Various system, target, and environment parameters are input to the model. Intermediate results are modulation transfer functions (MTFs), noise variances, and contrast threshold functions. The final results are range-dependent probability values for fulfilling a certain task, specified by the task difficulty parameter V50. We have implemented this analytical range performance model in MATLAB and extended it to slant-path applications.21 A sample final result is shown in Fig. 43.

Fig. 43

Output of the analytical range performance model: probability of fulfilling different tasks by an observer using a generic GV system as functions of the target distance. The tasks are target detection (blue, solid), target recognition (red, dotted), and target identification (green, dashed). The horizontal black line is the 75% level.

OE_56_3_031203_f043.png
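
The last step of such a model, mapping the number of resolvable cycles on the target to task probabilities like those plotted in Fig. 43, can be sketched with a commonly used form of the target transfer probability function. In the following minimal Python sketch, the mapping from range to resolvable cycles and the V50 values are hypothetical placeholders; in the full model, the cycles follow from the system MTFs, noise, target contrast, and target size, which is why the printed ranges do not reproduce the values of Fig. 43.

import numpy as np

def ttpf(v: np.ndarray, v50: float) -> np.ndarray:
    # Target transfer probability function: probability of task success for
    # v resolvable cycles on target and task difficulty parameter v50.
    ratio = v / v50
    e = 1.51 + 0.24 * ratio
    return ratio**e / (1.0 + ratio**e)

def resolvable_cycles(range_km: np.ndarray) -> np.ndarray:
    # Placeholder 1/R fall-off; the real model derives this quantity from the
    # MTFs, noise variances, and the contrast threshold function.
    return 40.0 / range_km

ranges = np.linspace(1.0, 20.0, 400)   # km
v = resolvable_cycles(ranges)
for task, v50 in [("detection", 2.0), ("recognition", 8.0), ("identification", 13.0)]:
    p = ttpf(v, v50)
    r75 = ranges[p >= 0.75].max()      # largest range with probability >= 75%
    print(f"{task}: 75% range ~ {r75:.1f} km")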

The generic GV system modeled in Fig. 43 for a specific environment and target achieved a 75% detection range of 14 km, a 75% recognition range of 5.1 km, and a 75% identification range of 4.3 km. The main effects of a slant laser beam propagation path on range performance are a significantly reduced atmospheric turbulence influence and a higher atmospheric transmission compared to a horizontal path; hence, a larger range is achievable. For modeling the atmospheric turbulence, we have considered two different height profiles of the refractive index structure parameter Cn2 up to a height of 20 km. The first is composed of the Kaimal/Walters-Kunkel profile22 for the atmospheric boundary layer, i.e., up to the inversion layer, and the Hufnagel-Valley 5/7 profile23 for the free atmosphere. The second was introduced by Kukharets and Tsvang and modified by Murphy.22 Both profiles are depicted in Fig. 44 for an inversion layer at a height of 3000 m.

Fig. 44

Height-dependent Cn2 profiles for analytically modeling the range performance of GV systems in slant-path scenarios.

OE_56_3_031203_f044.png
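
For reference, the Hufnagel-Valley part of the first profile can be evaluated with a few lines of Python; the default parameters below (upper-altitude wind speed of 21 m/s and a ground-level term of 1.7e-14 m^(-2/3)) correspond to the widely used HV 5/7 variant. The boundary-layer (Kaimal/Walters-Kunkel) part and the Kukharets-Tsvang profile are not reproduced here.

import numpy as np

def cn2_hufnagel_valley(h_m: np.ndarray, wind_ms: float = 21.0,
                        a0: float = 1.7e-14) -> np.ndarray:
    # Hufnagel-Valley Cn^2 profile in m^(-2/3) for height h_m in meters.
    # The default parameters yield the HV 5/7 profile.
    return (0.00594 * (wind_ms / 27.0) ** 2 * (1e-5 * h_m) ** 10 * np.exp(-h_m / 1000.0)
            + 2.7e-16 * np.exp(-h_m / 1500.0)
            + a0 * np.exp(-h_m / 100.0))

heights = np.array([10.0, 100.0, 1000.0, 3000.0, 10000.0, 20000.0])   # m
for h, c in zip(heights, cn2_hufnagel_valley(heights)):
    print(f"h = {h:7.0f} m   Cn2 = {c:.2e} m^(-2/3)")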

These Cn2 profiles enter the slant-path formulas for the turbulence MTF and the scintillation index. In addition to these turbulence extensions, we have changed the procedure for determining the atmospheric transmission. Instead of using Beer's law with a constant atmospheric extinction coefficient, the external software FASCODE (Ontar Corp.24) is called to calculate the atmospheric transmission from the extensive HITRAN database for each range step, depending on laser wavelength, sensor height, and target height. Several weather conditions and climates can be chosen.
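
The difference between the two approaches can be illustrated with the following Python sketch. The height-dependent extinction profile used here is a purely hypothetical stand-in for the per-step transmission values that FASCODE/HITRAN would deliver for a given laser wavelength, weather condition, and climate.

import numpy as np

def beer_lambert(range_m: float, sigma_per_m: float) -> float:
    # One-way transmission with a constant extinction coefficient (Beer's law).
    return float(np.exp(-sigma_per_m * range_m))

def slant_path_transmission(sensor_h_m: float, target_h_m: float, range_m: float,
                            n_steps: int, sigma_of_height) -> float:
    # One-way transmission accumulated over range steps along a slant path
    # with a height-dependent extinction coefficient sigma_of_height(h) in 1/m.
    ds = range_m / n_steps
    heights = np.linspace(sensor_h_m, target_h_m, n_steps)
    return float(np.exp(-np.sum(sigma_of_height(heights)) * ds))

# Hypothetical extinction profile decaying with height (scale height 1200 m).
sigma = lambda h: 1e-4 * np.exp(-h / 1200.0)

print(beer_lambert(5000.0, 1e-4))                                    # horizontal path, ~0.61
print(slant_path_transmission(1000.0, 4000.0, 5000.0, 500, sigma))   # slant path, ~0.92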

Given the multitude of experimental results, we were able to validate the analytical range performance model for horizontal paths. As an example, Fig. 45 shows good agreement between the maximum system range subjectively estimated from real measurements of a vessel (a) and the model results for the corresponding system, environment, and target parameters (b).

Fig. 45

Comparison between real measurements and theoretical performance prediction showing good agreement. (a) GV images (M506) of a vessel at ranges between 1500 and 7000 m. The display ranges in brackets (minimum to maximum image gray value) indicate a subjectively estimated maximum system range of 8 to 9 km (depending on the task). (b) Theoretical results of the analytical performance model for the corresponding system, environment, and target parameters, also yielding a maximum system range of 8 to 9 km (depending on the task).

OE_56_3_031203_f045.png

4.

Conclusion and Outlook

In this paper, a review of the SWIR laser GV activities of the past decade at Fraunhofer IOSB was presented. This review comprised military and civilian applications in the maritime and land domains, in particular vision enhancement in poor visibility, long-range applications, silhouette imaging, 3-D imaging by sliding gates and the slope method, bistatic GV imaging, and looking through windows. In addition, theoretical studies that were conducted, such as estimating 3-D accuracy or modeling range performance, were presented.

As future work, we will investigate in depth the potential of SWIR laser GV for penetrating pyrotechnic effects in soccer stadiums (compare Sec. 3.1) and will perform SWIR laser GV image simulation with experimental verification.

Acknowledgments

The authors would like to thank the Bundeswehr Technical Center for Weapons and Ammunition (WTD 91) in Meppen and the Bundeswehr Technical Center for Ships and Naval Weapons (WTD 71) site Surendorf for funding and providing infrastructures and targets during several measurement campaigns. Furthermore, the authors express their thanks to Frank Willutzki, Frank van Putten, Simon Brunner, and Richard Frank from Fraunhofer IOSB for their tireless assistance and support in carrying out the numerous experiments cited in this report.

References

1. M. Laurenzis et al., "Homogeneous and speckle-free laser illumination for range-gated imaging and active polarimetry," Opt. Eng. 51(6), 061302 (2012). http://dx.doi.org/10.1117/1.OE.51.6.061302
2. J. W. Goodman, "Some fundamental properties of speckle," J. Opt. Soc. Am. 66(11), 1145–1150 (1976). http://dx.doi.org/10.1364/JOSA.66.001145
3. B. Göhler, P. Lutzmann and G. Anstett, "3D imaging with range gated laser systems using speckle reduction techniques to improve the depth accuracy," Proc. SPIE 7113, 711307 (2008). http://dx.doi.org/10.1117/12.799740
4. E. Repasi et al., "Advanced short-wavelength infrared range-gated imaging for ground applications in monostatic and bistatic configurations," Appl. Opt. 48(31), 5956–5969 (2009). http://dx.doi.org/10.1364/AO.48.005956
5. J. C. Dainty, "The statistics of speckle patterns," Prog. Opt. 14, 1–46 (1976). http://dx.doi.org/10.1016/S0079-6638(08)70249-X
6. B. Göhler and P. Lutzmann, "Range accuracy of a gated-viewing system as a function of the gate shift step size," Proc. SPIE 8897, 889708 (2013). http://dx.doi.org/10.1117/12.2029590
7. M. Laurenzis et al., "Investigation of range-gated imaging in scattering environments," Opt. Eng. 51(6), 061303 (2012). http://dx.doi.org/10.1117/1.OE.51.6.061303
8. B. Göhler and P. Lutzmann, "Range accuracy of a gated-viewing system as a function of the number of averaged images," Proc. SPIE 8542, 854205 (2012). http://dx.doi.org/10.1117/12.974704
9. B. Göhler and P. Lutzmann, "Range accuracy of a gated-viewing system compared to a 3-D Flash LADAR under different turbulence conditions," Proc. SPIE 7835, 783504 (2010). http://dx.doi.org/10.1117/12.865097
10. P. Andersson, "Long-range three-dimensional imaging using range-gated laser radar images," Opt. Eng. 45(3), 034301 (2006). http://dx.doi.org/10.1117/1.2183668
11. J. Busck, "Underwater 3-D optical imaging with a gated viewing laser radar," Opt. Eng. 44(11), 116001 (2005). http://dx.doi.org/10.1117/1.2127895
12. W. Armbruster and M. Hammer, "Maritime target identification in flash-ladar imagery," Proc. SPIE 8391, 83910C (2012). http://dx.doi.org/10.1117/12.920264
13. M. Laurenzis et al., "3D range-gated imaging at infrared wavelengths with super-resolution depth mapping," Proc. SPIE 7298, 729833 (2009). http://dx.doi.org/10.1117/12.818428
14. B. E. Lemoff et al., "Automated, long-range, night/day, active-SWIR face recognition system," Proc. SPIE 9070, 90703I (2014). http://dx.doi.org/10.1117/12.2052716
15. S. Vyas, A. Banerjee and P. Burlina, "Estimating physiological skin parameters from hyperspectral signatures," J. Biomed. Opt. 18(5), 057008 (2013). http://dx.doi.org/10.1117/1.JBO.18.5.057008
16. C. C. Cooksey, B. K. Tsai and D. W. Allen, "Spectral reflectance variability of skin and attributing factors," Proc. SPIE 9461, 94611M (2015). http://dx.doi.org/10.1117/12.2184485
17. O. Steinvall et al., "Laser imaging of small surface vessels and people at sea," Proc. SPIE 7684, 768417 (2010). http://dx.doi.org/10.1117/12.849388
18. C. Grönwall et al., "Active and passive imaging of clothes in the NIR and SWIR regions for reflectivity analysis," Appl. Opt. 55(20), 5292–5303 (2016). http://dx.doi.org/10.1364/AO.55.005292
19. F. Christnacher et al., "Bistatic range-gated active imaging in vehicles with LEDs or headlights illumination," Proc. SPIE 7675, 76750J (2010). http://dx.doi.org/10.1117/12.852895
20. R. L. Espinola et al., "Modeling the target acquisition performance of active imaging systems," Opt. Express 15(7), 3816–3832 (2007). http://dx.doi.org/10.1364/OE.15.003816
21. B. Göhler and P. Lutzmann, "An analytical performance model for active imaging systems including slant-path applications," in 4th Int. Symp. on Optronics in Defense and Security (OPTRO '10) (2010).
22. R. R. Beland, "Propagation through atmospheric optical turbulence," in The Infrared and Electro-Optical Systems Handbook, Vol. 2, SPIE Optical Engineering Press, Bellingham, Washington (1993).
23. M. C. Roggemann and B. M. Welsh, Imaging Through Turbulence, CRC Press, Boca Raton, Florida (1996).

Biography

Benjamin Göhler received his diploma degree in mathematics from the Technical University of Karlsruhe in 2007. In 2007, he joined the Optronics Department at the Research Institute for Optronics and Pattern Recognition (FOM) in Ettlingen. In 2010, FOM became Fraunhofer IOSB where he is working in the Group Laser Sensors. His research area covers modeling of laser systems and processing of field trial data. He is a member of several bilateral and NATO working groups.

Peter Lutzmann received his diploma degree in physics from the University of Ulm in 1985. In 1985, he joined the Optronics Department at the Research Institute for Optics (FFO) in Tübingen. FFO became FOM in 2000 and Fraunhofer IOSB in 2010. Since 2010, he has been the leader of the Group Laser Sensors. His research area comprises coherent laser radar and laser imaging techniques. He is a member of several bilateral and NATO working groups.

© 2016 Society of Photo-Optical Instrumentation Engineers (SPIE)
Benjamin Göhler and Peter Lutzmann "Review on short-wavelength infrared laser gated-viewing at Fraunhofer IOSB," Optical Engineering 56(3), 031203 (27 September 2016). https://doi.org/10.1117/1.OE.56.3.031203
Published: 27 September 2016
KEYWORDS: Cameras, Short wave infrared radiation, Reflectivity, Sensors, 3D image reconstruction, Imaging systems, Infrared lasers