A gated-viewing (GV) system consists of a pulsed laser illuminator and a synchronizable GV camera. After the laser pulse is emitted, the GV camera waits a predefined delay time before the detector elements integrate all photons that arrive within a very short integration time. Only laser photons returning from the corresponding range gate are collected; the fore- and background are suppressed. The camera delay time determines the gate position and the integration time determines the gate length. The result is a range-gated image with a high target/background contrast, as can be seen in the right image of Fig. 1 for a vehicle at a distance of 480 m and a laser wavelength of 1.57 μm. For comparison, a nongated, passive image of the vehicle in the wavelength region between 950 and 1650 nm in the short-wavelength infrared (SWIR: 1 to 2.5 μm) is shown at the left.
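The mapping between gate timing and gate geometry follows directly from the round-trip time of light: a camera delay t places the gate start at R = c·t/2, and the integration time sets the gate length. A minimal sketch of this relation (function name and example values are illustrative, not taken from the system described here):

```python
C = 299_792_458.0  # speed of light (m/s)

def gate_timing(gate_start_m, gate_length_m):
    """Camera delay and integration time for a given range gate.

    The factor 2 accounts for the round trip of the laser pulse.
    """
    delay_s = 2.0 * gate_start_m / C
    integration_s = 2.0 * gate_length_m / C
    return delay_s, integration_s

# A 24-m gate centered on a target at 480 m:
delay, integration = gate_timing(480.0 - 12.0, 24.0)
```

For this example, the delay is about 3.1 μs and the integration time about 160 ns, which illustrates why GV exposure times are so much shorter than those of passive cameras.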
In Sec. 2, a brief historical overview of some GV experiments conducted over the past 40 years at Fraunhofer IOSB and its predecessor research institutes is given. The main Sec. 3 is divided into 11 subsections giving a broad overview of the SWIR laser GV activities performed.
The very first experiments in the area of laser GV at the Fraunhofer IOSB predecessor research institute for optics (FGAN-FfO, Tübingen) were conducted in the mid-1970s. In 1976, a GEN-2 image intensifier was combined with a laser diode array emitting in the near infrared (NIR: 750 nm to 1.4 μm). At the photocathode of the image intensifier, the incoming NIR photons are converted into electrons, which are accelerated by a high voltage through a microchannel plate, where they are multiplied, before striking a phosphor screen. By exact control of the high-voltage timing, only photons from a predefined distance range (the so-called "gate") reach the screen, and a range-gated image is obtained. In Fig. 2, two NIR GV images captured with this first demonstrator system are shown. In the left image, the gate was set at the same range as a vehicle and a bar target in the scene; thus, an NIR image of these objects is produced. In the right image, the gate was set behind the vehicle and the bar target, on the background, so a silhouette image is obtained.
In 1985, the midwavelength infrared (MWIR: 3 to 5 μm) spectral band was also studied for active imaging. For these investigations, only spectral filtering was applied instead of range-gating, owing to the lack of a gating capability in the MWIR cameras available at the time. A deuterium fluoride (DF) laser was used as illumination source. In Fig. 3, images of the hot flame of a camping stove with a text board behind it, captured with an indium antimonide (InSb) camera, are shown. The left one is a passive image without spectral filtering; only the hot flame can be seen. In the right image, the DF laser illuminates the text board through the flame, and a very narrow band-pass filter with a high transmission at the laser wavelength is mounted in front of the detector. Owing to the active laser illumination and the spectral filtering, the text can be recognized through the hot flame.
In the mid-1990s, the U.S. company Intevac Photonics opened the SWIR spectral region for imaging by developing the laser illuminated viewing and ranging (LIVAR®) systems based on indium gallium arsenide (InGaAs)/indium phosphide (InP) transferred electron (TE) photocathodes and electron bombarded charge-coupled devices (EBCCD), or later electron bombarded complementary metal-oxide semiconductors (EBCMOS). These InGaAs/InP photocathodes are sensitive in the SWIR spectral band between 950 and 1650 nm, with high quantum efficiency at 1550 nm. Concerning eye safety, this development was an important breakthrough in active imaging because it made it possible to illuminate the scene with lasers emitting in the so-called retina-safe wavelength region beyond 1400 nm.
In Fig. 4, the nominal ocular hazard distance (NOHD), which is the safety distance for ocular exposure, and the extended NOHD, which is the safety distance for viewing through optical aids such as binoculars, are plotted versus the laser wavelength for a pulsed laser with a pulse energy of 50 mJ, a beam diameter of 8 mm, a full-angle beam divergence of 4.4 mrad, a pulse repetition frequency (PRF) of 10 Hz, and a pulse duration of 10 ns.
The advantage of the SWIR over the NIR spectral band with regard to eye safety can be clearly seen in Fig. 4. At a wavelength of 1500 nm, the NOHD and the extended NOHD are reduced by factors of 338 and 207, respectively, compared with a wavelength of 800 nm.
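NOHD curves like those in Fig. 4 follow the standard small-source hazard evaluation: the NOHD is the distance at which beam divergence has diluted the pulse energy density down to the maximum permissible exposure (MPE). A minimal sketch of that relation, assuming the common small-source formula (the MPE value passed in below is a placeholder, not a claim about any particular wavelength):

```python
import math

def nohd_m(pulse_energy_j, mpe_j_per_m2, beam_diameter_m, divergence_rad):
    """Small-source NOHD: range at which the expanding beam's energy
    density drops to the maximum permissible exposure (MPE)."""
    # Beam diameter at which the pulse energy spread over the beam
    # cross section equals the MPE.
    limit_diameter = math.sqrt(4.0 * pulse_energy_j / (math.pi * mpe_j_per_m2))
    return max((limit_diameter - beam_diameter_m) / divergence_rad, 0.0)

# Laser parameters from Fig. 4 (50 mJ, 8 mm, 4.4 mrad) with a
# placeholder MPE of 10 J/m^2:
d = nohd_m(50e-3, 10.0, 8e-3, 4.4e-3)
```

The wavelength dependence in Fig. 4 enters entirely through the MPE, which is orders of magnitude larger beyond 1400 nm because the cornea and lens absorb the radiation before it reaches the retina.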
Since the year 2000, active imaging in the SWIR spectral band has been an important research area at the Fraunhofer IOSB predecessor research institute FGAN-FOM due to its military potential and the commercial availability of the first LIVAR® camera, model 120. During the first years of the new millennium, a SWIR laser GV system demonstrator was built using this LIVAR® M120 camera for GV imaging and a Raman-shifted Nd:YAG laser for illumination. Since then, the improved successor cameras LIVAR® M400 and LIVAR® M506 have also been used at Fraunhofer IOSB for range-gated SWIR imaging, in combination with an OPO-shifted Nd:YAG laser with a tunable SWIR wavelength or a fixed wavelength of 1.57 μm. The most relevant parameters of these GV cameras are listed in Table 1.
Most relevant parameters of the three GV cameras LIVAR® M120, LIVAR® M400, and LIVAR® M506, which were used at Fraunhofer IOSB for range-gated SWIR imaging.
| Parameter | LIVAR® M120 | LIVAR® M400 | LIVAR® M506 |
| --- | --- | --- | --- |
| Spectral response | 950 to 1650 nm | 950 to 1650 nm | 950 to 1650 nm |
| Dark current | <1 electron/(pixel·μs) | <1 electron/(pixel·μs) | <1 electron/(pixel·μs) |
| Number of detector elements | 512 × 512 | 640 × 480 | 1280 × 1024 |
| Typical image resolution | 512 × 512 | 640 × 480 | 640 × 480 |
| Size of detector elements | 13 μm × 13 μm | 12 μm × 12 μm | 6.7 μm × 6.7 μm |
| Limiting resolution (lp/mm) | 32 | 28 | 28 |
| Maximal frame rate (Hz) | 17 | 28.5 | 30 |
| Digital video output depth | 10 bit | 10 bit | 10 bit |
| Dynamic range | 48 dB | 48 dB | 48 dB |
| Gate delay step size | 5 ns | 5 ns | 5 ns |
| Minimal gate width (ns) | 150 | 70 | 70 |
The findings and results of the different SWIR GV experiments conducted at Fraunhofer IOSB during the last decade will be reviewed in the 11 subsections of the following section. The advantages and drawbacks compared with other sensors will be shown. The camera model used for the GV images shown is given in brackets in the corresponding figure caption (M120/M400/M506).
Review of SWIR Laser GV Activities
Improving Vision in Poor Visibility
The defining characteristic of GV imaging is the ability to suppress the fore- and background of an object of interest. On the one hand, by suppressing the background of an object, one obtains a much higher target/background contrast than in a nongated image (compare Fig. 1). On the other hand, photons backscattered from particles in the atmosphere between sensor and object are not captured, yielding a great potential for GV imaging in poor visibility. The left image in Fig. 5 shows an urban scene during dusk and natural haze in the visible spectral band (VIS: 380 to 750 nm). The middle image is a magnification of the left one and shows a house 2 km away on the opposite hill with a very low contrast. The right image is a GV image with a gate from 1800 to 2050 m; it was captured synchronously with the visual image.
The house can be recognized much more easily in the SWIR GV image than in the visual image due to the significantly higher contrast resulting from laser illumination and range-gating. Compared with the SWIR range, a greater part of the visible photons from the house is absorbed or scattered by water droplets in the atmosphere, and a smaller amount reaches the VIS camera. For the SWIR GV image, all laser photons backscattered from the atmosphere up to 1800 m are suppressed, and only the laser photons backscattered from the range gate are collected by the GV camera.
From a military point of view, the improvement of vision in poor visibility due to artificial smoke is also of strong interest. In the scenario in Fig. 6, smoke was produced by a military smoke grenade at a distance of 1000 m. This smoke grenade was specifically designed to be effective in the visible spectral range. A vehicle and a bar target were positioned at a distance of 1640 m. The range gate of the GV system was set to a length of 24 m and centered on the target range (1628 to 1652 m).
The targets can only be partially seen in the GV images, and only if the density of the smoke is low enough. For dense smoke, all laser photons are scattered or absorbed within the smoke cloud, and no laser photons from the range gate can reach the detector. Passive MWIR or LWIR images are nearly unaffected by the smoke [MWIR, see Fig. 7(b)] due to the significantly higher transmittance at these longer wavelengths compared with the SWIR. Thus, passive thermal IR cameras should be the preferred sensors here. However, there are also smoke grenades that are explicitly designed for the thermal infrared region. In a static scenario, the GV image quality can be clearly improved by averaging consecutive frames of a GV image sequence [Fig. 7(a)].
In addition to this situation with smoke from a typical military smoke grenade, we have also studied the potential of GV in a scenario with a person standing in the midst of artificial fog from a fog machine at a distance of 117 m. The fog fluid is a mixture of polyhydric alcohols and water. In Fig. 8, a visual image of the person in the fog is shown (a) together with the synchronously captured GV image (b) with a gate from 96 to 126 m.
The person in the fog can be clearly seen in the GV image due to the high reflectance of the clothes in the SWIR band. The skin has a very high absorption at 1.57 μm and thus appears nearly black (compare Sec. 3.7). If the density of the fog becomes too high, as in the center of the GV image, too many laser photons are backscattered by the fog itself. So, in contrast to the situation of Fig. 6, the fog in front of the person is imaged, resulting in a low contrast for the person. Passive MWIR or LWIR images are nearly unaffected by the fog due to the significantly higher transmittance at these longer wavelengths compared with the SWIR. Thus, again, passive thermal IR cameras should be the preferred sensors here.
In a further scenario, three oil tanks with burning diesel fuel were deployed at a distance of 150 m, and the target vehicle was positioned at a distance of 450 m [Fig. 9(a)]. The range gate of the GV system was set to a length of 27 m and centered on the target range (437 to 464 m).
The burning diesel produced very hot flames and dark smoke containing carbon particles. On the one hand, these carbon particles strongly attenuate the VIS and SWIR radiation; therefore, no laser photons are transmitted to the target, and none can reach the sensor from the range gate. This attenuation can be observed in the leftmost GV image in Fig. 9(b). On the other hand, the very hot flames of the burning diesel fuel cause severe turbulence due to significant thermal gradients. In the middle and right GV images of Fig. 9(b), the resulting scintillation and defocusing effects can be observed, respectively. Passive thermal cameras are no alternative in this scenario because MWIR and LWIR images are partially saturated and dazzled by the hot flames and gases. Again, in a static scenario, the GV image quality can be clearly improved by averaging consecutive frames of a GV image sequence (Fig. 10).
There can also be poor visibility due to pyrotechnics such as those that are illegally burned in soccer stadiums by "ultras" (fans prone to extreme behavior). Especially for police forces and security guards in stadiums, situational awareness is extremely important. They have to know what potential troublemakers are doing behind pyrotechnical dazzle and smoke to prevent injuries, e.g., due to the very high temperatures of burning Bengal lights. Different pyrotechnics producing bright light effects and colored smoke have been investigated in terms of their influence on different sensors, including GV. As an example, one of these investigations is shown in the following. At a distance of 450 m, two persons were burning Bengal lights and swinging them back and forth, as is typical in supporters' stands. This experiment was conducted at nighttime. Figure 11 shows the resulting images captured synchronously with a visual camera, an MWIR camera, and a GV system.
In the visual image of the scene (a), the center of the Bengal lights appears completely white because the sensor pixels are saturated by the dazzling light. Additionally, this bright light illuminates the surrounding smoke and creates an effective obstruction of the view behind the Bengal lights. The saturation effect also occurs in the MWIR image of the scene (b) due to the very high temperature of the flares. Furthermore, the saturated area around the left Bengal light is even larger than in the visual image due to hot gases. Because the line-of-sight of the MWIR camera is slightly different from that of the visual camera, the right Bengal light is behind the person and the saturated area appears smaller. For the GV image of the scene (c), a large range gate between 30 and 930 m was used to get an impression of the whole scene. Despite this relatively large range gate, the exposure time of only a few microseconds is still much shorter than that of the other sensors, which need exposure times of several tens of milliseconds to get a sufficient signal. Due to this very short exposure time of the GV camera and the very narrow spectral filtering between 1524 and 1600 nm, the Bengal lights are almost completely suppressed in the GV image and the whole scene can be clearly observed. By shortening the range gate to a length of only 60 m, the Bengal lights are completely suppressed. So, in this scenario, visual and passive thermal IR cameras are inadequate, and a GV system is the preferred sensor.
In conclusion, GV can help to improve vision in poor visibility if the transmittance for the laser photons is sufficiently high. In a static scenario, frame averaging clearly enhances the image quality. A sensor mix consisting of visual, passive thermal, and active GV cameras offers a high probability to provide an image with the required quality.
Another classical GV application is long-range imaging for target observation and identification, especially using very small camera fields-of-view and a matched laser beam divergence in the range of a few milliradians. To show the GV capability and potential for long-range imaging, we have conducted several field trials in maritime and urban environments and collected a wide variety of GV images of different types of targets at distances of up to 27 km. In Fig. 12, some sample GV images in the maritime domain are shown.
The focal length of the optics was 500 mm, resulting in a camera field-of-view of 17.15 mrad. The illumination laser had a maximal pulse energy of 65 mJ. It was equipped with a beam shaping unit,1 which converts a Gaussian beam profile into a homogeneous top-hat profile. This beam shaping unit provided speckle-reduced laser illumination with a beam divergence exactly matched to the camera field-of-view. The distances of the targets in Fig. 12 were 3 km for the civilian ships (top) and 5 and 10 km for the military vessels (bottom), for which frame averaging was applied to enhance image quality by noise reduction. In the lower right image of Fig. 12, the target can hardly be recognized, indicating that the maximal range of this GV system under the prevailing weather conditions is around 10 km, limited by the laser energy rather than by image resolution. By narrowing the laser beam and illuminating only the central part of the camera field-of-view, a higher laser intensity on target can be achieved, as for the GV image in Fig. 13.
The focal length of the optics was 1000 mm, resulting in a correspondingly narrower camera field-of-view. The beam shaping unit used for the images in Fig. 12 was not applied here. The distance of the military vehicle in Fig. 13 was 7 km, and three GV frames were averaged. Due to the narrowed laser beam, the laser intensity is quite high, but for greater distances, image resolution will matter. Therefore, the GV cameras (M120 and M506) were equipped with optics with a long focal length of 2032 mm, resulting in very narrow camera fields-of-view. Some sample long-range GV images in an urban environment captured with these GV systems are shown in Fig. 14.
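The fields-of-view quoted above follow from the ratio of detector size to focal length; e.g., a 17.15-mrad value is consistent with a 1280-pixel, 6.7-μm-pitch detector (Table 1, M506) behind 500-mm optics. A small sketch of this relation (the helper name is ours):

```python
def fov_mrad(pixel_pitch_um, n_pixels, focal_length_mm):
    """Full field-of-view in milliradians for a detector of n_pixels
    with the given pitch behind optics of the given focal length
    (small-angle approximation)."""
    detector_size_m = pixel_pitch_um * 1e-6 * n_pixels
    return detector_size_m / (focal_length_mm * 1e-3) * 1e3

# LIVAR M506-sized detector (1280 pixels of 6.7 um) behind 500-mm optics:
fov = fov_mrad(6.7, 1280, 500.0)  # ~17.15 mrad
```

Doubling the focal length to 1000 mm halves this field-of-view, and 2032-mm optics narrow it further to a few milliradians.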
For the three left GV images in Fig. 14, the beam shaping unit was again used, providing speckle-reduced, homogeneous scene illumination with a laser beam divergence exactly matched to the narrow camera field-of-view. For the rightmost GV image in Fig. 14, a Gaussian laser beam illuminated only the center part of the camera field-of-view to obtain a higher laser intensity on the target. Again, to reduce noise, frame averaging of 5 or 10 GV images was applied. The GV images in Fig. 14 show buildings and structures at distances between 6 and 27 km, the latter being the record for long-range GV imaging at Fraunhofer IOSB.
In conclusion, GV shows great potential for long-range applications, including target observation and identification at ranges up to several tens of kilometers.
If laser light is incident on a surface that is rough with respect to the laser wavelength, constructive and destructive interference of the different backscattered parts of the laser beam occurs due to its coherence. The result is a laser-illuminated image of the observed scene overlaid with the well-known speckle pattern. One approach to reduce this speckle effect is based on the summation of uncorrelated speckle patterns, which can be obtained by temporal, spatial, polarization, or wavelength diversity.2 We have studied the latter by using two different laser illuminators: one with a fixed wavelength of 1.57 μm and a spectral linewidth of 2.5 nm, and one with a wavelength tunable from 1.1 μm and a spectral linewidth of 9 nm, based on an optical parametric oscillator.3,4 The theoretical speckle contrast for the average of N uncorrelated speckle patterns can be expressed by5 C = 1/√N. In Figs. 15 and 16 and Tables 2 and 3, some results of these studies are shown.
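The 1/√N law is easy to verify numerically: fully developed speckle has exponentially distributed intensity with a contrast of 1, and averaging N independent patterns reduces the contrast by √N. A minimal simulation sketch (our own illustration, not the authors' code; the contrast here, as in the tables below, is the ratio of standard deviation to mean):

```python
import numpy as np

rng = np.random.default_rng(0)

def speckle_contrast(img):
    # Speckle contrast: standard deviation over mean intensity.
    return img.std() / img.mean()

# Fully developed speckle: exponentially distributed intensity.
patterns = rng.exponential(scale=1.0, size=(25, 512, 512))

c1 = speckle_contrast(patterns[0])             # single pattern: ~1.0
c25 = speckle_contrast(patterns.mean(axis=0))  # 25 averaged: ~1/sqrt(25) = 0.2
```

The measured contrast values in Tables 2 and 3 are computed the same way, from the mean and standard deviation of the gray values in a region of interest.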
Speckle contrast for the three GV images in Fig. 15 calculated from the ROI marked with a white rectangle according to Eq. (2).
| Image in Fig. 15 | (a) | (b) | (c) |
| --- | --- | --- | --- |
| Mean value in ROI (DN) | 298.989 | 505.030 | 502.290 |
| Standard deviation in ROI (DN) | 28.257 | 22.432 | 18.962 |
| Speckle contrast in ROI | 0.095 | 0.044 | 0.038 |
Speckle contrast for the two GV images in Fig. 16 calculated from the ROI marked with a white rectangle according to Eq. (2).
| Image in Fig. 16 | (a) | (b) |
| --- | --- | --- |
| Mean value in ROI (DN) | 313.798 | 499.372 |
| Standard deviation in ROI (DN) | 27.493 | 15.108 |
| Speckle contrast in ROI | 0.088 | 0.030 |
For all GV images in Fig. 15, 50 frames were averaged. In Fig. 15(a), the narrow-band illumination laser was used; a considerable speckle pattern is present. Frame averaging does not result in any image enhancement because all speckle patterns are identical in this static scenario. In the middle GV image of Fig. 15, the spectrally broader illumination laser was used; the speckle pattern is clearly reduced. The remaining speckle pattern can be further reduced by tuning the center wavelength of the illumination laser over a large spectral range and capturing several GV images at discrete wavelengths for averaging. The speckle patterns for the different wavelengths are less correlated with each other and thus better suited for frame averaging than those for a fixed wavelength. For the right GV image of Fig. 15, wavelengths from 1.45 μm in steps of 20 nm were used, and for each wavelength, five GV images were captured. All of these images were averaged. The resulting image shows a small further speckle reduction. This visual impression can be confirmed by comparing the calculated speckle contrast values in Table 2 according to Eq. (2), i.e., the ratio of the standard deviation to the mean gray value in the ROI: C = σ/μ.
There is a reduction of the speckle contrast by a factor of about 2 when using the spectrally broader illumination laser. Using additionally 10 different wavelengths of the broader laser, a further reduction of the speckle contrast by about 14% can be observed in Table 2. In total, a reduction of the speckle contrast of 60% was achieved. By averaging all 550 captured GV images in Fig. 16, the reduction of the speckle pattern is even more obvious.
For both GV images in Fig. 16, 550 frames were averaged. In Fig. 16(a), the narrow-band illumination laser was again used; the considerable speckle pattern is still present. For Fig. 16(b), wavelengths from 1.45 μm in steps of 20 nm were used, and for each wavelength, 50 GV images were captured. All of these images were averaged. The resulting image is nearly speckle-free. This visual impression can again be confirmed by comparing the calculated speckle contrast values in Table 3 according to Eq. (2).
In Table 3, a reduction of the speckle contrast by 66% is observable when using 11 different wavelengths of the spectrally broad illumination laser instead of a single wavelength of the spectrally narrow illumination laser. Figure 17 shows some more sample results of this speckle reduction technique by wavelength diversity.
For practical applications of this speckle reduction technique, a simultaneous combination of different wavelengths into one illumination system is an interesting and—because of the above results—a promising approach.
Measurement of Laser Reflectance Signatures
For range performance predictions of a GV system (compare Sec. 3.11), for image simulation, or for the assessment of target signature management, it is essential to measure the target laser reflectance signature. To this end, reference targets with a predetermined homogeneous reflectance at the considered laser wavelength are positioned at the target distance (in situ). Thus, the laser radiation backscattered from the reference objects and from the target itself is attenuated by the same atmosphere, and the received intensities can be compared. In Fig. 18, a GV image of five reference targets consisting of Spectralon, which are used at Fraunhofer IOSB for SWIR GV, is shown.
The laser reflectance values of the reference targets in Fig. 18 are 98%, 90%, 50%, 20%, and 5% (from left to right) at the laser wavelength of 1.57 μm. By plotting these reflectance values against the corresponding average pixel gray values in the GV image and fitting an appropriate function to these data points, a mapping between any target gray value and its corresponding reflectance value is obtained. With this mapping and a spatially homogeneous laser beam profile, accurate measurements of the laser reflectance signature of a target, like that in Fig. 19, can be conducted.
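The calibration step can be sketched as follows. The reflectances are the Spectralon panel values from the text; the gray values and the linear fit are hypothetical illustrations (the text only requires "an appropriate function"):

```python
import numpy as np

# Reflectances of the five Spectralon reference targets (from the text);
# the gray values below are hypothetical example measurements in DN.
reflectance = np.array([0.98, 0.90, 0.50, 0.20, 0.05])
gray_dn = np.array([932.0, 860.0, 500.0, 230.0, 95.0])

# Fit a mapping from gray value to reflectance (here: linear).
coeffs = np.polyfit(gray_dn, reflectance, deg=1)

def dn_to_reflectance(dn):
    """Map a measured gray value (DN) to an estimated laser reflectance."""
    return np.polyval(coeffs, dn)
```

Applying `dn_to_reflectance` pixel-wise to a GV image then yields a reflectance map of the target, provided the laser illumination is spatially homogeneous.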
The images in Fig. 19 were captured at the Bundeswehr Technical Center for Weapons and Ammunition (WTD 91) in Meppen. A large, worldwide unique semicircular arc with a radius of 40 m was built there, with a movable carriage for mounting several electro-optical and radar sensors. The sensor elevation angle can be raised from 0 to 90 deg. At the center point of the arc is a turntable for the target, so any target aspect angle between 0 and 360 deg is available. The images in Fig. 19 were captured with a nongated, high-resolution SWIR camera from Sensors Unlimited. The target was illuminated with a fiber-based, continuous-wave laser diode array equipped with a top-hat beam shaper. The beam divergence was smaller than the camera field-of-view, so the square laser spot can be clearly seen in the images in Fig. 19. Due to the low coherence of the laser beam, the images are nearly speckle-free, and the target signatures can also be used as pristine inputs for GV image simulation. In the bottom row of Fig. 19, the sensor elevation angle was 20 deg; thus, these images are relevant for modeling and simulation in airborne scenarios. The entire data collection comprises passive and active signatures in several spectral bands of a large variety of civilian and military targets with elevation angles up to 60 deg.
In addition to this close-range situation, long-range laser reflectance measurements were also conducted in the maritime domain at the Surendorf site of the Bundeswehr Technical Center for Ships and Naval Weapons (WTD 71). In Fig. 20, some sample GV images of a cooperative midsized ship sailing in a small circle at a distance of 2 km are shown. The range gate had a length of 150 m, and its position was updated according to the actual target range measured by a laser range finder triggered by the illumination laser pulse.
In the rightmost GV image of Fig. 20, two square reference targets mounted on the ship can be seen; they have an edge length of 1 m. The ship has nearly the same reflectance as the left reference target. All GV images in Fig. 20 are raw images without any target segmentation. Nevertheless, the fore- and background are completely black due to the specular behavior of the sea surface at the laser wavelength. Thus, SWIR GV provides a clear advantage in the maritime domain concerning target segmentation (compare Sec. 3.8).
If a target is located in front of a background with a sufficiently large reflectance at the illumination laser wavelength, it is possible to create a silhouette image of the target by setting the range gate of the GV system behind the target to image the background. In certain situations, the silhouette of a target can provide more information than a GV image showing the target with or without the background. In Fig. 21, three situations are shown in which the silhouette image of persons (bottom row) clearly offers more information than the conventional GV image (top row).
For the upper left and upper middle GV images of Fig. 21, short range gates of 50.25 and 21 m were applied, respectively, so only the persons without background are imaged. In the corresponding silhouette images below, the shapes of the hand-held objects can be recognized more easily, e.g., the two ends of the manpad in the middle column. In the right images of Fig. 21, a long range gate of 300 m was applied, so the upper right GV image shows not only the persons but also the background. Due to the similar reflectance of the persons and the background at the illumination laser wavelength, and due to the very high atmospheric turbulence and thus strong scintillation at that time in the afternoon, the two persons merge with the background in the upper GV image. One person can barely be seen, and the other cannot be recognized at all. In the lower image, both persons' silhouettes can be clearly seen.
In Fig. 22, comparisons between the conventional GV images and the silhouette images of two vehicles and a cabin are shown.
In the upper left and upper middle GV images of Fig. 22, one can see that the vehicle and the cabin have a very low laser reflectance on the upper half and the roof. The same is true for the wheels of the vehicle in the upper right image of Fig. 22. Thus, the silhouette images below significantly simplify target segmentation here.
In a bistatic system configuration (compare Sec. 3.9), i.e., with the illumination laser and the GV camera considerably spatially separated from each other, the target shadow on the background is visible in the silhouette image in addition to the conventional silhouette. For a planar background, this results in two identical target silhouettes that are merely translated with respect to each other. The GV images in Fig. 23 were captured in a bistatic configuration with a large baseline, i.e., distance between laser and camera.
While in Fig. 23(a) two persons can be seen, four silhouettes are present in the corresponding silhouette image on the right: two conventional silhouettes and two geometrically similar shadows. This effect of bistatic system configurations has to be kept in mind when, e.g., counting persons in silhouette images.
In conclusion, for a well-reflecting background, silhouette imaging can simplify target segmentation if the target has parts with low laser reflectance to indicate the target shape. In bistatic system configurations, this is more complicated due to the additional, geometrically similar shadows present in the silhouette images.
By suitable processing of several GV images with different gate positions, a 3-D reconstruction of the observed scene can be achieved. There are mainly two methods for 3-D imaging with GV images. The first one is the so-called sliding gates or tomography technique.3,4,6–11 By successively increasing the position of the range gate and capturing at least one GV image for each position, the scene is sampled in depth. The fastest way of determining a range image from this sliding gates sequence is to assign to each pixel, as its range value, the gate position at which its gray value is maximal. Two examples of this fast 3-D reconstruction of a large-scale and a small-scale scene are shown in Figs. 24 and 25.
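The fast per-pixel maximum search can be sketched in a few lines of NumPy (a toy image stack, not the authors' data):

```python
import numpy as np

def range_image(stack, gate_positions_m):
    """Fast sliding-gates range image: for each pixel, take the gate
    position at which its gray value is maximal.

    stack: array of shape (n_gates, height, width), one GV image per gate.
    """
    best_gate = np.argmax(stack, axis=0)            # index of brightest gate
    return np.asarray(gate_positions_m)[best_gate]  # map index -> range (m)

# Toy example: 3 gate positions, 2x2 image.
stack = np.zeros((3, 2, 2))
stack[0, 0, 0] = 5.0   # pixel (0, 0) brightest in the first gate
stack[2, 1, 1] = 7.0   # pixel (1, 1) brightest in the last gate
ranges = range_image(stack, [100.0, 110.0, 120.0])
```

The depth resolution of this simple estimator is limited to the gate shift step size; the fitting and slope methods described below refine it.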
In Fig. 24(b), the fast 3-D reconstruction of an urban scene from the sliding gates sequence on the left is shown. Due to the changed camera viewing angle, the scene can be interpreted fairly well in terms of distance estimation for situational awareness and mission planning.
In Fig. 25(b), the fast 3-D reconstruction of a ship from the sliding gates sequence on the left is shown. By comparing the colors with the corresponding range values, the ship length can be estimated.
A more precise way of determining a range image from the sliding gates sequence is to fit an appropriate continuous function to the pixel gray values as a function of the gate position [compare Fig. 27(b)] and to define the gate position of a certain point of this fit (e.g., the center point) as the range value. A sample result of this precise 3-D reconstruction of a vehicle is shown in Fig. 26.
For each GV image of the sliding gates sequence in Fig. 26(a), the speckle reduction technique by wavelength diversity (compare Sec. 3.3) with eight different wavelengths was applied. By comparing the colors in the range image of Fig. 26(b) with the corresponding range values, the vehicle length can be estimated. In a previous work, it was shown that a high range accuracy with a range error of only 8 cm was achieved for this result.3 Also in previous papers, the range error of the sliding gates method was extensively investigated as a function of the number of averaged GV images for each gate position8 and as a function of the gate shift step size.6 Furthermore, the range error was compared with the results of a 3-D flash LADAR system.9
A more sophisticated technique for 3-D imaging with GV images is the so-called slope method.13 In the GV images of the sliding gates sequence in Fig. 26, one can see at the side of the vehicle that the range gate does not start and end abruptly but rises and falls slowly, mainly due to the gate rising and falling times of the EBCMOS GV sensor. By comparing the pixel gray values in this slope region with the maximal possible pixel gray values in the plateau region of the range gate, a relative position within the slope region can be obtained. In Fig. 27, five pixel gray value profiles were determined for different intensities by capturing a sliding gates sequence of well-defined reference targets (compare Fig. 18) at a distance of 95 m with a gate shift step size of 0.75 m and a gate length of 23.5 m.
The slope/plateau transitions in Fig. 27(b) are nearly independent of the intensity. For the five reference targets, one obtains rising slope lengths of 4.3, 4.3, 4.2, 4.2, and 5.5 m; plateau lengths of 10.7, 10.6, 9.8, 9.8, and 9.2 m; and falling slope lengths of 8.4, 8.5, 9.3, 9.2, and 9.6 m, respectively. On average, the rising slope has a length of 4.5 m, the plateau a length of 10 m, and the falling slope a length of 9 m. In order to cover a larger distance range, it is more convenient to process the falling slope and the plateau instead of the rising slope and the plateau; thus, a 3-D reconstruction over a distance range of 9 m can be achieved. By capturing slope and plateau GV images with a gate position difference of 9 m, the relative position within the slope can be calculated for each pixel as the ratio of its gray value in the slope image to its gray value in the plateau image. In Fig. 28, a 3-D reconstruction of the scene with the reference targets is performed by this slope method.
In the color-coded range image in Fig. 28(b), one can see that the slope method for 3-D reconstruction provides the same range values for all reference targets independently of the received intensity and is, therefore, a very robust technique. Even for the very low intensity at the right reference target with a laser reflectance of only 5%, the range value is correct. We have implemented this slope method into the GV acquisition software to provide a live 3-D visualization. Slope and plateau GV images are captured alternately with a gate position difference of 9 m and a gate length of 23.5 m. The maximal frame rate of the GV system is limited by the laser pulse repetition rate of 20 Hz. Thus, alternately processing slope and plateau GV images results in a maximal 3-D frame rate of 10 Hz. In Figs. 29 and 30, 3-D reconstructions by the slope method of a dynamic scene with a cabin and two persons at a distance of 2455 m are shown.
The dynamic scene in Figs. 29 and 30 reveals the great advantage of the slope method for 3-D reconstruction. Although the person outside the cabin is moving and jumping into the cabin [Fig. 30(a)], the 3-D reconstructions were successful, and one gets a good understanding of what is going on. By looking at the color assignment in the range images, a clear distinction of the position of the moving person can be made: at the same distance as the cabin but outside [orange, Fig. 29(a)]; in front of the cabin [red, Fig. 29(b)]; just jumping into the cabin [orange, Fig. 30(a)]; and standing within the cabin [turquoise, Fig. 30(b)]. Additionally, in Fig. 29, a second person behind the cabin (blue) can be faintly recognized.
In conclusion, by the sliding gates or the slope method, a GV system has a high potential to provide 3-D information from as few as two GV images, at large scale for situational awareness (distance estimation) as well as at small scale for 3-D target reconstruction (input for ATR algorithms).
GV Imaging of Persons at λ = 1.57 μm
Compared to conventional images in the visual spectral band, persons look very different in SWIR images14 and hence also in GV images based on laser illumination at a wavelength of 1.57 μm. On the one hand, human skin has a very low reflectance at 1.57 μm independently of the ethnicity of the person.15 Reflectance values of 10%16 and even 3% to 5%17 were found. The reason for this low reflectance is the absorption of water within the skin around this wavelength. So, human skin appears nearly black in GV images at 1.57 μm. On the other hand, human hair—head hair as well as beards and eyebrows—always appears bright due to high reflectance at 1.57 μm independent of the hair color. In Fig. 31, some sample GV images of head hair and foreheads under laser illumination at a wavelength of 1.57 μm are shown.
In the GV images of Fig. 31, the above characteristics of laser illumination at a wavelength of 1.57 μm—dark skin and bright hair—are confirmed. The hands and neck also appear dark at 1.57 μm in Fig. 32(b). A visual image (a) and a MWIR image (c) of the person are also shown for comparison.
In the MWIR image of the person in Fig. 32(c), it can be seen that the visible skin parts are—in contrast to the GV image—very bright due to the emission of thermal radiation. In comparison with the visual image in Fig. 32(a), the high reflectance of the clothes in the GV image (b) is noticeable. We have measured the laser reflectance at 1.57 μm of a large variety of different clothes—civilian clothes as well as military uniforms.18 In Fig. 33(a), a sample GV image of these measurements with reference targets at the left (compare Sec. 3.4) and three persons with different clothes at the right is depicted.
The laser reflectance of clothes strongly depends on the material of which they consist—cotton, polyester, etc.—and the weave—denim, twill, corduroy, etc. In Fig. 33(b), the two left-most trousers have to be pointed out. Both trousers are black in the visual image (a), but they differ significantly in the GV image (b) under laser illumination at a wavelength of 1.57 μm. The left one has a very low laser reflectance and appears completely black. If the jacket of the left person were made of the same material and weave, the person would appear nearly black and might be completely missed at larger distances. Here, a silhouette image of this person would be beneficial (compare Sec. 3.5). The right one of the two trousers has a very high laser reflectance and appears completely white. Thus, predicting the laser reflectance of unknown clothes is quite difficult.
By using optics with a large focal length, a small field-of-view of the GV camera and thus many pixels on the person can be achieved. By additionally applying a small range gate and thus suppressing the fore- and background of the person, a GV image with high contrast is obtained. In Fig. 34, a collection of GV images of persons at a distance of 600 m performing several actions is shown.
In Fig. 34, many details of the persons can be recognized and a statement concerning the performed action can be made. Hence, a GV system can clearly support police and military forces in observation and reconnaissance missions.
Maritime and Aerial Targets
In sea and air scenarios, maritime and aerial targets are surrounded by water and air, respectively. The water surface has a specular reflection behavior at the laser wavelength of an SWIR GV system; thus, no laser intensity from the water surface directly reaches the GV camera, as can be seen in Fig. 35(a). Here, the sailing boat has a distance of 850 m and looks nearly segmented from the background. However, there are two effects that result in a real or apparent reflection from the water surface. These effects have to be taken into account when interpreting maritime GV images. The first effect is the specular reflection at the water surface of laser photons backscattered from the target. This results in a reflection in the water below the target, which can be seen for the boat at a distance of 600 m in Fig. 35(b). It occurs for calm sea states with low wave heights, so only for stationary or slowly moving targets. Mirage is the second effect and can be seen for the boat at a distance of 3600 m in Fig. 35(c). A vertically flipped image below the target itself is produced. The axis of reflection in the GV image is indicated by a yellow dashed line. This effect occurs at longer distances and depends on the sensor elevation angle, the height-dependent air temperature gradient, and the temperature difference between the water surface and the air directly above it.
Mostly, also in an aerial scenario, only laser photons from the target reach the GV system, as can be seen in Fig. 36. Thus, sea and air scenarios are predestined for the use of a laser range finder detector that measures the target range by the time-of-flight of the illumination laser pulse and sets the position of the range gate to this distance. Especially in the scenarios in Fig. 36, the helicopters and the grenade fly very fast, and a manual setting of the range gate position is no longer possible.
A great advantage of an active imaging system in sea scenarios is the often different laser reflectance of the hull of a ship on the one hand and of its number or name on the other hand. So, mostly the vessel identification number can be recognized in GV images. In Figs. 37 and 38, GV images of a vessel at different distances together with the magnification of its vessel number are shown. The field-of-view of the GV camera is wider in Fig. 37 than in Fig. 38. The beam divergence of the illumination laser is always matched to these fields-of-view.
In Fig. 37, it can be seen that the maximal range for recognition of the vessel number is 3000 m. Using the narrower field-of-view of the GV camera in Fig. 38, the maximal range can be increased to 6000 m.
To sum up, a GV system has a high potential for target recognition and identification at long ranges in sea and air scenarios (maritime/aerial domain awareness and fine tracking in C-RAM applications). Target segmentation is simplified by low background signals, with the drawback that no silhouette imaging is possible.
Bistatic GV Imaging
In a bistatic GV system configuration, the illumination laser and the GV camera are considerably spatially separated from each other (compare end of Sec. 3.5). From a military point of view, a bistatic configuration is of high interest because the detectability of the illumination laser by a foreign laser warning system is always a drawback of an active system. With a laser warning system, the GV camera itself cannot be located, e.g., for dazzling. The angle between the laser beam and the GV camera line-of-sight is called the bistatic angle. The larger the bistatic angle, the more of the target shadow on the background is present in the GV image. In Fig. 39, three GV images of persons walking at a distance of 250 m captured by a bistatic GV system are shown.
In Fig. 39(a), no differences compared to a monostatic GV image can be seen due to a short range gate of 50 m. When the persons cross each other as in Fig. 39(b), an important difference shows up. The shadow line along the shape of the person in the foreground simplifies the separation from the person in the background. In a monostatic configuration, separation would be more difficult due to the very similar laser reflectance values of the persons' clothes. In Fig. 39(c), the range gate was increased to 200 m. Hence, the background is also imaged, and the person shadows on the background can be recognized, yielding a good 3-D impression of the scene. The bistatic angle in this configuration was 0.5 deg.
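For illustration, the bistatic angle follows directly from the laser-camera baseline and the target range. The baseline value in the usage note is hypothetical, chosen only to reproduce the 0.5-deg angle quoted above; the original papers do not state it here.

```python
import math

def bistatic_angle_deg(baseline_m, target_range_m):
    """Bistatic angle between laser beam and camera line-of-sight (sketch).

    Assumes laser and camera both point at the same target, with their
    positions separated by baseline_m and the target at target_range_m.
    """
    return math.degrees(math.atan2(baseline_m, target_range_m))
```

A bistatic angle of 0.5 deg at a target range of 250 m would correspond to a laser-camera separation of roughly 2.2 m, while the 27-deg configuration of Fig. 40 implies a separation on the order of the target distance itself.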
In addition to this shadow effect, retroreflections can be significantly suppressed by a bistatic configuration.4,19 In Fig. 40, GV images of two vehicles at a distance of 600 m simultaneously captured by a mono- and bistatic GV system are compared. The monostatic GV image was captured by the Swedish Defence Research Agency FOI within the framework of a cooperation with the IOSB Predecessor Institute FGAN-FOM.
In Fig. 40(a), strong retroreflections can be seen at the front lights of the left vehicle. In Fig. 40(b), these retroreflections are completely suppressed. Of course, the received laser intensity is much lower because fewer laser photons are reflected toward the GV camera under the large bistatic angle of 27 deg. In Ref. 4, some more scenarios are shown in which a bistatic configuration offers advantages compared to a monostatic configuration. A disadvantage of a bistatic configuration with a large bistatic angle is that not all target parts can be imaged. This can be seen in Fig. 40(b): the side of the left vehicle is not illuminated by the laser due to shadowing by the vehicle front.
Nevertheless, GV systems in a bistatic configuration provide some advantages compared to a monostatic configuration, including the military aspect that even if the illumination laser is detected by a laser warning system, the position of the GV camera and hence the location of the operator remain unknown to a foreign reconnaissance system.
Looking Through Windows
With a GV system operating in the SWIR spectral band, it is possible to look through windows into car interiors. The laser transmission depends on the type of window, e.g., multilayer or insulated glass. In Fig. 41, a GV image of a person sitting in a vehicle with partially opened window is shown.
The person in the vehicle can be recognized through the car glass despite the low laser reflectance of the skin and the jacket. By calculating the mean gray values in the upper red and the lower yellow rectangles—one viewing the person through the glass and one through the opening—the laser transmission of the window can be estimated from their ratio. Since the laser light passes the window twice, the one-way transmission is the square root of this gray value ratio.
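A minimal sketch of this estimation, assuming the two rectangles view surfaces of equal reflectance and that the measured gray value ratio equals the squared one-way transmission because of the double pass; the function name and rectangle convention are illustrative.

```python
import numpy as np

def window_transmission(image, rect_glass, rect_open):
    """Estimate the one-way laser transmission of a car window (sketch).

    rect_glass / rect_open: (row0, row1, col0, col1) rectangles viewing
    the target through the closed window part and through the opening.
    Assumes equal target reflectance behind both rectangles.
    """
    r0, r1, c0, c1 = rect_glass
    g_glass = np.mean(image[r0:r1, c0:c1])
    r0, r1, c0, c1 = rect_open
    g_open = np.mean(image[r0:r1, c0:c1])
    # The laser passes the window twice (in and out), so the gray value
    # ratio equals T**2; take the square root for the one-way transmission.
    return np.sqrt(g_glass / g_open)
```

For example, if the region behind glass is four times darker than the region seen through the opening, the estimated one-way transmission is 50%.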
In the active SWIR images in Fig. 42, it can be clearly discriminated whether a person is sitting at the wheel of the vehicle or not, independently of the target aspect angle and the sensor elevation angle.
The active SWIR images in Fig. 42 were captured with a nongated high-resolution SWIR camera from Sensors Unlimited. The target was illuminated with a fiber-based, continuous-wave laser diode array equipped with a top-hat beam shaper (compare Sec. 3.4).
So, SWIR GV systems can strongly support police and military forces in observation missions by looking into interiors that are separated by glass windows—e.g., car interiors or buildings—in order to give hints about what is happening inside.
Analytical Range Performance Prediction
In order to predict the performance of a GV system for different tasks—e.g., target detection, recognition, and identification—as a function of target distance, an analytical range performance model based on the target task performance (TTP) metric was developed by the U.S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate.20 Various system, target, and environment parameters are input to the model. Intermediate results are modulation transfer functions (MTF), noise variances, and contrast threshold functions. The final results are range-dependent probability values for fulfilling a certain task, specified by a task difficulty parameter. We have implemented this analytical range performance model in MATLAB and extended it to slant-path applications.21 A sample final result is shown in Fig. 43.
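The final step of such a model maps the resolvable TTP cycles on the target at a given range to a task probability. The sketch below uses the empirical target transfer probability function commonly published with the NVESD TTP metric; the exact form and coefficients used in the authors' MATLAB implementation are not given in the text and are assumed here.

```python
def ttp_probability(v, v50):
    """Target transfer probability function for the TTP metric (sketch).

    v:   resolvable TTP cycles on the target at a given range.
    v50: task difficulty parameter, i.e., cycles needed for a 50%
         probability of fulfilling the task.

    Uses the empirical logistic form P = x**E / (1 + x**E) with
    x = v / v50 and E = 1.51 + 0.24 * x, as published for the
    NVESD model family.
    """
    x = v / v50
    e = 1.51 + 0.24 * x
    return x**e / (1.0 + x**e)
```

By construction, the probability is exactly 0.5 when the resolvable cycles equal the task difficulty, and it rises toward 1 as the cycle count grows, which is why the 75% ranges quoted for Fig. 43 correspond to somewhat more cycles than the respective task difficulties.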
The generic GV system, which was modeled in Fig. 43 for a specific environment and target, achieved a 75% detection range of 14 km, a 75% recognition range of 5.1 km, and a 75% identification range of 4.3 km. The main impacts of a slant laser beam propagation path on range performance are a significantly reduced atmospheric turbulence effect and a higher atmospheric transmission compared to a horizontal path. Hence, a larger range is achievable. For modeling the atmospheric turbulence, we have considered two different height profiles of the refractive index structure parameter Cn² up to a height of 20 km. The first one is composed of the Kaimal/Walters-Kunkel profile22 for the atmospheric boundary layer, i.e., up to the inversion layer, and the Hufnagel-Valley 5/7 profile23 for the free atmosphere. The second one was introduced by Kukharets and Tsvang and modified by Murphy.22 Both profiles are depicted in Fig. 44 for an inversion layer at a height of 3000 m.
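The Hufnagel-Valley 5/7 profile mentioned above has a standard closed form, sketched below with the usual HV 5/7 parameters (rms wind speed 21 m/s, ground-level term A = 1.7e-14); how exactly it is stitched to the boundary-layer profile at the inversion layer is not reproduced here.

```python
import math

def hufnagel_valley_57(h):
    """Hufnagel-Valley 5/7 refractive index structure parameter profile.

    h: height above ground in meters. Returns Cn^2 in m^(-2/3).
    Standard HV 5/7 parameters: rms wind speed w = 21 m/s, A = 1.7e-14.
    """
    w = 21.0       # rms upper-atmosphere wind speed in m/s
    a = 1.7e-14    # ground-level turbulence strength in m^(-2/3)
    return (0.00594 * (w / 27.0)**2 * (1e-5 * h)**10 * math.exp(-h / 1000.0)
            + 2.7e-16 * math.exp(-h / 1500.0)
            + a * math.exp(-h / 100.0))
```

Near the ground the profile is dominated by the A-term, and it falls off by several orders of magnitude over the first kilometers, which is the reason a slant path sees much less integrated turbulence than a horizontal one.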
These profiles are used in the slant-path formulas for the turbulence MTF and the scintillation index. In addition to these turbulence extensions, we have changed the procedure for determining the atmospheric transmission. Instead of using Beer's law with a constant atmospheric extinction coefficient, the external software FASCODE (Ontar Corp.)24 is accessed to calculate the atmospheric transmission from the extensive HITRAN database for each range step, depending on laser wavelength, sensor height, and target height. Several weather conditions and climates can be chosen.
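For reference, the simple model that the FASCODE/HITRAN computation replaces is a one-line expression; it assumes a single constant extinction coefficient along the whole path, which is exactly what breaks down on slant paths through a layered atmosphere.

```python
import math

def beer_lambert_transmission(extinction_per_km, range_km):
    """Atmospheric transmission by Beer's law (sketch).

    Assumes a constant extinction coefficient along the full path --
    the simple model replaced by the range-step FASCODE calculation.
    """
    return math.exp(-extinction_per_km * range_km)
```
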
Given the multitude of experimental results, we were able to validate the analytical range performance model for horizontal paths. Figure 45 exemplarily shows good accordance between the maximal system range subjectively estimated from real measurements of a vessel (a) and the model results for the corresponding system, environment, and target parameters (b).
Conclusion and Outlook
In this paper, a review of the SWIR laser GV activities of the past decade at Fraunhofer IOSB was presented. This review comprised military and civilian applications in the maritime and land domains—in particular, vision enhancement in poor visibility, long-range applications, silhouette imaging, 3-D imaging by the sliding gates and slope methods, bistatic GV imaging, and looking through windows. In addition, theoretical studies that were conducted—such as estimating 3-D accuracy or modeling range performance—were presented.
As future work, we will investigate in depth the potential of SWIR laser GV for the penetration of pyrotechnic effects in soccer stadiums (compare Sec. 3.1) and perform SWIR laser GV image simulation with experimental verification.
The authors would like to thank the Bundeswehr Technical Center for Weapons and Ammunition (WTD 91) in Meppen and the Bundeswehr Technical Center for Ships and Naval Weapons (WTD 71) site Surendorf for funding and providing infrastructures and targets during several measurement campaigns. Furthermore, the authors express their thanks to Frank Willutzki, Frank van Putten, Simon Brunner, and Richard Frank from Fraunhofer IOSB for their tireless assistance and support in carrying out the numerous experiments cited in this report.
M. Laurenzis et al., “Homogeneous and speckle-free laser illumination for range-gated imaging and active polarimetry,” Opt. Eng. 51(6), 061302 (2012), doi:10.1117/1.OE.51.6.061302.
B. Göhler, P. Lutzmann, and G. Anstett, “3D imaging with range gated laser systems using speckle reduction techniques to improve the depth accuracy,” Proc. SPIE 7113, 711307 (2008), doi:10.1117/12.799740.
E. Repasi et al., “Advanced short-wavelength infrared range-gated imaging for ground applications in monostatic and bistatic configurations,” Appl. Opt. 48(31), 5956–5969 (2009), doi:10.1364/AO.48.005956.
B. Göhler and P. Lutzmann, “Range accuracy of a gated-viewing system as a function of the gate shift step size,” Proc. SPIE 8897, 889708 (2013), doi:10.1117/12.2029590.
B. Göhler and P. Lutzmann, “Range accuracy of a gated-viewing system as a function of the number of averaged images,” Proc. SPIE 8542, 854205 (2012), doi:10.1117/12.974704.
B. Göhler and P. Lutzmann, “Range accuracy of a gated-viewing system compared to a 3-D Flash LADAR under different turbulence conditions,” Proc. SPIE 7835, 783504 (2010), doi:10.1117/12.865097.
M. Laurenzis et al., “3D range-gated imaging at infrared wavelengths with super-resolution depth mapping,” Proc. SPIE 7298, 729833 (2009), doi:10.1117/12.818428.
S. Vyas, A. Banerjee, and P. Burlina, “Estimating physiological skin parameters from hyperspectral signatures,” J. Biomed. Opt. 18(5), 057008 (2013), doi:10.1117/1.JBO.18.5.057008.
C. C. Cooksey, B. K. Tsai, and D. W. Allen, “Spectral reflectance variability of skin and attributing factors,” Proc. SPIE 9461, 94611M (2015), doi:10.1117/12.2184485.
C. Grönwall et al., “Active and passive imaging of clothes in the NIR and SWIR regions for reflectivity analysis,” Appl. Opt. 55(20), 5292–5303 (2016), doi:10.1364/AO.55.005292.
F. Christnacher et al., “Bistatic range-gated active imaging in vehicles with LEDs or headlights illumination,” Proc. SPIE 7675, 76750J (2010), doi:10.1117/12.852895.
R. L. Espinola et al., “Modeling the target acquisition performance of active imaging systems,” Opt. Express 15(7), 3816–3832 (2007), doi:10.1364/OE.15.003816.
B. Göhler and P. Lutzmann, “An analytical performance model for active imaging systems including slant-path applications,” in 4th Int. Symp. on Optronics in Defense and Security (OPTRO ‘10), Paris, France (2010).
R. R. Beland, “Propagation through atmospheric optical turbulence,” in The Infrared and Electro-Optical Systems Handbook, Vol. 2, J. S. Accetta and D. L. Shumaker, Eds., SPIE Optical Engineering Press, Bellingham, Washington (1993).
M. C. Roggemann and B. M. Welsh, Imaging Through Turbulence, CRC Press, Boca Raton, Florida (1996).
Benjamin Göhler received his diploma degree in mathematics from the Technical University of Karlsruhe in 2007. In 2007, he joined the Optronics Department at the Research Institute for Optronics and Pattern Recognition (FOM) in Ettlingen. In 2010, FOM became Fraunhofer IOSB where he is working in the Group Laser Sensors. His research area covers modeling of laser systems and processing of field trial data. He is a member of several bilateral and NATO working groups.
Peter Lutzmann received his diploma degree in physics from the University of Ulm in 1985. In 1985, he joined the Optronics Department at the Research Institute for Optics (FFO) in Tübingen. FFO became FOM in 2000 and Fraunhofer IOSB in 2010. Since 2010, he has been the leader of the Group Laser Sensors. His research area comprises coherent laser radar and laser imaging techniques. He is a member of several bilateral and NATO working groups.