For many decades, attempts have been made to accomplish Automatic Target Recognition (ATR) using both visual and FLIR camera systems. A recurring problem in these approaches is segmentation: separating the target from its background.
This paper describes an approach to Automatic Target Recognition using a laser gated viewing system. Here, laser-flash illumination is used in combination with a gating viewer such that only a small slice of the distance domain is seen in a single image. In our approach, using an Intevac LIVAR 4000 imaging system, we combined several images with different gate settings to construct a 3D data cube. This paper describes the preprocessing and filtering steps taken to obtain a range image whose pixel values represent the distance between the camera and the objects in the illuminated scene.
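The reduction of the gated data cube to a range image can be sketched as follows. This is an illustrative simplification, not the paper's actual preprocessing: it assumes that, after filtering, each pixel's intensity peaks in the gate slice that contains the reflecting surface, so the per-pixel argmax over the gate axis yields the distance.

```python
import numpy as np

def range_image_from_cube(cube, gate_distances):
    """Collapse a (n_gates, H, W) gated-viewing data cube into a range image.

    Simplified sketch: assumes each pixel is brightest in the gate
    slice containing the reflecting surface, so the argmax over the
    gate axis identifies the gate, and hence the distance, per pixel.
    """
    cube = np.asarray(cube, dtype=float)
    gate_distances = np.asarray(gate_distances, dtype=float)
    peak_gate = np.argmax(cube, axis=0)   # index of the brightest gate per pixel
    return gate_distances[peak_gate]      # distance (same unit as gate_distances)
```

In practice the per-gate intensity profile would be interpolated to obtain sub-gate range resolution, but the argmax captures the principle.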
Depth segmentation is performed using the global histogram of this range image. After this depth segmentation, very good 2D object segmentations can be obtained, which can be used to classify persons and vehicles. An outlook is given towards operational application of this procedure.
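A minimal sketch of histogram-based depth segmentation, under assumptions not stated in the abstract: the target and background form the two dominant modes of the global range histogram, the target is the nearer layer, and the split is placed at the emptiest bin between the two modes.

```python
import numpy as np

def depth_segment(range_img, bins=64):
    """Split a range image at the deepest valley of its global histogram.

    Illustrative sketch only: assumes the two most populated histogram
    bins correspond to target and background, and thresholds at the
    least populated bin between them. The nearer layer is returned as
    the target mask (an assumption, not the paper's stated rule).
    """
    hist, edges = np.histogram(range_img, bins=bins)
    p1, p2 = np.sort(np.argsort(hist)[-2:])        # two dominant depth modes
    valley = p1 + np.argmin(hist[p1:p2 + 1])       # emptiest bin between them
    threshold = edges[valley + 1]
    return range_img < threshold                   # boolean mask of the nearer layer
```

The resulting mask is the 2D object segmentation that subsequent classification of persons and vehicles would operate on.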
In 2007, TNO started to fly several sensors on an unmanned helicopter platform. These sensors included RGB, B/W and thermal infrared cameras. In 2008, a spectrometer was added. The goal for 2010 is to be able to offer a low-altitude flying platform carrying several sensors. Development of these sensors will take place over the coming years. Since the total weight of the payload should be < 7 kg, the weight requirements for the individual sensors are quite strict. Applications include measuring gas concentrations, monitoring water quality, inspecting pipelines, etc. Collaboration is still possible.
Combining the information of several sensor systems is a difficult task. The first steps were taken in 2007, when RGB and thermal infrared images were combined with the coordinates of the platform itself. The offline data processing includes stitching video images, classification, and correcting for the instability of the helicopter itself. As environmental regulation becomes even stricter than today, it is expected that high-spatial-resolution sensors that can measure pollution near highways and in urban areas, monitor the water quality of rivers and lakes, and find and track pollution sources will be key systems in the near future.
In September 2007 and April 2008, flight campaigns were carried out, demonstrating two applications of the system: the detection of inland salt water, and the detection of benthic diatoms on an estuarine tidal flat. The results of the two cases are discussed.
A novel knowledge-based multi-agent image interpretation system has been developed which differs markedly from previous approaches, especially in its elaborate integration of high-level knowledge-based control with low-level image segmentation algorithms. Each agent in this system is responsible for one type of object and cooperates with other agents to come to a consistent overall image interpretation. Cooperation involves communicating hypotheses and resolving conflicts between individual interpretations. Agents have full control over the underlying segmentation algorithms, which they dynamically adapt to the image content given knowledge about global constraints, local information and personal beliefs. The system has been applied to IntraVascular Ultrasound (IVUS) images, which are segmented by cooperative agents specialized in lumen, vessel, calcified-plaque, shadow and sidebranch detection. IVUS image sequences from 7 patients were taken, and vessel and lumen contours were detected fully automatically. These were compared with expert-corrected semi-automatic contours. Results show good correlation between agents and observer, with r=0.84 for the lumen and r=0.92 for the vessel cross-sectional areas (n=1067). The paired difference between agents and observer was 0.13 ± 2.16 mm² for vessel and -0.14 ± 1.01 mm² for lumen cross-sectional areas. These results compare very well with the inter-observer variability reported in the literature.
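The hypothesis-communication and conflict-resolution step described above can be illustrated with a toy sketch. This is a hypothetical encoding, not the paper's algorithm: each agent's hypothesis is reduced to a claimed pixel set and a confidence, and where claims overlap, the less confident agent cedes the contested pixels.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    agent: str          # e.g. "lumen", "vessel", "calcified-plaque"
    region: set         # pixel indices claimed for this object (hypothetical encoding)
    confidence: float   # the agent's belief in its own interpretation

def resolve(hypotheses):
    """Toy conflict-resolution step in the spirit of the multi-agent system.

    Hypothetical sketch: hypotheses are honoured in order of decreasing
    confidence, and contested pixels go to the more confident agent, so
    the final interpretation assigns each pixel to at most one object.
    """
    ordered = sorted(hypotheses, key=lambda h: -h.confidence)
    taken, result = set(), {}
    for h in ordered:
        region = h.region - taken   # drop pixels already claimed by a stronger agent
        taken |= region
        result[h.agent] = region
    return result
```

In the actual system the agents would instead re-run and re-parameterize their segmentation algorithms until the interpretations agree; this sketch only shows the shape of the negotiation.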