Human automated target recognition (ATR) capability holds important tactical value in military as well as civilian applications. Unmanned systems equipped with real-time human ATR sensors and software can detect potential threats before human forces encounter them. Unattended ground stations may use human ATR in search-and-rescue applications to alert rescue teams when help is needed. The algorithm proposed in this study uses infrared imagery to detect people based on the radiance and shape of the human head. The algorithm works in a three-step process of segmentation, feature extraction, and classification. First, the IR image is segmented to reveal only human skin areas (e.g., arms, legs, heads). Next, three morphological features are extracted from each segmented object of interest. Finally, a classifier uses the features to determine whether the object is a head or a non-head, based on previous algorithmic training. Two types of classifiers were tested in this study: a k-nearest-neighbor classifier and various neural networks. Results show that with a neural network classifier, 97% accuracy in head identification is possible after examining two sequential uncorrelated frames containing the same human head in different views. Tests in a desert environment at nighttime show that the majority of test subjects are detected, with few false positives.
The ability to automatically detect humans in infrared images has value in military and civilian
applications. Robots and unattended ground stations equipped with real-time human ATR capability can operate
as scouts, perform reconnaissance for military units, and serve to locate humans in remote or hazardous sites.
With the algorithm proposed in this study, human targets can be detected in infrared images based on the
structure and radiance of the human head. The algorithm works in a three-step process. First, the infrared image
is segmented primarily based on edges and secondarily based on intensity of pixels. Once the regions of interest
have been determined, the segments undergo feature extraction, in which they are characterized based on
circularity and smoothness. The final step of the algorithm uses a k-Nearest Neighbor classifier to match the
segment's features to a database, determining whether the segment is a head or not. This algorithm operates best
in environments in which contrast between the human and the background is high, such as nighttime or indoors.
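The feature-matching step above can be sketched as a small k-Nearest Neighbor classifier over shape features. The circularity measure (the isoperimetric ratio 4πA/P²), the choice of k = 3, and the toy feature values below are illustrative assumptions, not the study's exact definitions:

```python
import numpy as np

def circularity(area, perimeter):
    # Isoperimetric ratio: 1.0 for a perfect circle, smaller for
    # elongated or ragged shapes. (Assumed definition.)
    return 4.0 * np.pi * area / perimeter ** 2

def knn_classify(features, train_features, train_labels, k=3):
    # Euclidean distance from the query segment to every database example.
    dists = np.linalg.norm(train_features - features, axis=1)
    nearest = train_labels[np.argsort(dists)[:k]]
    # Majority vote among the k nearest neighbors: 1 = head, 0 = non-head.
    return int(2 * nearest.sum() > k)

# Toy training database: [circularity, smoothness] pairs.
train = np.array([[0.92, 0.88], [0.89, 0.91], [0.35, 0.40], [0.50, 0.30]])
labels = np.array([1, 1, 0, 0])
print(knn_classify(np.array([0.90, 0.85]), train, labels))  # → 1
```

A round, smooth query segment lands near the two head examples in feature space and is voted a head; a real system would populate the database from training imagery rather than hand-picked values.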
Tests show that 82% accuracy in identification of human heads is possible for a single still image. After
analyzing two uncorrelated frames viewing the same scene, the likelihood of correctly classifying a human head
that appears in both frames is 97%.
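The jump from 82% on a single frame to 97% over two frames is consistent with treating the uncorrelated frames as independent detection attempts, where a head is classified correctly if it is recognized in at least one frame; a quick check of that interpretation:

```python
p_single = 0.82                    # single-frame classification accuracy
p_miss_both = (1 - p_single) ** 2  # head missed in both independent frames
p_two = 1 - p_miss_both            # head recognized in at least one frame
print(round(p_two, 4))  # → 0.9676, i.e. ~97%
```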
Distance measurement is necessary in a variety of fields, including targeting, surveillance, reconnaissance, robotics, and cartography. Today, the most commonly used method for distance measurement is laser ranging. However, laser rangers, being active systems, require more energy and cost more than passive systems, and they can be detected by an adversary. Stereoscopic vision, a passive system, requires little power and allows covert operation. This
study considers stereoscopic vision with a compact, portable system and investigates the essential parameters that can be optimized for accurate distance measurement. The main parameters addressed in this study are the distance between the two cameras, the kernel size used for correlation between the two images, and the quality of the image, measured by the standard deviation of pixel values. The distance estimation accuracy is determined as a function of these parameters and the range to target. To represent a compact, portable system, the study considered parallel camera pairs placed 6 inches or 12 inches apart. Using small visible-light digital cameras, the slant-range measurement error is less than 3% with 12-inch camera spacing and a correlation kernel 30 pixels wide. Larger camera spacing and shorter ranges to target increase the disparity and decrease the distance estimate error. Analytical error predictions explain the experimental results.
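The reported trends (larger baseline and shorter range yield larger disparity and smaller error) follow from the standard parallel-camera ranging relation Z = fB/d, whose first-order sensitivity to a disparity error is |ΔZ| ≈ Z²·Δd/(fB). The focal length and target range below are illustrative assumptions, not the study's hardware:

```python
def stereo_range(focal_px, baseline_in, disparity_px):
    # Parallel-axis stereo: range Z = f * B / d, with focal length f in
    # pixels, baseline B in inches, disparity d in pixels -> Z in inches.
    return focal_px * baseline_in / disparity_px

def range_error(focal_px, baseline_in, range_in, disparity_err_px=1.0):
    # First-order error model: a disparity error of dd pixels maps to
    # |dZ| ~= Z^2 * dd / (f * B), so error grows with the square of the
    # range and shrinks as the baseline increases.
    return range_in ** 2 * disparity_err_px / (focal_px * baseline_in)

f = 2000.0   # assumed focal length in pixels (illustrative)
z = 1200.0   # assumed 100 ft slant range, in inches (illustrative)

err_6 = range_error(f, 6.0, z)    # 6 inch camera spacing
err_12 = range_error(f, 12.0, z)  # 12 inch camera spacing
print(err_12 < err_6)                        # doubling the baseline halves the error
print(range_error(f, 12.0, 600.0) < err_12)  # halving the range quarters the error
```

Both comparisons print True, matching the abstract's observation that the wider 12-inch spacing and shorter ranges give the smaller distance-estimate error.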