Visualization central to sensor fusion in security systems
Published: 1 February 1994
Proceedings Volume 2093, Substance Identification Analytics; (1994)
Event: Substance Identification Technologies, 1993, Innsbruck, Austria
When human lives are at stake, it is vital that the man-machine interface be as informative and accurate as possible. Multiple but non-integrated sensors, such as two-dimensional displays that attempt to depict the contents of packages, simply do not transfer sufficient data to security personnel. In questionable circumstances, officials are forced to rely on a sixth sense to decide whether a particular package or individual should be detained for further investigation. This intuitive process is adequate in most situations, but it depends on a particular individual's level of training, experience, and emotional and physical state. The key to airport security success is to understand how an experienced official charged with safeguarding a particular area fuses the data presented to him or her. Once this is understood, the cognitive process could be significantly automated and the shortcomings associated with the human element substantially eliminated. Simply fusing the output of multiple sensors into a central system and then applying an algorithm does not solve the problem: the speed and accuracy of current sensor-fusion and AI techniques lag significantly behind what is available in the human mind. That is not to say the technology is unavailable; rather, its appropriate application has not yet been determined. An approach that first identifies which cues (visual and audible) are most important and useful to security personnel is essential. One answer incorporates a head-mounted display (HMD), preferably capable of displaying three-dimensional graphics, worn by personnel charged with protecting a particular port of entry. Depicted in the wide-angle HMD would be all available information from standard as well as test sensors. Data could be displayed in multiple formats, using a wide array of presentations, to provide the maximum amount of information for both the officer and the researcher.
To limit clutter, the unit would incorporate two features. The first is scaling, so that the data can be dynamically resized by a particular user. For instance, television cameras that pan an area may be considered less important than the X-ray images, so their windows could be made smaller; conversely, if a particular individual wants to zoom in on a camera image to get a closer look at a suspect, that "window" could be enlarged. Second, a head tracker could be incorporated so that the display appears continuous: as the users turn their heads, the image continues. This feature would be useful in instances where supporting sensor information is desired because of a cue shown in a previous sensor "window". Configuration tests must first be conducted using this system. Preliminary studies would show what information to include and what else would be desirable. Central to this concept is the notion that the end user, the security official, is intimately involved in the development loop. Once it is determined what information is used, as well as how it is used, automation of the cognitive process could commence, yielding an efficient, automated, and fully integrated sensor system.
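The abstract does not specify an implementation, but the two clutter-control features it describes (per-window scaling and a head tracker that pans a continuous wrap-around display) can be sketched as a toy layout manager. Everything here is illustrative: the class names, the sensor-feed names, and the fixed-azimuth window model are assumptions, not part of the original work.

```python
from dataclasses import dataclass

@dataclass
class SensorWindow:
    name: str         # hypothetical feed name, e.g. "xray" or "camera"
    azimuth: float    # fixed position (degrees) on the wrap-around display
    scale: float = 1.0

class HMDLayout:
    """Toy model of the two features: dynamic per-window scaling,
    and head-tracked panning over a display that stays fixed in
    world azimuth, so turning the head reveals adjacent windows."""

    def __init__(self, fov: float = 60.0):
        self.fov = fov        # angular width of the visible HMD region
        self.heading = 0.0    # current head direction from the tracker
        self.windows: list[SensorWindow] = []

    def add(self, window: SensorWindow) -> None:
        self.windows.append(window)

    def set_scale(self, name: str, scale: float) -> None:
        # Feature 1: shrink low-priority feeds, enlarge a zoomed camera.
        for w in self.windows:
            if w.name == name:
                w.scale = scale

    def on_head_turn(self, delta_deg: float) -> None:
        # Feature 2: head-tracker input pans the continuous display.
        self.heading = (self.heading + delta_deg) % 360.0

    def visible(self) -> list[str]:
        half = self.fov / 2.0
        names = []
        for w in self.windows:
            # Signed angular distance from the current heading.
            d = (w.azimuth - self.heading + 180.0) % 360.0 - 180.0
            if abs(d) <= half:
                names.append(w.name)
        return names
```

With a 60-degree field of view and an X-ray window at azimuth 0, a camera window at azimuth 50 is off-screen until the wearer turns 40 degrees toward it, which is the "previous sensor window" cueing behaviour the abstract describes.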
© 1994 Society of Photo-Optical Instrumentation Engineers (SPIE).
C. Gaertner, "Visualization central to sensor fusion in security systems", Proc. SPIE 2093, Substance Identification Analytics (1 February 1994).