Iris-C is an image codec designed for streaming-video applications that demand low-bit-rate, low-latency, lossless image
compression. To achieve compression at low latency, the codec combines the discrete wavelet transform, Exp-Golomb
coding, and online processes that construct dynamic models of the input video. Like H.264 and Dirac, the Iris-C codec
accepts input video in both the YUV and YCoCg colour spaces, but the system can also operate on Bayer RAW data
read directly from an image sensor. Testing shows that the Iris-C codec is competitive with the Dirac low-delay-syntax
codec, which is typically regarded as the state of the art in low-latency, lossless video compression.
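The two coding stages named above, a reversible wavelet transform followed by Exp-Golomb entropy coding, can be illustrated with a minimal sketch. This is not the Iris-C implementation; it shows a one-level integer Haar lifting step (a simple exactly-reversible wavelet, chosen here for brevity) and standard order-0 Exp-Golomb codewords, with an assumed zig-zag map from signed coefficients to the non-negative integers the code expects.

```python
def haar_lifting(x):
    """One level of an integer Haar wavelet transform via lifting.

    Splits x (even length) into low-pass approximations and high-pass
    details. Integer lifting is exactly reversible, a prerequisite for
    lossless coding. (Illustrative only; Iris-C's actual filters may differ.)
    """
    detail = [x[2 * i + 1] - x[2 * i] for i in range(len(x) // 2)]
    approx = [x[2 * i] + (detail[i] >> 1) for i in range(len(x) // 2)]
    return approx, detail

def signed_to_unsigned(v):
    """Zig-zag map signed coefficients to non-negative integers:
    0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ..."""
    return 2 * v if v >= 0 else -2 * v - 1

def exp_golomb_encode(n):
    """Order-0 Exp-Golomb codeword for a non-negative integer n:
    floor(log2(n+1)) leading zeros followed by the binary form of n+1."""
    binary = bin(n + 1)[2:]                  # binary of n+1, no '0b' prefix
    return "0" * (len(binary) - 1) + binary

def exp_golomb_decode(bits):
    """Decode a single order-0 Exp-Golomb codeword from a bit string."""
    zeros = 0
    while bits[zeros] == "0":                # count the zero prefix
        zeros += 1
    return int(bits[zeros:2 * zeros + 1], 2) - 1
```

Small detail coefficients, which dominate after a wavelet transform of natural video, map to short codewords (coefficient 0 encodes as the single bit "1"), which is why Exp-Golomb pairs well with wavelet front ends.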
Image exploitation is of increasing importance to the enterprise of building situational awareness from multi-source data.
It involves image acquisition; identification of objects of interest in imagery; storage, search, and retrieval of imagery;
and the distribution of imagery over possibly bandwidth-limited networks. This paper describes an image exploitation
application that uses image content alone to detect objects of interest, and that automatically establishes and preserves
spatial and temporal relationships between images, cameras, and objects. The application features an intuitive user
interface that exposes all images and information generated by the system to an operator, thus facilitating the formation
of situational awareness.
The Advanced Linked Extended Reconnaissance & Targeting (ALERT) Technology Demonstration (TD) project is addressing key operational needs of the Canadian Army's future Surveillance and Reconnaissance forces by fusing multi-sensor and tactical data, developing automated processes, and integrating beyond-line-of-sight sensing. We discuss concepts for displaying and fusing multi-sensor and tactical data within an Enhanced Operator Control Station (EOCS). The sensor data can originate from the Coyote's own visible-band and IR cameras, laser rangefinder, and ground-surveillance radar, as well as from beyond-line-of-sight systems such as a mini-UAV and unattended ground sensors. The authors address technical issues associated with the use of fully digital IR and day video cameras and discuss video-rate image processing developed to help the operator recognize poorly visible targets. Automatic target detection and recognition algorithms that process both IR and visible-band images have been investigated to draw the operator's attention to possible targets. The machine-generated information display requirements are presented alongside the human factors engineering aspects of the user interface in this complex environment, with a view to establishing user trust in the automation. The paper concludes with a summary of achievements to date and the steps to project completion.