Hyperspectral imaging sensors acquire images in a large number of spectral bands, unlike traditional electro-optical and infrared sensors, which sample only one or a few bands. Hyperspectral mosaic sensors acquire an image of all spectral bands in a single shot: using a patterned array of spectral filters, they measure different wavelength bands at different pixel locations. This comes at the cost of a lower spatial resolution, as the sampling rate per spectral band is lower. Software algorithms can compensate for this loss of spatial sampling in each spectral channel. Here we compare the image quality obtained with spatial bicubic interpolation and two categories of super-resolution algorithms: two single-frame super-resolution algorithms, which exploit spectral redundancies in the data, and two multi-frame super-resolution algorithms, which exploit spatio-temporal structure. We quantitatively assess the spatial and spectral image reconstruction quality on synthetic data as well as on semi-synthetic mosaic sensor data for applications in the security and medical domains. Our results show that multi-frame super-resolution provides the best spatial and signal-to-noise quality. The single-frame super-resolution approaches score lower on spatial sharpness but still provide a substantial improvement over mere spatial interpolation, while in some cases providing the best spectral quality.
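The sampling trade-off described above can be illustrated with a toy example. The sketch below (hypothetical sizes and names, not the actual sensor layout or algorithms from the study) simulates a periodic 2×2 spectral filter mosaic over a 4-band cube, showing that each band retains only 1/4 of the pixels, and reconstructs one band with a naive nearest-neighbor upsample as a stand-in for the spatial interpolation baseline:

```python
import numpy as np

# Toy hyperspectral cube: H x W pixels, 4 spectral bands, where band b
# is constant b so the reconstruction is easy to check by eye.
H, W = 8, 8
cube = np.stack([np.full((H, W), float(b)) for b in range(4)], axis=-1)

# Periodic 2x2 mosaic filter pattern: each pixel sees exactly one band.
pattern = np.array([[0, 1],
                    [2, 3]])
band_of = np.tile(pattern, (H // 2, W // 2))          # (H, W) band index map

# One-shot mosaic frame: every pixel records only its filter's band.
mosaic = np.take_along_axis(cube, band_of[..., None], axis=-1)[..., 0]

# Per band, only 1/4 of the pixels carry a sample -> lower spatial rate.
for b in range(4):
    assert (band_of == b).mean() == 0.25

# Naive per-band reconstruction: pull band 0's regular subgrid of samples
# and upsample by pixel repetition (a crude stand-in for bicubic
# interpolation; super-resolution methods would exploit the other bands
# or multiple frames instead).
sub = mosaic[band_of == 0].reshape(H // 2, W // 2)
recon0 = np.repeat(np.repeat(sub, 2, axis=0), 2, axis=1)
assert np.allclose(recon0, cube[..., 0])
```

With a constant test band the naive fill is exact; on real scenes the per-band subsampling loses high spatial frequencies, which is precisely what the compared super-resolution algorithms try to recover.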
Intelligent robotic autonomous systems (unmanned aerial/ground/surface/underwater vehicles) are attractive for military applications, as they can relieve humans of tedious or dangerous tasks. These systems require awareness of the environment and of their own performance to reach a mission goal. This awareness enables them to adapt their operations to handle unexpected changes in the environment and uncertainty in assessments. Components of the autonomous system cannot rely on perfect awareness or actuator execution, and mistakes in one component can affect the entire system. To obtain a robust system, a system-wide approach is needed, along with a realistic model of all aspects of the system and its environment. In this paper, we present our study on the design and development of a fully functional autonomous system, consisting of sensors, observation processing and behavior analysis, an information database, a knowledge base, communication, planning processes, and actuators. The system behaves as a teammate of a human operator and can perform tasks independently with minimal interaction. It keeps the human informed about relevant developments that may require human assistance, and the human can always redirect the system with high-level instructions. This communication behavior is implemented as a Social AI Layer (SAIL). The autonomous system was tested in a simulation environment to support rapid prototyping and evaluation. The simulation is based on the Robot Operating System (ROS), with fully modelled sensors and actuators, and the 3D graphics-enabled physics simulation software Gazebo. In this simulation, various flying and driving autonomous systems can execute their tasks in a realistic 3D environment with scripted or user-controlled threats. The results show the performance of autonomous operation as well as the interaction with humans.
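The teammate interaction pattern described above can be sketched in a few lines. The following is a minimal illustration with entirely hypothetical names (`Autonomy`, `notify`, `redirect`, `step`), not the actual SAIL or ROS interfaces: the system acts autonomously toward its goal, pushes only relevant events to the operator instead of waiting for approval, and accepts a high-level redirection at any time:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Autonomy:
    """Sketch of a teammate-style autonomous system (hypothetical API)."""
    goal: str
    log: List[str] = field(default_factory=list)   # messages to the operator
    pending: Optional[str] = None                  # operator redirection, if any

    def notify(self, event: str) -> None:
        # SAIL-style communication: inform the human, don't block on approval.
        self.log.append(f"notify operator: {event}")

    def redirect(self, instruction: str) -> None:
        # The human redirects with a high-level instruction, not low-level control.
        self.pending = instruction

    def step(self, observation: str) -> str:
        if self.pending:                           # human input always wins
            self.goal, self.pending = self.pending, None
        if "threat" in observation:
            self.notify(f"threat detected while pursuing '{self.goal}'")
        return f"act toward '{self.goal}'"

robot = Autonomy(goal="patrol sector A")
robot.step("clear")                    # autonomous operation, no interaction
robot.step("threat ahead")             # relevant event -> operator is informed
robot.redirect("return to base")       # high-level redirection by the human
action = robot.step("clear")           # next step follows the new goal
```

In the paper's system this loop is distributed over ROS nodes (sensors, planners, actuators) and the SAIL communication layer; the sketch only captures the interaction contract between system and operator.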