Traditional event detection from video frames is based on batch or offline algorithms: it is assumed that a single event is present within each video, and the videos are processed, typically via a pre-processing algorithm that requires enormous amounts of computation and CPU time. While this can be suitable for tasks with distinct training and testing phases where time is not critical, it is unacceptable for real-world applications that require prompt, real-time event interpretation. Motivated by the recent success of multi-model feature learning, such as generative adversarial networks (GANs), we propose a two-model approach for real-time detection. A GAN learns a generative model of the dataset and further optimizes it with a discriminator that learns per-sample differences between generated and real images. Analogously, the proposed architecture uses a model pre-trained on a large dataset to boost weakly labeled instances, in parallel with deep layers for small aerial targets, achieving high accuracy at a fraction of the training and detection time. We emphasize prior work on unsupervised learning because of the overhead of labeling training data in the sensor domain.
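The two-model idea above can be sketched as a cascade: a cheap pre-trained screener scores every frame, and the deeper, more expensive detector runs only on the frames the screener flags. This is a minimal illustration, not the paper's implementation; the model stand-ins and the threshold are hypothetical placeholders.

```python
def pretrained_screener(frame):
    """Stand-in for the pre-trained model: a cheap per-frame score in [0, 1].
    Here simply the mean pixel intensity of an 8-bit frame (a placeholder)."""
    return sum(frame) / (len(frame) * 255)

def deep_detector(frame):
    """Stand-in for the deeper layers: expensive but more accurate decision.
    A placeholder rule; a real system would run a learned detector here."""
    return max(frame) > 200

def detect_events(frames, threshold=0.5):
    """Run the costly detector only on frames the cheap screener flags,
    so most frames are handled in a fraction of the full computation time."""
    events = []
    for i, frame in enumerate(frames):
        if pretrained_screener(frame) >= threshold:  # cheap first pass
            if deep_detector(frame):                 # costly second pass
                events.append(i)
    return events
```

The saving comes from the cascade structure: the deep model's cost is paid only on the (typically small) fraction of frames that survive the first pass, which is what makes real-time operation plausible.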
This paper addresses the challenges of providing war fighters with the best possible actionable information from diverse sensing modalities using advances in big data and machine learning. We start by presenting the intelligence, surveillance, and reconnaissance (ISR) big-data challenges associated with the Third Offset Strategy. Current approaches to big data are shown to be limited with respect to reasoning and understanding. We present a discussion of what meaning making and understanding require. We posit that, for human-machine collaborative solutions to address the requirements of the strategy, a new approach, Qualia Exploitation of Sensor Technology (QuEST), will be required. The requirements for developing a QuEST theory of knowledge are discussed, and finally an engineering approach for achieving situation understanding is presented.
Despite advances in face recognition research, current face recognition systems are still not accurate or robust
enough to be deployed in uncontrolled environments. A pose- and illumination-invariant face recognition system is
still lacking. This research exploits the relationship between thermal infrared and visible imagery to estimate a
3D face with visible texture from infrared imagery. The relationship between visible and thermal infrared texture
is learned using kernel canonical correlation analysis (KCCA), and a 3D modeler is then used to estimate the
geometric structure from the predicted visible imagery. This research will find its application in uncontrolled
environments where illumination- and pose-invariant identification or tracking is required at long range, such as
urban search and rescue (Amber alerts, missing dementia patients) and manhunt scenarios.
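The cross-modal learning step can be sketched with plain linear CCA for brevity; KCCA replaces the raw feature vectors with kernel-induced features but finds maximally correlated projections in the same way. The data, dimensions, and regularization value below are synthetic assumptions for illustration, not the paper's setup.

```python
import numpy as np

def cca(X, Y, reg=1e-3):
    """Linear CCA: find projections A, B maximizing correlation between
    X @ A and Y @ B. X is (n, dx), e.g. thermal features; Y is (n, dy),
    e.g. visible features. Returns projections and canonical correlations."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    # Regularized covariance blocks (reg keeps the inverses well-conditioned)
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n

    def inv_sqrt(C):
        # Inverse matrix square root via eigendecomposition (C is symmetric PD)
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    # SVD of the whitened cross-covariance gives the canonical directions
    K = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    A = inv_sqrt(Cxx) @ U
    B = inv_sqrt(Cyy) @ Vt.T
    return A, B, s
```

Once the correlated subspace is learned, a thermal probe can be projected with `A` and matched or regressed against visible texture projected with `B`; the subsequent 3D modeling stage then operates on the predicted visible imagery.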