Translating environmental knowledge from a bird's-eye-view perspective, such as a map, to a first-person egocentric perspective is notoriously challenging, but critical for effective navigation and environment learning. Pointing error, the angular difference between the perceived location and the actual location, is an important measure for estimating how well the environment has been learned. Traditionally, errors in pointing estimates were computed by manually noting the angular difference. With the advent of commercial low-cost mobile eye trackers, it has become possible to couple the advantages of automated image-processing techniques with these spatial learning studies. This paper presents a vision-based analytic approach for calculating pointing error measures in real-world navigation studies, relying only on data from mobile eye-tracking devices. The proposed method involves three steps: panorama generation, probe image localization using feature matching, and navigation pointing error estimation. This first-of-its-kind application has game-changing potential in the field of cognitive research using eye-tracking technology for understanding human navigation and environment learning, and has been successfully adopted by cognitive psychologists.
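The final step of the pipeline, pointing error estimation, reduces to a wrapped angular difference once the perceived and actual bearings of the target are known. The following is a minimal sketch under the assumption that the perceived bearing has already been recovered from the localized probe image; the function names and the simple pinhole approximation are illustrative, not the paper's implementation:

```python
import math

def pointing_error(perceived_bearing_deg, actual_bearing_deg):
    """Absolute angular difference between perceived and actual
    target bearings, wrapped into [0, 180] degrees."""
    diff = (perceived_bearing_deg - actual_bearing_deg) % 360.0
    return min(diff, 360.0 - diff)

def bearing_from_offset(pixel_x, image_width, horizontal_fov_deg):
    """Approximate bearing of a target from its horizontal pixel
    position in a frame with a known horizontal field of view
    (simple linear pinhole approximation)."""
    return (pixel_x / image_width - 0.5) * horizontal_fov_deg

# A participant points at bearing 350 degrees when the true target
# direction is 20 degrees; the error wraps across 0/360 correctly.
print(pointing_error(350.0, 20.0))
```

The wrap-around handling matters in practice: a naive subtraction would report a 330-degree error for the example above instead of 30 degrees.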
Face recognition technologies have been in high demand over the past few decades due to the increase in human-computer interaction. Face recognition is also an essential component in interpreting human emotions, intentions, and facial expressions in smart environments. This non-intrusive biometric authentication approach relies on identifying unique facial features and matching similar structures for identification and recognition. Application areas of facial recognition systems include homeland and border security, identification for law enforcement, access control to secure networks, authentication for online banking, and video surveillance. While it is easy for humans to recognize faces under varying illumination conditions, it remains a challenging task in computer vision. Non-uniform illumination and uncontrolled operating environments can impair the performance of visual-spectrum-based recognition systems. To address these difficulties, a novel Anisotropic Gradient Facial Recognition (AGFR) system capable of autonomous thermal-infrared-to-visible face recognition is proposed. The main contributions of this paper are a framework for thermal/fused-thermal-visible-to-visible face recognition and a novel human-visual-system-inspired thermal-visible image fusion technique. Extensive computer simulations using the CARL, IRIS, AT&T, Yale, and Yale-B databases demonstrate the efficiency, accuracy, and robustness of the AGFR system.
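The idea of thermal-visible fusion can be illustrated with a simple gradient-weighted scheme, in the spirit of the gradient-based theme above: each modality contributes more at pixels where it carries more local structure. This is only a sketch under that assumption; the actual AGFR fusion technique may differ substantially:

```python
import numpy as np

def gradient_magnitude(img):
    """Per-pixel gradient magnitude via forward differences."""
    img = np.asarray(img, dtype=float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, :-1] = np.diff(img, axis=1)
    gy[:-1, :] = np.diff(img, axis=0)
    return np.hypot(gx, gy)

def fuse(visible, thermal, eps=1e-6):
    """Weight each source by its local gradient magnitude, so the
    modality with more local detail dominates at each pixel; eps
    avoids division by zero in flat regions."""
    visible = np.asarray(visible, dtype=float)
    thermal = np.asarray(thermal, dtype=float)
    wv = gradient_magnitude(visible) + eps
    wt = gradient_magnitude(thermal) + eps
    return (wv * visible + wt * thermal) / (wv + wt)
```

In flat regions the weights are equal and the scheme degenerates to a plain average; near an edge in one modality, that modality's intensity dominates the fused pixel.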
Although not as popular as fingerprint biometrics, palm prints have garnered interest in the scientific community for the rich amount of distinctive information available on the palm. In this paper, a novel method for touchless palm print stitching to increase the effective area is presented. The method is not only rotation invariant but also able to robustly handle distortions common to touchless systems, such as illumination and pose variations. The proposed method can also handle partial palmprints, which have a high chance of occurrence at a crime scene, by stitching them together to produce a larger, up-to-full-size palmprint for authentication purposes. Experimental results are shown for the IIT-D palmprint database, from which pseudo partial palmprints were generated by cropping and randomly rotating the images. Furthermore, the quality of the stitching algorithm is assessed through extensive computer simulations and visual analysis of the stitched images. Experimental results also show that stitching significantly increases the area of the palm image available for feature point detection and hence provides a way to increase the accuracy and reliability of detection.
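The core of any stitching pipeline is estimating where two partial images overlap before blending them. The following is a deliberately simplified sketch that handles only horizontal translation between two same-height strips, scored by normalized cross-correlation; the paper's method additionally handles rotation, illumination, and pose variations, which this sketch does not:

```python
import numpy as np

def best_overlap(left, right, min_overlap=4):
    """Find the horizontal overlap width between two same-height
    image strips that maximizes normalized cross-correlation."""
    best_w, best_score = min_overlap, -np.inf
    for w in range(min_overlap, min(left.shape[1], right.shape[1]) + 1):
        a = left[:, -w:].ravel().astype(float)
        b = right[:, :w].ravel().astype(float)
        a = a - a.mean()
        b = b - b.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        score = (a @ b) / denom if denom > 0 else 0.0
        if score > best_score:
            best_w, best_score = w, score
    return best_w

def stitch(left, right):
    """Blend the two strips by averaging the estimated overlap region."""
    w = best_overlap(left, right)
    blended = (left[:, -w:].astype(float) + right[:, :w].astype(float)) / 2
    return np.hstack([left[:, :-w].astype(float), blended,
                      right[:, w:].astype(float)])
```

When the two strips come from the same source image, the stitched result reconstructs the original exactly, which is a convenient sanity check for the overlap estimate.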
In this paper, a novel technique to mosaic multiview contactless finger images is presented. The technique makes use of different correlation methods, such as the alpha-trimmed correlation, Pearson's correlation, Kendall's correlation, and Spearman's correlation, to combine multiple views of the finger. The key contributions of the algorithm are that it 1) stitches images more accurately, 2) provides better image fusion effects, 3) yields a better visual effect in the overall image, and 4) is more reliable. Extensive computer simulations show that the proposed method produces better or comparable stitched images relative to several state-of-the-art methods, such as those presented by Feng Liu, K. Choi, H. Choi, and G. Parziale. In addition, we compare the various correlation techniques with the correlation method mentioned in the cited work and analyze the output. In the future, this method can be extended to obtain a 3D model of the finger using multiple views, and to help in generating scenic panoramic images and underwater 360-degree panoramas.
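The correlation measures named above can each serve as an alignment score between overlapping image regions. Below is a self-contained sketch of the four measures in plain NumPy; the alpha-trimmed variant shown here (drop the most extreme sample pairs before computing Pearson's correlation) is an illustrative robustness heuristic, and the paper's exact formulation may differ:

```python
import numpy as np

def pearson(x, y):
    """Pearson's linear correlation coefficient."""
    x = np.asarray(x, float) - np.mean(x)
    y = np.asarray(y, float) - np.mean(y)
    return float((x @ y) / (np.linalg.norm(x) * np.linalg.norm(y)))

def rank(v):
    """Ranks without tie correction (sufficient for continuous data)."""
    order = np.argsort(v)
    r = np.empty(len(v))
    r[order] = np.arange(len(v))
    return r

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the ranks."""
    return pearson(rank(np.asarray(x)), rank(np.asarray(y)))

def kendall(x, y):
    """Kendall's tau via pairwise concordance counting (O(n^2))."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    n = len(x)
    s = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            s += np.sign(x[i] - x[j]) * np.sign(y[i] - y[j])
    return float(s / (n * (n - 1) / 2))

def alpha_trimmed_pearson(x, y, alpha=0.1):
    """Illustrative alpha-trimmed variant: drop the alpha fraction of
    sample pairs farthest from the means, then compute Pearson."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    resid = np.abs(x - x.mean()) + np.abs(y - y.mean())
    keep = np.argsort(resid)[: int(np.ceil((1 - alpha) * len(x)))]
    return pearson(x[keep], y[keep])
```

The rank-based measures (Spearman, Kendall) are insensitive to monotone intensity changes between views, which is one motivation for comparing them against plain Pearson correlation in a mosaicking context.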
Publisher’s Note: This paper, originally published on 25 May 2016, was replaced with a revised version on 16 June 2016. If you downloaded the original PDF, but are unable to access the revision, please contact SPIE Digital Library Customer Service for assistance.
One of the most important areas in biometrics is matching partial fingerprints against fingerprint databases. Recently, significant progress has been made in designing fingerprint identification systems that cope with missing fingerprint information. However, dependable reconstruction of fingerprint images remains challenging due to the complexity and the ill-posed nature of the problem. In this article, both binary and gray-level fingerprint images are reconstructed. This paper also presents a new similarity score to evaluate the quality of the reconstructed binary image. The proposed fingerprint image identification system can be automated and extended to numerous other security applications such as postmortem fingerprints, forensic science, investigations, artificial intelligence, robotics, access control, and financial security, as well as the verification of firearm purchasers, driver license applicants, etc.
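To make the idea of scoring a reconstructed binary image concrete, one simple candidate is the Jaccard overlap of the ridge (foreground) pixels between the reconstruction and a reference image. This is purely illustrative; the similarity score actually proposed in the paper may be defined differently:

```python
import numpy as np

def binary_similarity(reconstructed, reference):
    """Jaccard overlap of foreground pixels between a reconstructed
    binary fingerprint image and a reference: 1.0 means identical
    ridge patterns, 0.0 means no foreground overlap at all."""
    a = np.asarray(reconstructed, bool)
    b = np.asarray(reference, bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both images empty: treat as identical
    return float(np.logical_and(a, b).sum() / union)
```

A score like this is symmetric and bounded in [0, 1], which makes it easy to threshold or to average across a database when benchmarking reconstruction quality.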