Most state-of-the-art surveillance techniques focus primarily on daytime imagery, and limited research has been performed on nighttime monitoring. Surveillance is often more important in darker environments, since many activities of interest occur at night. Nighttime imagery, however, presents its own challenges: it is mostly monochrome and very noisy, and night vision systems are further degraded by bright lights or glare off shiny surfaces. Color imagery has several benefits over grayscale imagery. The human eye can discriminate a broad spectrum of colors but is limited to about 100 shades of gray. Color also drives visual attention and aids in scene understanding. Moreover, contextual information about the scene affects the way humans distinguish and recognize objects. The essential steps of a colorization process are the choice of an appropriate color image model and a color mapping scheme. To enhance the relevant information in nighttime images, a color mapping or color transfer technique is employed. This paper proposes a robust pixel-based color transfer architecture that maps the color characteristics of daytime images onto nighttime images. The architecture is also capable of compensating for image registration issues encountered during acquisition. A visual analysis of the results demonstrates that the proposed method outperforms state-of-the-art methods and is robust to different imaging sensors.
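As a rough illustration of pixel-based color transfer, the sketch below matches the per-channel mean and standard deviation of a (gray-replicated) nighttime image to those of a daytime reference, in the spirit of global statistics transfer. The function name and the simple mean/variance mapping are illustrative assumptions, not the proposed architecture, which additionally compensates for registration errors.

```python
import numpy as np

def stats_color_transfer(target, reference):
    """Shift each channel of `target` so that its mean and standard
    deviation match those of `reference` (global statistics transfer).
    Both inputs are (H, W, 3) arrays with values in [0, 255]."""
    out = np.empty(target.shape, dtype=np.float64)
    for c in range(3):
        t = target[..., c].astype(np.float64)
        r = reference[..., c].astype(np.float64)
        t_std = t.std() if t.std() > 0 else 1.0  # guard against flat channels
        out[..., c] = (t - t.mean()) / t_std * r.std() + r.mean()
    return np.clip(out, 0.0, 255.0)
```

For a monochrome nighttime frame, the single gray channel would first be replicated into three channels so that each can inherit the statistics of the corresponding daytime channel.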
Facial emotion recognition technology finds numerous real-life applications in virtual learning, cognitive psychology analysis, avatar animation, neuromarketing, human-machine interaction, and entertainment systems. Most state-of-the-art techniques focus primarily on visible-spectrum information for emotion recognition. This is arduous, as the expression of emotion varies significantly across individuals. Moreover, visible images are susceptible to variations in illumination: low lighting, variation in pose, aging, and disguise have a substantial impact on image appearance and textural information. Even though great advances have been made in the field, facial emotion recognition using existing techniques often falls short of human performance. To overcome these shortcomings, thermal images are preferred to visible images. Thermal images a) are less sensitive to lighting conditions, b) have consistent thermal signatures, and c) capture the temperature distribution formed by the facial vein branches. This paper proposes a robust emotion recognition system using thermal images, TERNet. To accomplish this, a customized convolutional neural network (CNN) is employed, which possesses excellent generalization capabilities. The architecture adopts features obtained via transfer learning from the VGG-Face CNN model, which is further fine-tuned with the thermal expression face data from the TUFTS face database. Computer simulations demonstrate an accuracy of 96.2%, which compares favorably with state-of-the-art models.
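The transfer-learning recipe behind such a system — freeze a pretrained feature extractor and train only a new classification head on the target data — can be sketched in miniature. In the toy sketch below, the "backbone" is a fixed random projection standing in for the VGG-Face convolutional layers, and the data are synthetic stand-ins for labelled thermal-expression images; only the structure of the recipe is faithful, not the TERNet model itself.

```python
import numpy as np

def finetune_demo(seed=42):
    """Toy transfer learning: keep a 'pretrained' feature extractor
    frozen and train only a new softmax classification head."""
    rng = np.random.default_rng(seed)

    # Stand-in for a pretrained backbone (e.g. VGG-Face conv layers):
    # a fixed projection + ReLU, standardized and frozen throughout.
    W_backbone = rng.standard_normal((64, 16))
    def backbone(x):
        f = np.maximum(x @ W_backbone, 0.0)
        return (f - f.mean(0)) / (f.std(0) + 1e-8)

    # Synthetic stand-in for labelled thermal-expression data (3 classes).
    n, k = 300, 3
    X = rng.standard_normal((n, 64))
    F = backbone(X)                                   # frozen features
    y = (F @ rng.standard_normal((16, k))).argmax(axis=1)

    # Fine-tune only the new head: softmax cross-entropy, gradient descent.
    W_head = np.zeros((16, k))
    onehot = np.eye(k)[y]
    for _ in range(500):
        logits = F @ W_head
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        W_head -= 0.1 * (F.T @ (p - onehot)) / n      # backbone untouched
    return ((F @ W_head).argmax(axis=1) == y).mean()
```

In the real pipeline, fine-tuning would instead update the later layers of the pretrained CNN with a small learning rate on the thermal face data.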
In the field of vision-based systems for object detection and classification, thresholding is a key pre-processing step and a well-known technique for image segmentation. Segmentation of medical images, such as Computed Axial Tomography (CAT), Magnetic Resonance Imaging (MRI), X-ray, phase contrast microscopy, and histological images, presents problems such as high variability in human anatomy and variation across imaging modalities. Recent advances in computer-aided diagnosis of histological images help facilitate the detection and classification of diseases. Since most pathology diagnosis depends on the expertise and ability of the pathologist, there is a clear need for an automated assessment system. Histological images are stained to specific colors to differentiate each component in the tissue. Segmentation and analysis of such images is problematic, as they present high variability in color and cell clusters. This paper presents an adaptive thresholding technique that aims at segmenting cell structures from Haematoxylin and Eosin stained images. The thresholded result can further be used by pathologists to perform effective diagnosis. The effectiveness of the proposed method is analyzed by visually comparing the results to state-of-the-art thresholding methods such as Otsu, Niblack, Sauvola, Bernsen, and Wolf. Computer simulations demonstrate the efficiency of the proposed method in segmenting critical information.
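Since the proposed method is compared against local thresholding baselines, a sketch of one of them helps fix ideas. Below is Sauvola's local threshold, T = m * (1 + k * (s / R - 1)), where m and s are the windowed mean and standard deviation, computed here with integral images; the window size and parameters are typical defaults, not values from the paper.

```python
import numpy as np

def sauvola_threshold(img, w=15, k=0.2, R=128.0):
    """Sauvola local thresholding: T = m * (1 + k * (s / R - 1)),
    with m, s the mean/std in a w x w window around each pixel.
    Returns a boolean mask (True where the pixel exceeds T)."""
    img = img.astype(np.float64)
    pad = w // 2
    p = np.pad(img, pad, mode="edge")
    # Integral images of p and p**2, with a leading zero row/column.
    S1 = np.pad(p, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    S2 = np.pad(p ** 2, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    H, W = img.shape
    def win_sum(S):  # sum over each w x w window in O(1) per pixel
        return S[w:w + H, w:w + W] - S[:H, w:w + W] - S[w:w + H, :W] + S[:H, :W]
    m = win_sum(S1) / (w * w)
    s = np.sqrt(np.maximum(win_sum(S2) / (w * w) - m ** 2, 0.0))
    T = m * (1.0 + k * (s / R - 1.0))
    return img > T
```

Because the threshold tracks the local mean and rises with local contrast, dark cell structures on a brighter stained background fall below T even when global illumination varies across the slide.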
In this paper, a novel technique to mosaic multiview contactless finger images is presented. The technique uses different correlation methods, such as the alpha-trimmed correlation, Pearson's correlation, Kendall's correlation, and Spearman's correlation, to combine multiple views of the finger. The key contributions of the algorithm are that it: 1) stitches images more accurately, 2) provides better image fusion effects, 3) produces a better visual effect in the overall image, and 4) is more reliable. Extensive computer simulations show that the proposed method produces stitched images that are better than or comparable to those of several state-of-the-art methods, such as the ones presented by Feng Liu, K. Choi, H. Choi, and G. Parziale. In addition, we also compare the various correlation techniques with the correlation method of the cited work and analyze the output. In the future, this method can be extended to obtain a 3D model of the finger from multiple views, and to help generate scenic panoramic images and underwater 360-degree panoramas.
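One way to realize correlation-driven stitching is to choose, for each pair of adjacent views, the overlap width that maximizes a correlation score between the abutting strips, then blend or concatenate at that offset. The sketch below uses Pearson's correlation for a purely horizontal overlap; this is an illustrative simplification (rank-based scores such as Kendall's or Spearman's would simply swap the scoring function), not the paper's full mosaicking pipeline.

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length 1-D arrays."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def best_overlap(left, right, min_ov=5):
    """Choose the overlap width (strictly smaller than either view)
    that maximizes Pearson correlation between the right edge of
    `left` and the left edge of `right`."""
    best_ov, best_r = min_ov, -2.0
    for ov in range(min_ov, min(left.shape[1], right.shape[1])):
        r = pearson(left[:, -ov:].ravel(), right[:, :ov].ravel())
        if r > best_r:
            best_ov, best_r = ov, r
    return best_ov
```

Once the best overlap `ov` is found, the two views can be joined as `np.hstack([left, right[:, ov:]])`; a production mosaicker would additionally blend the overlapping strips rather than cut at the seam.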