Recent breakthroughs in EO/IR sensing, real-time signal processing, and deep machine learning have enabled standoff heart rate estimation from facial and body video, a technology also known as remote photoplethysmography (rPPG). Research and development of rPPG has attracted much attention recently. This paper gives a timely review of this fast-paced field to give the researcher, engineer, and graduate student a quick grasp of recent advances in rPPG. We first review two rPPG design approaches: color-variation-based and motion-based detection. To enable rPPG in less constrained use cases, various signal processing and machine learning algorithms have been developed to handle signal variability introduced by lighting sources, view angles, and subject motion. To help newcomers start work in this field quickly, we then describe existing rPPG research datasets, open-source rPPG research tools, and several demonstration systems. Six commonly used rPPG algorithm evaluation metrics are described to evaluate and visualize research progress in this field. As rPPG technology matures, more application domains become possible. We cover six applications of rPPG in the commercial, security, and defense domains, including emerging applications in biometric liveness and video media authenticity. Finally, we outline challenges yet to be overcome, especially in the security and defense domain: unconstrained outdoor environments, rPPG from air platforms, nighttime operation, and moving, non-cooperative subjects. These challenges require special algorithmic considerations.
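The color-variation approach mentioned above can be illustrated with a minimal sketch: track the mean green-channel intensity of a face region over time, then find the dominant frequency in the plausible heart-rate band. All function names, the band limits, and the brute-force DFT below are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of color-variation-based rPPG: estimate heart rate
# from a 1-D trace of mean green-channel face intensity. Parameters are
# illustrative; real systems use face tracking and stronger filtering.
import math

def estimate_heart_rate(green_trace, fps):
    """Return estimated heart rate in beats/min from a green-channel trace."""
    n = len(green_trace)
    mean = sum(green_trace) / n
    x = [v - mean for v in green_trace]          # remove the DC component
    best_freq, best_power = 0.0, -1.0
    for k in range(1, n // 2):                   # brute-force DFT bin scan
        f = k * fps / n                          # frequency of bin k in Hz
        if not (0.7 <= f <= 4.0):                # plausible HR band: 42-240 bpm
            continue
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        power = re * re + im * im
        if power > best_power:
            best_freq, best_power = f, power
    return best_freq * 60.0                      # Hz -> beats per minute

# Usage: a synthetic 1.2 Hz (72 bpm) pulse sampled at 30 fps for 10 s
fps = 30.0
trace = [100 + 0.5 * math.sin(2 * math.pi * 1.2 * t / fps) for t in range(300)]
print(round(estimate_heart_rate(trace, fps)))    # 72
```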
Proc. SPIE. 10993, Mobile Multimedia/Image Processing, Security, and Applications 2019
KEYWORDS: Mobile devices, Visual process modeling, Data modeling, Field programmable gate arrays, Clouds, Quantization, Machine learning, Artificial intelligence, Performance modeling, Information operations
Recent breakthroughs in deep learning and artificial intelligence technologies have enabled numerous mobile applications. While traditional computation paradigms rely on mobile sensing and cloud computing, deep learning implemented on mobile devices provides several advantages. These advantages include low communication bandwidth, small cloud computing resource cost, quick response time, and improved data privacy. Research and development of deep learning on mobile and embedded devices has recently attracted much attention. This paper provides a timely review of this fast-paced field to give the researcher, engineer, practitioner, and graduate student a quick grasp of recent advances in deep learning on mobile devices. In this paper, we discuss hardware architectures for mobile deep learning, including Field Programmable Gate Arrays (FPGA), Application Specific Integrated Circuits (ASIC), and recent mobile Graphics Processing Units (GPUs). We present Size, Weight, Area and Power (SWAP) considerations and their relation to algorithm optimizations, such as quantization, pruning, compression, and approximations that simplify computation while retaining performance accuracy. We cover existing systems and give a state-of-the-industry review of TensorFlow, MXNet, Mobile AI Compute Engine (MACE), and the Paddle-mobile deep learning platform. We discuss resources for mobile deep learning practitioners, including tools, libraries, models, and performance benchmarks. We present applications of various mobile sensing modalities to industries ranging from robotics, healthcare, multimedia, and biometrics to autonomous driving and defense. We address the key deep learning challenges to overcome, including low-quality data and small training/adaptation data sets. In addition, the review provides numerous citations and links to existing code bases implementing various technologies. These resources lower the user's barrier to entry into the field of mobile deep learning.
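Among the algorithm optimizations listed above, quantization is perhaps the simplest to sketch: map floating-point weights onto 8-bit integers plus a scale factor, trading a small accuracy loss for a 4x size reduction and faster integer arithmetic. The symmetric scheme and function names below are illustrative assumptions, not the specific methods surveyed in the paper.

```python
# Illustrative sketch of post-training symmetric 8-bit weight quantization.
# Real frameworks also calibrate activations and handle per-channel scales.
def quantize_int8(weights):
    """Map float weights to int8 values plus a single scale factor."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.0, 0.9]
q, scale = quantize_int8(weights)
print(q)   # [42, -127, 0, 90]
```

The largest-magnitude weight maps to 127, so the quantization error per weight is bounded by half the scale; pruning and compression attack model size along complementary axes.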
The ubiquity of mobile devices offers the opportunity to exploit device-generated signal data for biometric identification, health monitoring, and activity recognition. In particular, mobile devices contain an Inertial Measurement Unit (IMU) that produces acceleration and rotational rate information from the IMU accelerometers and gyros. These signals reflect motion properties of the human carrier. It is well-known that the complexity of bio-dynamical systems gives rise to chaotic dynamics. Knowledge of chaotic properties of these systems has shown utility, for example, in detecting abnormal medical conditions and neurological disorders. Chaotic dynamics has been found, in the lab, in bio-dynamical systems data such as electrocardiogram (heart), electroencephalogram (brain), and gait data. In this paper, we investigate the following question: can we detect chaotic dynamics in human gait as measured by IMU acceleration and gyro data from mobile phones? To detect chaotic dynamics, we perform recurrence analysis on real gyro and accelerometer signal data obtained from mobile devices. We apply the delay coordinate embedding approach from Takens' theorem to reconstruct the phase space trajectory of the multi-dimensional gait dynamical system. We use mutual information properties of the signal to estimate the appropriate delay value, and the false nearest neighbor approach to determine the phase space embedding dimension. We use a correlation dimension-based approach together with estimation of the largest Lyapunov exponent to make the chaotic dynamics detection decision. We investigate the ability to detect chaotic dynamics for the different one-dimensional IMU signals, across human subject and walking modes, and as a function of different phone locations on the human carrier.
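The delay coordinate embedding step described above can be sketched in a few lines: each point of the reconstructed trajectory is a vector of time-delayed samples of the one-dimensional IMU signal. The delay `tau` and dimension `dim` below are placeholders for the mutual-information and false-nearest-neighbor estimates the paper describes; the function name is hypothetical.

```python
# Hypothetical sketch of delay-coordinate embedding (Takens' theorem):
# reconstruct a phase-space trajectory from a 1-D IMU sample stream.
def delay_embed(signal, dim, tau):
    """Return the list of dim-dimensional delay vectors with lag tau samples."""
    n = len(signal) - (dim - 1) * tau            # number of full delay vectors
    return [tuple(signal[i + j * tau] for j in range(dim)) for i in range(n)]

x = list(range(10))                  # stand-in for a gyro sample stream
print(delay_embed(x, dim=3, tau=2)[:2])   # [(0, 2, 4), (1, 3, 5)]
```

Recurrence analysis, correlation dimension, and Lyapunov exponent estimates all operate on the point cloud this function produces.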
Real-time wavefront control for adaptive laser communication and imaging systems requires fast measurement of image quality.
Statistical analysis of the speckle field provides an effective image quality criterion for adaptive correction of phase-distorted images. We propose an analog continuous-time VLSI (very-large-scale integration) spectrum analysis chip to provide such a real-time image quality measurement. The chip takes as analog input the signal sensed by a photodetector located in the speckle field and computes its spectrum distribution continuously. Experiments and analysis on a distorted laser beam were conducted with the analog spectrum analysis chip. A target-in-the-loop system is under development to demonstrate the capability of real-time adaptive imaging.
Conference Committee Involvement (3)
Multimodal Image Exploitation and Learning 2022
3 April 2022 | Orlando, Florida, United States
Multimodal Image Exploitation and Learning 2021
12 April 2021 | Online Only, Florida, United States
Mobile Image Exploitation and Learning 2020
27 April 2020 | Online Only, California, United States