This paper considers the localization of a point target from an optical sensor's focal plane array (FPA) with a dead zone separating neighboring pixels. The Cramér-Rao lower bound (CRLB) for the covariance of the maximum likelihood estimate (MLE) of the target location is derived based on the assumptions that the energy density of the target deposited in the FPA conforms to a Gaussian point spread function (PSF) and that the pixel noise follows a Poisson model (i.e., the mean and variance in each pixel are proportional to the pixel area). Extensive simulation results are provided to demonstrate the efficiency of the MLE of the target location in the FPA. Furthermore, we investigate how the estimation performance changes with the pixel size for a given dead zone width. It is shown that there is an optimal pixel size which minimizes the CRLB for a given dead zone width.
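As a rough illustration of the measurement model (not taken from the paper; the array size, pixel pitch, dead-zone width, PSF parameters, and background rate below are arbitrary), the following sketch deposits a Gaussian PSF's energy into the active pixel areas separated by a dead zone, adds a background mean proportional to the active pixel area, and draws Poisson counts:

```python
import math, random

def gauss_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def pixel_means(x0, y0, sigma, total_energy, bg_rate,
                n=8, pitch=1.0, dead=0.1):
    """Mean Poisson count in each pixel of an n-by-n FPA.

    The photosensitive span of pixel i along one axis is
    [i*pitch + dead/2, (i+1)*pitch - dead/2]; a dead zone of width
    `dead` separates neighboring pixels.  The target deposits
    `total_energy` according to a separable Gaussian PSF centered at
    (x0, y0), and a background whose mean is proportional to the active
    pixel area (rate `bg_rate` per unit area) models the pixel noise.
    """
    active_area = (pitch - dead) ** 2
    means = [[0.0] * n for _ in range(n)]
    for i in range(n):
        ax, bx = i * pitch + dead / 2, (i + 1) * pitch - dead / 2
        fx = gauss_cdf((bx - x0) / sigma) - gauss_cdf((ax - x0) / sigma)
        for j in range(n):
            ay, by = j * pitch + dead / 2, (j + 1) * pitch - dead / 2
            fy = gauss_cdf((by - y0) / sigma) - gauss_cdf((ay - y0) / sigma)
            means[i][j] = total_energy * fx * fy + bg_rate * active_area
    return means

def poisson_sample(lam, rng=random):
    """Knuth's multiplicative method; adequate for the modest means here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

mu = pixel_means(x0=4.3, y0=3.7, sigma=0.8, total_energy=200.0, bg_rate=2.0)
frame = [[poisson_sample(m) for m in row] for row in mu]  # one noisy frame
```

Note that the energy falling in the dead zones is lost, which is why shrinking the pixels (and hence multiplying the dead zones) eventually degrades the CRLB.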
In this paper, we demonstrate the use of pupillary measurements as indices of cognitive workload. We analyze the pupillary data of twenty individuals engaged in a simulated Unmanned Aerial System (UAS) operation in order to understand and characterize the behavior of pupil dilation under varying task load (i.e., workload) levels. We present three metrics that can be employed as real-time indices of cognitive workload. In addition, we develop a predictive system utilizing the pupillary metrics to demonstrate cognitive context detection within simulated supervisory control of UAS. Further, we use pupillary data collected concurrently from the left and right eyes and present comparative results on the use of separate vs. combined pupillary data for detecting cognitive context.
In this paper, we demonstrate the use of eye-gaze metrics of unmanned aerial systems (UAS) operators as effective indices of their cognitive workload. Our analyses are based on an experiment where twenty participants performed pre-scripted UAS missions of three different difficulty levels by interacting with two custom-designed graphical user interfaces (GUIs) displayed side by side. First, we compute several eye-gaze metrics, both traditional eye movement metrics and newly proposed ones, and analyze their effectiveness as cognitive classifiers. Most of the eye-gaze metrics are computed by dividing the computer screen into “cells”. Then, we perform several analyses in order to select metrics for effective cognitive context classification related to our specific application; the objectives of these analyses are to (i) identify appropriate ways to divide the screen into cells; (ii) select appropriate metrics for training and classification of cognitive features; and (iii) identify a suitable classification method.
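To make the cell-based computation concrete, here is a minimal sketch (the grid size, metric names, and function are illustrative, not the paper's actual metric set) that bins gaze samples into screen cells and derives two simple quantities, per-cell dwell counts and the entropy of the cell-visit distribution:

```python
import math
from collections import Counter

def cell_metrics(gaze, screen_w, screen_h, rows, cols):
    """Assign gaze samples (x, y) to a rows-by-cols grid of cells and
    compute two simple cell-based metrics: per-cell dwell counts and the
    entropy of the cell-visit distribution (higher entropy means the gaze
    is spread more evenly over the screen)."""
    cells = [(min(int(y / screen_h * rows), rows - 1),
              min(int(x / screen_w * cols), cols - 1))
             for x, y in gaze]
    counts = Counter(cells)
    n = len(cells)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return counts, entropy
```

Choosing `rows` and `cols` is itself one of the design questions, which is objective (i) above: a grid too coarse merges distinct GUI regions, while one too fine fragments single fixations across cells.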
Some of the conventional metrics derived from gaze patterns (on computer screens) to study visual attention, engagement and fatigue are saccade counts, the nearest neighbor index (NNI) and the duration of dwells/fixations. Each of these metrics has drawbacks in modeling the behavior of gaze patterns; one such drawback stems from the fact that some portions of the screen are less important than others. This is addressed by computing the eye-gaze metrics over important areas of interest (AOI) on the screen. There are some challenges in developing accurate AOI-based metrics: firstly, the definition of an AOI is always fuzzy; secondly, it is possible that the AOI may change adaptively over time. Hence, there is a need to introduce eye-gaze metrics that are aware of the AOI in the field of view; at the same time, the new metrics should be able to automatically select the AOI based on the nature of the gazes. In this paper, we propose a novel way of computing the NNI based on continuous hidden Markov models (HMMs) that model the gazes as 2D Gaussian observations (x-y coordinates of the gaze) with the mean at the center of the AOI and a covariance related to the concentration of gazes. The proposed modeling allows us to accurately compute the NNI metric in the presence of multiple, undefined AOI on the screen and of intermittent casual gazing, which is modeled as random gazes on the screen.
The Probabilistic Multi-Hypothesis Tracker (PMHT) was developed in the early 1990s by Roy Streit and Tod Luginbuhl. Since that time many advances and improvements have been made to this elegant algorithm, whose processing cost grows only linearly as the number of targets, sensors, and clutter increases. This paper documents the many advances to the PMHT by several different contributors over the past two decades. The history continues and looks as promising as ever for this algorithm as we present the latest advancement, the Maximum Likelihood Histogram Probabilistic Multi-Hypothesis Tracker (ML-HPMHT), and the exciting results of this potential game-changer in tracking unresolved, dim targets in highly cluttered environments. This new algorithm, which we are calling the Quanta Tracking algorithm, detects and tracks, with high accuracy, targets that are unresolved in pixels or range bins.
In this paper, we address the problem of passive tracking of multiple targets with the help of images obtained from passive infrared (IR) platforms. Conventional approaches to this problem, which involve thresholding, measurement detection, data association and filtering, encounter problems because the target energy is spread across multiple cells of the IR imagery. A histogram-based probabilistic multi-hypothesis tracking (H-PMHT) approach provides an automatic means of modeling targets that are spread over multiple cells in the imaging sensor(s) by relaxing the need for hard decisions on measurement detection and data association. Further, we generalize the conventional H-PMHT by adding an extra layer of expectation-maximization (EM) iteration that yields the maximum likelihood (ML) estimate of the number of targets. With the help of simulated focal plane array (FPA) images, we demonstrate the applicability of the resulting ML-HPMHT for enumerating and tracking multiple targets.
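The H-PMHT machinery itself is beyond the scope of an abstract, but the added outer layer, a model-order search wrapped around EM, can be illustrated generically. The sketch below is emphatically not the paper's algorithm: it fits 1-D Gaussian mixtures (standing in for per-target intensity components) and selects the number of components by BIC, a penalized-likelihood surrogate for the ML criterion described above; all parameters and data are synthetic.

```python
import math, random

def em_gmm(xs, k, iters=60):
    """Fit a k-component 1-D Gaussian mixture by EM and return the final
    log-likelihood.  Means are initialized at quantiles of the data."""
    xs_sorted = sorted(xs)
    mus = [xs_sorted[int((i + 0.5) * len(xs) / k)] for i in range(k)]
    sigmas = [1.0] * k
    ws = [1.0 / k] * k

    def dens(x, w, m, s):
        return w * math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in xs:
            ps = [dens(x, w, m, s) for w, m, s in zip(ws, mus, sigmas)]
            tot = sum(ps) or 1e-300
            resp.append([p / tot for p in ps])
        # M-step: re-estimate weights, means and (floored) std devs
        for c in range(k):
            rc = [r[c] for r in resp]
            nc = sum(rc) or 1e-300
            mus[c] = sum(r * x for r, x in zip(rc, xs)) / nc
            var = sum(r * (x - mus[c]) ** 2 for r, x in zip(rc, xs)) / nc
            sigmas[c] = max(math.sqrt(var), 1e-3)
            ws[c] = nc / len(xs)
    return sum(math.log(sum(dens(x, w, m, s)
                            for w, m, s in zip(ws, mus, sigmas)) or 1e-300)
               for x in xs)

def estimate_num_targets(xs, k_max=4):
    """Outer layer: fit every candidate target count and keep the one with
    the lowest BIC (a penalized-likelihood stand-in for the ML criterion)."""
    return min(range(1, k_max + 1),
               key=lambda k: (3 * k - 1) * math.log(len(xs)) - 2 * em_gmm(xs, k))

random.seed(1)
data = ([random.gauss(0.0, 0.5) for _ in range(150)] +
        [random.gauss(6.0, 0.5) for _ in range(150)])
print(estimate_num_targets(data))
```

A plain unpenalized likelihood would always favor more components; the inner/outer structure, fitting each candidate model order fully before comparing, is the point being illustrated.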
Target tracking techniques have usually been applied to physical systems via radar, sonar or imaging modalities. But the same techniques - filtering, association, classification, track management - can be applied to non-traditional data such as one might find in fields like economics, business and national defense. In this paper we explore a particular data set. The measurements are time series collected at various sites; but other than that little is known about them. We shall refer to the data as representing the Megawatt-hour (MWH) output of various power plants located in Afghanistan. We pose such questions as:
1. Which power plants seem to have a common model?
2. Do any power plants change their models with time?
3. Can power plant behavior be predicted, and if so, how far into the future?
4. Are some of the power plants stochastically linked? That is, does an observed lack of power demand at one power plant imply a surfeit of demand elsewhere?
The observations seem well modeled as hidden Markov processes. This HMM modeling is compared to other approaches, and the tests are extended to other (albeit self-generated) data sets with similar characteristics.
Keywords: Time-series analysis, hidden Markov models, statistical similarity, weighted clustering
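Questions 1 and 2 above amount to scoring how well a plant's (quantized) output sequence is explained by a given HMM. The sketch below (toy parameters, not from the paper) computes that score, the sequence log-likelihood, with the scaled forward algorithm:

```python
import math

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm.

    pi[i]   : initial probability of state i
    A[i][j] : transition probability from state i to state j
    B[i][k] : probability of emitting symbol k while in state i
    """
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    c = sum(alpha)
    ll = math.log(c)
    alpha = [a / c for a in alpha]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
        c = sum(alpha)
        ll += math.log(c)
        alpha = [a / c for a in alpha]
    return ll

# Toy 2-state model: "low output" and "high output" regimes that persist
# (sticky transitions), each emitting a quantized MWH level 0 or 1.
pi = [0.5, 0.5]
A = [[0.9, 0.1], [0.1, 0.9]]
B = [[0.8, 0.2], [0.2, 0.8]]

runs = [0] * 10 + [1] * 10      # long regimes: consistent with the model
alt = [0, 1] * 10               # rapid alternation: poorly explained
print(forward_loglik(runs, pi, A, B), forward_loglik(alt, pi, A, B))
```

Plants whose sequences score comparably under each other's fitted models are candidates for a common model, which is one natural way to operationalize the clustering questions posed above.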