This PDF file contains the front matter associated with SPIE Proceedings Volume 9474, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and Conference Committee listing.
Multisensor Fusion, Multitarget Tracking, and Resource Management I
The problem of tracking a number of time-varying, slow-moving targets in the presence of clutter and false alarms is particularly challenging for the ground moving target indication (GMTI) application. It requires adaptive clutter cancellation techniques such as space-time adaptive processing to deal with the mainbeam clutter. In addition, GMTI radars are also used for generating synthetic aperture radar (SAR) imagery. In this paper, we analyze the performance of the joint probabilistic data association (JPDA) filter for varying coherent processing intervals (CPIs) by using experimental airborne radar data, with a view toward more efficient use of the GMTI and SAR modes of an airborne AESA radar.
An airborne EO/IR (electro-optical/infrared) camera system comprises a suite of sensors, such as narrow and wide field of view (FOV) EO sensors and a mid-wave IR sensor. EO/IR camera systems are regularly employed on military and search-and-rescue aircraft. The EO/IR system can be used to detect and identify objects rapidly in daylight and at night, often with superior performance in challenging conditions such as fog. Several algorithms exist for detecting potential targets in the bearing-elevation grid. The nonlinear filtering problem is one of estimating the kinematic parameters from bearing and elevation measurements taken from a moving platform. In this paper, we developed a complete model for the state of a target as detected by an airborne EO/IR system and simulated a typical scenario with a single target and one or two airborne sensors. We have demonstrated the ability to track the target with high precision and noted the improvement from using two sensors on a single platform or on separate platforms. The performance of the extended Kalman filter (EKF) is investigated on simulated data. Image/video data collected from an IR sensor on an airborne platform are processed using an image tracking-by-detection algorithm.
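The bearing-elevation measurement model at the heart of such an EKF can be sketched as follows. This is a generic illustration with assumed frame conventions and hypothetical function names, not the paper's actual implementation:

```python
import numpy as np

def bearing_elevation(target, platform):
    """Azimuth and elevation of the target as seen from the platform
    (generic Cartesian frame; the paper's conventions may differ)."""
    dx, dy, dz = target[:3] - platform[:3]
    rho = np.hypot(dx, dy)                      # ground-plane range
    return np.array([np.arctan2(dy, dx),        # bearing
                     np.arctan2(dz, rho)])      # elevation

def jacobian(target, platform):
    """Jacobian of the bearing/elevation measurement with respect to
    the target position, as linearized in the EKF update."""
    dx, dy, dz = target[:3] - platform[:3]
    rho2 = dx * dx + dy * dy
    rho = np.sqrt(rho2)
    r2 = rho2 + dz * dz                         # squared slant range
    return np.array([
        [-dy / rho2,            dx / rho2,            0.0],
        [-dx * dz / (rho * r2), -dy * dz / (rho * r2), rho / r2],
    ])
```

With two sensors, each platform contributes its own pair of angle measurements, which is what makes the target range observable.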
Particle filters represent the current state of the art in nonlinear, non-Gaussian filtering. They are easy to implement and have been applied in numerous domains. That said, particle filters can be impractical for problems with state dimensions greater than four if problem-specific efficiencies cannot be identified. This “curse of dimensionality” makes particle filters a computationally burdensome approach, and the associated re-sampling makes parallel processing difficult. In the past several years an alternative to particle filters, dubbed particle flow, has emerged as a (potentially) much more efficient method for solving nonlinear, non-Gaussian problems. Particle flow filtering (unlike particle filtering) is a deterministic approach; however, its implementation entails solving an under-determined system of partial differential equations that has infinitely many potential solutions. In this work we apply the filters to angles-only target motion analysis problems in order to quantify the computational gains (if any) over standard particle filtering approaches. In particular we focus on the simplest form of particle flow filter, known as the exact particle flow filter. This form assumes a Gaussian prior and likelihood function of the unknown target states and is then linearized, as is standard practice for extended Kalman filters. We implement both particle filters and particle flows and perform numerous numerical experiments for comparison.
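For a linearized Gaussian prior and likelihood, the "exact" flow has the closed form due to Daum and Huang. A minimal sketch of one measurement update, with Euler integration over the pseudo-time lambda and hypothetical names (an illustration, not the authors' implementation):

```python
import numpy as np

def exact_flow_update(particles, xbar, P, H, R, z, n_steps=100):
    """Migrate prior samples to the posterior by Euler-integrating the
    exact particle flow dx/dlam = A(lam) x + b(lam), lam in [0, 1].
    particles: (N, d) prior samples; xbar, P: prior mean/covariance;
    measurement model z = H x + v with v ~ N(0, R)."""
    x = particles.copy()
    I = np.eye(P.shape[0])
    dlam, lam = 1.0 / n_steps, 0.0
    for _ in range(n_steps):
        lam += dlam
        S = lam * H @ P @ H.T + R
        A = -0.5 * P @ H.T @ np.linalg.solve(S, H)
        b = (I + 2 * lam * A) @ ((I + lam * A) @ P @ H.T
                                 @ np.linalg.solve(R, z) + A @ xbar)
        x = x + dlam * (x @ A.T + b)    # one flow step, all particles
    return x
```

On a scalar example with prior N(0, 1), H = R = 1, and z = 2, the flowed cloud lands on the Kalman posterior N(1, 0.5) up to sampling and discretization error; no importance weights or resampling are involved.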
This work presents a novel fusion mechanism for estimating the three-dimensional trajectory of a moving target using images collected by multiple imaging sensors. The proposed projective particle filter avoids explicit target detection prior to fusion. In the projective particle filter, particles that represent the posterior density (of the target state in a high-dimensional space) are projected onto the lower-dimensional observation space. Measurements are generated directly in the observation space (image plane) and a marginal (sensor) likelihood is computed. The particle states and their weights are updated using the joint likelihood computed from all the sensors. The 3D state estimate of the target (system track) is then generated from the states of the particles. This approach is similar to track-before-detect particle filters, which are known to perform well in tracking dim and stealthy targets in image collections. Our approach extends the track-before-detect approach to 3D tracking using the projective particle filter. The performance of this measurement-level fusion method is compared with that of a track-level fusion algorithm using the projective particle filter. In the track-level fusion algorithm, the 2D sensor tracks are generated separately and transmitted to a fusion center, where they are treated as measurements to the state estimator. The 2D sensor tracks are then fused to reconstruct the system track. A realistic synthetic scenario with a boosting target was generated and used to study the performance of the fusion mechanisms.
We present a recursion of the probability of target visibility and its applications to the analysis of track life and termination in the context of the Global Nearest Neighbour (GNN) approach and the Probability Hypothesis Density (PHD) filter. In the presence of uncertainties brought about by clutter, decisions to retain a track, terminate it or initialise a new track are based on probability, rather than on a distance criterion or estimation error. The visibility concept is introduced into a conventional data-association-oriented multitarget tracker, the GNN, and a random-finite-set-based tracker, the PHD filter, to take into account instances when targets become invisible or occluded by obstacles. We employ the natural logarithm of the Dynamic Error Spectrum to assess the performance of the trackers with and without the probability of visibility incorporated. Simulation results show that the performance of the GNN tracker with the visibility concept incorporated is significantly enhanced.
Multisensor Fusion, Multitarget Tracking, and Resource Management II
In this paper, an approach to bias estimation in the presence of measurement association uncertainty using
common targets of opportunity is developed. Data association is carried out before the estimation of sensor angle
measurement biases. Consequently, the quality of data association is critical to the overall tracking performance.
Data association becomes especially challenging if the sensors are passive. Mathematically, the problem can
be formulated as a multidimensional optimization problem, where the objective is to maximize the generalized
likelihood that the associated measurements correspond to common targets, based on target locations and sensor
bias estimates. Applying gating techniques significantly reduces the size of this problem. The association
likelihoods are evaluated using an exhaustive search after which an acceptance test is applied to each solution
in order to obtain the optimal (correct) solution. We demonstrate the merits of this approach by applying it to
a simulated tracking system, which consists of two satellites tracking a ballistic target. We assume the sensors
are synchronized, their locations are known, and we estimate their orientation biases together with the unknown
target locations.
The smooth variable structure filter (SVSF) is a state and parameter estimation strategy based on sliding mode concepts. It has seen significant development and research activity in recent years. In an effort to improve upon the numerical stability of the SVSF, a square-root formulation is derived. The square-root SVSF is based on Potter’s algorithm. The proposed formulation is computationally more efficient and reduces the risk of failure due to numerical instability. The new strategy is applied to target tracking scenarios for the purposes of state estimation. The results are compared with those of the popular Kalman filter.
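For reference, Potter's square-root algorithm, on which the square-root SVSF is based, carries the covariance as a factor S with P = S Sᵀ and updates the factor directly. A minimal sketch for a single scalar measurement (illustrative only, not the authors' SVSF code):

```python
import numpy as np

def potter_update(x, S, h, z, r):
    """Potter square-root measurement update for one scalar measurement
    z = h @ x + v with Var(v) = r. Since P = S @ S.T is never formed,
    the covariance cannot lose positive semidefiniteness to round-off."""
    phi = S.T @ h                     # projected factor, an n-vector
    alpha = phi @ phi + r             # innovation variance h P h' + r
    K = (S @ phi) / alpha             # Kalman gain
    gamma = 1.0 / (alpha + np.sqrt(r * alpha))
    x_new = x + K * (z - h @ x)       # state update
    S_new = S - gamma * np.outer(S @ phi, phi)   # factor update
    return x_new, S_new
```

Vector measurements are handled by decorrelating the measurement noise and processing the components one at a time; the reconstructed P = S Sᵀ matches the conventional Kalman update.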
The smooth variable structure filter (SVSF) has seen significant development and research activity in recent years. It is based on sliding mode concepts, utilizing a switching gain that brings an inherent amount of stability to the estimation process. In this paper, the SVSF is reformulated to present a two-pass smoother based on the SVSF gain. The proposed method is applied to an aerospace flight surface actuator, and the results are compared with those of the popular Kalman-based two-pass smoother.
We develop a method for autonomous management of multiple heterogeneous sensors mounted on unmanned aerial vehicles (UAVs) for multitarget tracking. The main contribution of the paper is the incorporation of feedback received from intelligence assets (humans) on priorities assigned to specific targets. We formulate the problem as a partially observable Markov decision process (POMDP), where information received from assets is captured as a penalty on the cost function. The resulting constrained optimization problem is solved using an augmented Lagrangian method. Information obtained from sensors and assets is fused centrally for guiding the UAVs to track these targets.
We consider the track-to-track association problem. This problem is often a key ingredient when seeking to integrate data from multiple sensors. We propose a probabilistic approach, inspired by the joint probabilistic data association (JPDA) approach used in the data association problem. To solve the proposed model we adapt a recent deterministic polynomial-time approximation algorithm. We also consider the setting in which one or more sensors may contain biases.
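For intuition about the problem being solved, a far simpler baseline than the paper's JPDA-inspired probabilistic model is hard assignment on Mahalanobis costs. The sketch below is hypothetical (names invented, cross-covariances between tracks neglected) and brute-force, purely to illustrate the track-to-track association setting:

```python
import itertools
import numpy as np

def t2t_association(tracks_a, covs_a, tracks_b, covs_b, gate=9.0):
    """Brute-force track-to-track association baseline: score every
    pairing by the Mahalanobis distance between the two state
    estimates and keep the permutation with minimum total cost.
    Pairs whose distance exceeds `gate` are left unassociated."""
    n, m = len(tracks_a), len(tracks_b)
    cost = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            d = tracks_a[i] - tracks_b[j]
            S = covs_a[i] + covs_b[j]       # cross-covariance neglected
            cost[i, j] = d @ np.linalg.solve(S, d)
    best, best_cost = None, np.inf
    for perm in itertools.permutations(range(m), n):  # assumes n <= m
        c = sum(cost[i, j] for i, j in enumerate(perm))
        if c < best_cost:
            best, best_cost = perm, c
    return [(i, j) for i, j in enumerate(best) if cost[i, j] <= gate]
```

The permutation search is exponential in the number of tracks, which is exactly why polynomial-time (approximation) algorithms such as the one adapted in the paper matter.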
Information Fusion Methodologies and Applications I
Previous research has produced CPHD filters that can detect and track multiple targets in unknown, dynamically changing clutter. The first such filters employed Poisson clutter generators and, as a result, were combinatorially complex. Recent research has shown that replacing the Poisson clutter generators with Bernoulli clutter generators results in computationally tractable CPHD filters. However, Bernoulli clutter generators are insufficiently complex to model real-world clutter with high accuracy, because they are statistically first-degree. This paper addresses the derivation and implementation of CPHD filters when first-degree Bernoulli clutter generators are replaced by second-degree quadratic clutter generators. Because these filters are combinatorially second-order, they are more easily approximated. They can also be implemented in exact closed form using beta-Gaussian mixture (BGM) or Dirichlet-Gaussian mixture (DGM) techniques.
Single- and multi-target tracking are both typically based on strong independence assumptions regarding both the target states and sensor measurements. In particular, both are theoretically based on the hidden Markov chain (HMC) model. That is, the target process is a Markov chain that is observed by an independent observation process. Since HMC assumptions are invalid in many practical applications, the pairwise Markov chain (PMC) model has been proposed as a way to weaken those assumptions. In this paper it is shown that the PMC model can be directly generalized to multitarget problems. Since the resulting tracking filters are computationally intractable, the paper investigates generalizations of the cardinalized probability hypothesis density (CPHD) filter to applications with PMC models.
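The contrast between the two models can be written compactly (generic notation, not necessarily the paper's):

```latex
% HMC: the target state x_k is Markov and each measurement z_k
% depends only on the current state x_k:
p(x_{0:k}, z_{1:k}) \;=\; p(x_0)\prod_{i=1}^{k} p(x_i \mid x_{i-1})\, p(z_i \mid x_i)
% PMC: only the *pair* (x_k, z_k) is assumed Markov, a strictly
% larger model class that relaxes both independence assumptions:
p(x_{0:k}, z_{0:k}) \;=\; p(x_0, z_0)\prod_{i=1}^{k} p(x_i, z_i \mid x_{i-1}, z_{i-1})
```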
The paper presents a theoretical approach to the multiagent fusion of multitarget densities based on the information-theoretic concept of the Kullback-Leibler Average (KLA). In particular, it is shown how the KLA paradigm is inherently immune to double counting of data. Further, it is shown how consensus can effectively be adopted in order to perform the KLA fusion of multitarget densities in a scalable way over a peer-to-peer (i.e., without a coordination center) sensor network. When the multitarget information available in each node can be expressed as a (possibly Cardinalized) Probability Hypothesis Density (PHD), application of the proposed KLA fusion rule leads to a consensus (C)PHD filter which can be successfully exploited for distributed multitarget tracking over a peer-to-peer sensor network.
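For the special case of Gaussian densities, the weighted KLA has a well-known closed form: the information matrices and information vectors are averaged, which coincides with covariance intersection. A minimal sketch (hypothetical names; the paper's multitarget version operates on PHDs, not single Gaussians):

```python
import numpy as np

def kla_gaussian(means, covs, weights):
    """Weighted Kullback-Leibler average of Gaussian densities:
    average the information matrices P_i^{-1} and information
    vectors P_i^{-1} m_i with the given weights (sum to 1)."""
    info = sum(w * np.linalg.inv(P) for w, P in zip(weights, covs))
    vec = sum(w * np.linalg.inv(P) @ m
              for w, m, P in zip(weights, means, covs))
    P_fused = np.linalg.inv(info)
    return P_fused @ vec, P_fused       # fused mean, fused covariance
```

Because the average is taken in the information domain rather than by multiplying likelihoods, fusing the same density twice returns that density unchanged, which is the closed-form face of the immunity to double counting noted above.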
We develop a distributed cardinalized probability hypothesis density (CPHD) filter that can be deployed in a sensor network to process the measurements of multiple sensors that make conditionally independent measurements. In contrast to the majority of the related work, which involves performing local filter updates and then exchanging data to fuse the local intensity functions and cardinality distributions, we strive to approximate the update step that a centralized multi-sensor CPHD filter would perform.
Automatic Target Recognition (ATR) algorithm performance is highly dependent on the sensing conditions under which the input data are collected. Open-loop fly-bys often produce poor results due to less-than-ideal measurement conditions. In addition, ATR algorithms must be extremely complicated to handle the diverse range of inputs, with a resulting reduction in overall performance and increase in complexity. Our approach, closed-loop ATR (CL-ATR), focuses on improving the quality of information input to the ATR algorithms by optimizing motion, sensor settings and team (vehicle-vehicle-human) collaboration to dramatically improve classification accuracy. By managing the data collection guided by predicted ATR performance gain, we increase the information content of the data and thus dramatically improve ATR performance with existing ATR algorithms. CL-ATR has two major functions. First, an ATR utility function represents the performance sensitivity of ATR-produced classification labels as a function of parameters that correlate to vehicle/sensor states. This utility function is developed off-line and is often available from the original ATR study as a confusion matrix, or it can be derived through simulation without direct access to the inner workings of the ATR algorithm. The utility function is inserted into our CL-ATR framework to autonomously control the vehicle/sensor. Second, an on-board planner maps the utility function into vehicle position and sensor collection plans. Because we only require the utility function on-board, we can deploy any ATR algorithm onto an unmanned aerial vehicle (UAV) platform, no matter how complex. This pairing of ATR performance profiles with vehicle/sensor controls creates a unique and powerful active perception behavior.
This paper describes recent developments in the random finite set (RFS) paradigm in multi-target tracking. Over the last decade the Probability Hypothesis Density (PHD) filter has become synonymous with the RFS approach. As a result, the PHD filter is often wrongly used as a performance benchmark for the RFS approach. Since there is a suite of RFS-based multi-target tracking algorithms, benchmarking the tracking performance of the RFS approach using the PHD filter, the cheapest of these, is misleading. Such benchmarking should be performed with more sophisticated RFS algorithms. In this paper we outline high-performance RFS-based multi-target trackers, such as the Generalized Labeled Multi-Bernoulli filter and a number of efficient approximations, and discuss extensions and applications of these filters. Applications to space situational awareness are discussed.
Information Fusion Methodologies and Applications II
We prove a theorem that guarantees the existence of a particle flow corresponding to Bayes’ rule, assuming certain regularity conditions (smooth and nowhere vanishing probability densities). This theory applies to particle flows to compute Bayes’ rule for nonlinear filters, Bayesian decisions and learning as well as transport. The particle flow algorithms reduce computational complexity by orders of magnitude compared with standard Markov chain Monte Carlo (MCMC) algorithms that achieve the same accuracy for high dimensional problems.
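The flows in question are typically induced by a log-homotopy that deforms the prior into the posterior as a pseudo-time parameter runs from 0 to 1 (generic notation; the theorem's exact regularity conditions are stated in the paper):

```latex
% Log-homotopy from the prior g to the posterior g h / K, with
% likelihood h and normalization K(\lambda):
\log p(x,\lambda) \;=\; \log g(x) + \lambda \log h(x) - \log K(\lambda),
\qquad \lambda \in [0,1]
% A particle flow dx/d\lambda = f(x,\lambda) realizes Bayes' rule if
% it satisfies the associated transport (continuity) equation:
\frac{\partial p(x,\lambda)}{\partial \lambda}
  \;=\; -\,\nabla\!\cdot\!\big(p(x,\lambda)\, f(x,\lambda)\big)
```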
We describe a baker’s dozen of new particle flows to compute Bayes’ rule for nonlinear filters, Bayesian decisions and learning as well as transport. Several of these new flows were inspired by transport theory, but others were inspired by physics or statistics or Markov chain Monte Carlo methods.
Information Fusion Methodologies and Applications III
This paper introduces a method to integrate target behavior into the multiple hypothesis tracker (MHT) likelihood ratio. In particular, a periodic track appraisal based on behavior is introduced that uses elementary topological data analysis coupled with basic machine learning techniques. The track appraisal adjusts the traditional kinematic data association likelihood (i.e., track score) using an established formulation for classification-aided data association. The proposed method is tested and demonstrated on synthetic vehicular data representing an urban traffic scene generated by the Simulation of Urban Mobility package. The vehicles in the scene exhibit different driving behaviors. The proposed method distinguishes those behaviors and shows improved data association decisions relative to a conventional, kinematic MHT.
Oculus Sea is a complete solution for maritime surveillance and communications at both the Local and the Central Command and Control level. It includes a robust and independent track fusion service whose main functions are: 1) interaction with the user, to suggest the fusion of two or more tracks, confirm the Track ID and Vessel Metadata created for the fused track, and suggest the de-association of two tracks; 2) fusion of same-vessel tracks arriving simultaneously from multiple radar sensors, featuring track association, fusion of associated tracks to produce a more accurate track, and multiple tracking filters and fusion algorithms; 3) a unique Track ID generator for each fused track; and 4) a track dissemination service. The Oculus Sea Track Fusion Service adopts a system architecture in which each sensor is associated with a Kalman estimator/tracker that obtains an estimate of the state vector and its respective error covariance matrix. Finally, at the fusion center, association and track state estimation fusion are carried out. The expected benefits of this system include multi-sensor information fusion, enhanced spatial resolution, and improved target detection.
OCULUS Sea™ is a C2 platform for Integrated Maritime Surveillance. The platform consists of “loosely coupled” National/Regional and Local C2 Centers which are “centrally governed”. “Loosely coupled” because the C2 Centers are located separately and share their Situational Pictures via a Message Oriented Middleware, while preserving their administrative and operational autonomy. “Centrally governed” because a central governance mechanism at the NCC registers, authenticates and authorizes Regional and Local C2 centers into the OCULUS Sea network. From an operational point of view, OCULUS Sea has been tested under realistic conditions during the PERSEUS [3] Eastern Campaign and has been positively evaluated by Coast Guard officers from Spain and Greece. From a research and development point of view, OCULUS Sea can act as a test bed for validating any technology development in this domain in the near future.
An algorithm is proposed here that overcomes the problem of the non-exhaustive set of common measurements from sensors of different kinds. In the presence of a suite of heterogeneous sensors, the data fusion process has to deal with managing different information that is generally not directly comparable. The analysis of the mathematical model is carried out considering a data fusion system comprising a radar and an Infrared Search and Track (IRST) sensor, where the range measurement is provided by the radar only. Simulation results demonstrate the effectiveness of the algorithm with regard to the fusion process, tracking, and the correctness of association among tracks from different sensors. A comparison with a known approach from the literature on the fusion equation is also performed.
In this contribution, we propose to improve the grid map occupancy estimation method developed so far based on belief function modeling and the classical Dempster’s rule of combination. A grid map offers a useful representation of the perceived world for mobile robotics navigation. It will play a major role in the safety (obstacle avoidance) of the next generations of terrestrial vehicles, as well as in future autonomous navigation systems. In a grid map, the occupancy of each cell, representing a small piece of the surrounding area of the robot, must first be estimated from sensor measurements (typically LIDAR or camera), and then it must also be classified into different classes in order to obtain a complete and precise perception of the dynamic environment in which the robot moves. So far, the estimation and the grid map updating have been done using fusion techniques based on the probabilistic framework, or on the classical belief function framework with an inverse model of the sensors, mainly because the latter offers interesting management of uncertainties when the quality of available information is low and the sources of information appear to conflict. To improve the performance of the grid map estimation, we propose in this paper to replace Dempster’s rule of combination with the PCR6 rule (Proportional Conflict Redistribution rule #6) proposed in Dezert-Smarandache Theory (DSmT). As an illustrating scenario, we consider a platform moving in a dynamic area and we compare our new realistic simulation results (based on a LIDAR sensor) with those obtained by the probabilistic and the classical belief-based approaches.
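For two sources, PCR6 coincides with PCR5: the conflicting mass of each pair of disjoint focal elements is redistributed back to those two elements, in proportion to the masses that produced the conflict. A minimal sketch on a two-class occupancy frame (a hypothetical illustration, not the authors' implementation):

```python
from itertools import product

def pcr6_two_sources(m1, m2):
    """Two-source PCR6 combination of basic belief assignments.
    Focal elements are frozensets over the frame of discernment.
    Non-conflicting products go to the intersection (conjunctive
    rule); each conflicting product m1(X)*m2(Y) with X & Y empty is
    split back to X and Y proportionally to m1(X) and m2(Y)."""
    out = {}
    for (X, a), (Y, b) in product(m1.items(), m2.items()):
        inter = X & Y
        if inter:
            out[inter] = out.get(inter, 0.0) + a * b
        else:   # proportional conflict redistribution
            out[X] = out.get(X, 0.0) + a * b * a / (a + b)
            out[Y] = out.get(Y, 0.0) + a * b * b / (a + b)
    return out
```

Unlike Dempster's rule, no conflicting mass is discarded by renormalization, so the combined assignment still sums to one even under heavy conflict between the occupancy sources.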
With the plethora of information, there are many aspects to contested environments, such as the protection of information, network privacy, and restricted observational and entry access. In this paper, we review and contrast perspectives on the challenges and opportunities for future developments in contested environments. The ability to operate in a contested environment would aid societal operations in highly congested areas with limited bandwidth, such as transportation, in the absence of communication and observations after a natural disaster, or in planning for situations in which freedom of movement is restricted. Different perspectives were presented, but common themes included (1) domain: targets and sensors; (2) network: communications, control, and social networks; and (3) user: human interaction and analytics. The paper serves as a summary and organization of the panel discussion with regard to future research needs in contested environments.
Assessment of multi-intelligence fusion techniques includes the credibility of algorithm performance, the quality of results against mission needs, and usability in a work-domain context. Situation awareness (SAW) brings together low-level information fusion (tracking and identification), high-level information fusion (threat and scenario-based assessment), and information fusion level 5 user refinement (physical, cognitive, and information tasks). To measure SAW, we discuss the SAGAT (Situational Awareness Global Assessment Technique) for a multi-intelligence fusion (MIF) system assessment that focuses on the advantages of MIF over single intelligence sources. Building on the NASA TLX (Task Load Index), SAGAT probes, SART (Situational Awareness Rating Technique) questionnaires, and CDM (Critical Decision Method) decision points, we highlight these tools for use in a Multi-Intelligence Critical Rating Assessment of Fusion Techniques (MiCRAFT). The focus is to measure user refinement of a situation over the information fusion quality-of-service (QoS) metrics: timeliness, accuracy, confidence, workload (cost), and attention (throughput). A key component of any user analysis includes correlation, association, and summarization of data, so we also seek measures of product quality and QuEST of information. The notion of product quality built from multi-intelligence tools is typically subjective, and it needs to be aligned with objective machine metrics.
A classification system with M possible output labels (or decisions) will have M(M-1) possible errors. The Receiver Operating Characteristic (ROC) manifold was created to quantify all of these errors. When multiple classification systems are fused, the assumption of independence is usually made so that the individual ROC manifolds for each system can be combined into one ROC manifold. This paper investigates the label fusion (also called decision fusion) of multiple classification system families (CSFs) that have the same number of output labels. Boolean rules do not exist for multiple symbols; thus, we derive Boolean-like rules, as well as other rules, that yield label fusion rules. An M-label system will have M! consistent rules. The formula for the resultant ROC manifold of the fused classification system family, which incorporates the individual classification system families, is derived. Specifically, given a label rule and two classification system families, the ROC manifold for the fused family is produced. We generate the formula for the Boolean-like AND rule and give the resultant ROC manifold for the fused CSF. We show how the set of permutations of the label set is used to generate all of the consistent rules and how the permutation matrix is incorporated into a single formula for the ROC manifold. Examples are given that demonstrate how each formula is used.
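The counting claims above, M(M-1) error types and M! consistent rules generated by permutations of the label set, can be made concrete with a short sketch. The representation below (labels as strings, rules as 0/1 permutation matrices) is our own illustration, not the paper's notation.

```python
# For an M-label system: M(M-1) off-diagonal (truth, decision) error
# types, and M! consistent label-fusion rules, one per permutation of
# the label set, each representable as a permutation matrix.
from itertools import permutations

def error_types(labels):
    """All (truth, decision) pairs with truth != decision."""
    return [(t, d) for t in labels for d in labels if t != d]

def consistent_rules(labels):
    """One permutation matrix per consistent rule (M! of them)."""
    n = len(labels)
    rules = []
    for perm in permutations(range(n)):
        P = [[1 if perm[i] == j else 0 for j in range(n)] for i in range(n)]
        rules.append(P)
    return rules

labels = ["A", "B", "C"]                      # M = 3
assert len(error_types(labels)) == 3 * 2      # M(M-1) = 6 error types
assert len(consistent_rules(labels)) == 6     # M! = 6 consistent rules
```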
This paper considers a distributed filtering problem over a multi-sensor network in which the correlation of local estimation errors is unknown. Recently, this problem was studied by G. Battistelli [1], who developed a data fusion rule that calculates the weighted Kullback-Leibler average of local estimates with consensus algorithms for distributed averaging, where the weighted Kullback-Leibler average is defined as the averaged probability density function that minimizes the sum of weighted Kullback-Leibler divergences from the original probability density functions. In this paper, we extend those earlier results by relaxing the prior assumption that all sensors share the same degree of confidence. Furthermore, a novel consensus-based distributed weighting-coefficient selection scheme is developed to improve the fusion accuracy, in which the weight associated with each sensor is adjusted based on the local estimation error covariance and the covariances received from neighboring sensors, so that larger weight values are assigned to sensors with a higher degree of confidence. Finally, a Monte-Carlo simulation with a 2D tracking system validates the effectiveness of the proposed distributed filtering algorithm.
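For Gaussian local posteriors, the weighted Kullback-Leibler average has a closed form: fuse in information (inverse-covariance) space, the same algebra as covariance intersection. The sketch below is our own reference implementation of that closed form with toy numbers, not the authors' code or their weight-selection scheme.

```python
# Weighted KL average of Gaussians N(means[i], covs[i]): sum the
# weighted information matrices and information vectors, then invert.
import numpy as np

def kl_average(means, covs, weights):
    infos = [w * np.linalg.inv(P) for w, P in zip(weights, covs)]
    info_vecs = [w * np.linalg.inv(P) @ m
                 for w, P, m in zip(weights, covs, means)]
    P_fused = np.linalg.inv(sum(infos))      # fused covariance
    x_fused = P_fused @ sum(info_vecs)       # fused mean
    return x_fused, P_fused

# Two local estimates of a 2D state, equal confidence weights.
m1, P1 = np.array([1.0, 0.0]), np.eye(2)
m2, P2 = np.array([0.0, 1.0]), 2 * np.eye(2)
x, P = kl_average([m1, m2], [P1, P2], [0.5, 0.5])
```

Note how the tighter estimate (smaller covariance) pulls the fused mean toward itself, which is exactly the behavior the confidence-dependent weights in the paper exploit.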
This paper contains preliminary steps in demonstrating how Dempster-Shafer theory can be placed into the framework of category theory. In the Dempster-Shafer setting, the elements of the base set of a probability space are typically subsets of some set. Consequently, the elements of the corresponding sigma-algebra are not subsets of a set but rather subsets of subsets of a set. A probability function, in this case, no longer has the classical meaning. This situation lends itself to the more general notions of inner and outer measures, which Shafer calls belief and plausibility, respectively. The categorical approach attempts to unify classical and non-classical concepts into a single setting, so that, depending on the nature of the stochastic problem at hand, the general framework may be specialized appropriately to attack the problem.
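The inner/outer-measure pairing the abstract refers to is easy to exhibit on a finite frame. The definitions below are the standard ones; the frame and the mass assignment are our own toy example.

```python
# Belief and plausibility from a mass function over subsets of a frame:
# Bel is an inner measure (mass fully inside A), Pl an outer measure
# (mass touching A), so Bel(A) <= Pl(A) always.
def belief(mass, A):
    """Bel(A) = sum of mass over focal sets contained in A."""
    return sum(m for B, m in mass.items() if B <= A)

def plausibility(mass, A):
    """Pl(A) = sum of mass over focal sets intersecting A."""
    return sum(m for B, m in mass.items() if B & A)

# Frame {a, b, c}; masses on focal elements sum to 1.
mass = {frozenset("a"): 0.4, frozenset("ab"): 0.3, frozenset("abc"): 0.3}
A = frozenset("ab")
assert abs(belief(mass, A) - 0.7) < 1e-9        # {a} and {a,b} lie in A
assert abs(plausibility(mass, A) - 1.0) < 1e-9  # every focal set meets A
```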
Signal and Image Processing, and Information Fusion Applications I
Object representation is fundamental to Automated Target Recognition (ATR). Many ATR approaches choose a basis, such as a wavelet or Fourier basis, to represent the target. Recently, advancements in image and signal processing have shown that object recognition can be improved if, rather than assuming a basis, a database of training examples is used to learn a representation. We discuss learning representations using nonparametric Bayesian topic models and demonstrate how to integrate information from other sources to improve ATR. We apply the method to EO and IR information integration for vehicle target identification and show that the learned representation of the joint EO and IR information improves target identification by 4%. Furthermore, we demonstrate that we can integrate text and imagery data to direct the representation toward mission-specific tasks and improve performance by 8%. Finally, we illustrate integrating graphical models into representation learning to improve performance by 2%.
Historically, visual search experiments used artificial stimuli (simple shapes) as targets and distractors, arranged in an imaginary array of cells on a blank background. Little research on search behavior has been conducted with naturalistic stimuli in a frequency-domain framework. With the common metric provided by Fourier analysis, it is possible to compare the effects of various frequency-domain components on search efficiency [1]. In the current study, we experimentally manipulated the spectral content of target and distractor (background) cells filled with spatially filtered segments of real-life scenes. Our experimental design included two types of spatial filters, orientation (some frequency overlap between target and distractor) and spatial frequency (no overlap), and uniform-distractor (target and distractors filtered similarly) and mixed-distractor (only half the distractors filtered like the target) conditions. Generally, observers found the target more quickly and were more confident in their performance in the mixed condition. Observers were faster, more accurate, and more confident in the spatial-frequency filter condition than in the orientation filter condition. Overall, observers spent less time (fixation duration) and effort (fixation frequency) examining dissimilar distractors. The effect on the fixation-frequency measure was magnified when the spatial-frequency filter was used.
Image fusion, currently an active research topic in infrared computer vision, has been developed using a variety of methods. Traditional image fusion algorithms tend to introduce problems such as data-storage shortages and increased computational complexity. Compressed sensing (CS) uses sparse sampling, without requiring prior knowledge, to reconstruct the image faithfully, which reduces the cost and complexity of image processing. In this paper, an advanced compressed-sensing image fusion algorithm based on the non-subsampled contourlet transform (NSCT) is proposed. NSCT provides better sparsity than the wavelet transform in image representation. Through the NSCT decomposition, the low-frequency and high-frequency coefficients are obtained separately. For the fusion of the low-frequency coefficients of the infrared and visible images, an adaptive regional energy weighting rule is used; thus only the high-frequency coefficients are compressively measured. Here we use sparse representation and random projection to obtain the measurements of the high-frequency coefficients; afterwards, the coefficients of each image block are fused via the absolute-maximum selection rule and/or the regional standard-deviation rule. To reconstruct the compressive sampling results, a gradient-based iterative algorithm and the total variation (TV) method are employed to recover the high-frequency coefficients. Finally, the fused image is recovered by the inverse NSCT. Both the visual effects and the numerical results of our experiments indicate that the presented approach achieves much higher image fusion quality, accelerates the calculations, enhances various targets, and extracts more useful information.
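The two coefficient-fusion rules named in the abstract can be sketched in a few lines. The arrays below are stand-ins for NSCT high-frequency subbands of the infrared and visible images (our own toy data, not the paper's).

```python
# Absolute-maximum selection keeps, per coefficient, whichever image has
# the larger magnitude; the regional standard-deviation rule picks the
# whole block from the image whose region is more "active".
import numpy as np

def fuse_abs_max(c_ir, c_vis):
    """Per-coefficient absolute-maximum selection."""
    return np.where(np.abs(c_ir) >= np.abs(c_vis), c_ir, c_vis)

def fuse_regional_std(c_ir, c_vis):
    """Block-level choice by larger regional standard deviation."""
    return c_ir if np.std(c_ir) >= np.std(c_vis) else c_vis

ir  = np.array([[ 3.0, -1.0], [0.5, -4.0]])
vis = np.array([[-2.0,  2.5], [1.0,  0.5]])
fused = fuse_abs_max(ir, vis)   # mixes coefficients from both subbands
```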
A unified model-based approach to ATR that uses 3D models to control detection, segmentation, and classification is described. Objects are modeled by rectangular boxes whose dimensions are Gaussian random variables. A fast predictor estimates the size and shape of expected objects in the image, which controls the detection and segmentation algorithms. Segmentation fits oriented rectangles (length x width at pose) to object-like regions detected using a multi-level thresholding/region-tracking approach. Detections are classified by comparing measured to predicted region length and width in the pose direction. The method is fast and requires only a coarse characterization of objects/object classes.
Signal and Image Processing, and Information Fusion Applications II
Reliable, low-cost, and simple methods for assessing signature properties for military purposes are very important. In this paper we present such an approach, which uses human observers in a search-by-photo assessment of the signature properties of generic test targets. The method logs a large number of detection times for targets recorded against relevant terrain backgrounds. The detection times were harvested by having human observers search for targets in scene images shown on a high-definition PC screen. All targets were identically located in each "search image", allowing relative comparisons (and not just rank ordering) of targets. To avoid biased detections, each observer searched for only one target per scene. Statistical analyses were carried out on the detection-time data: analysis of variance when the detection-time distributions for all targets satisfied normality, and non-parametric tests, such as Wilcoxon's rank test, otherwise. The new methodology allows assessment of signature properties in a reproducible, rapid, and reliable setting. Such assessments are very complex, as they must sort out what is relevant in a signature test without losing information of value. We believe that choosing detection time as the primary variable for comparing signature properties allows a careful and necessary inspection of observer data, as the variable is continuous rather than discrete. Our method thus stands in opposition to approaches based on detection by successive stepwise reductions in distance to target, or on probability of detection.
The performance of face recognition can be improved using information fusion of multimodal images and/or multiple algorithms. When multimodal face images are available, cross-modal recognition is meaningful for security and surveillance applications. For example, the probe face may be a thermal image (especially at nighttime), while only visible face images are available in the gallery database. Matching a thermal probe face onto the visible gallery faces requires cross-modal matching approaches. A few such studies were implemented in facial-feature space with medium recognition performance. In this paper, we propose a cross-modal recognition approach in which multimodal faces are cross-matched in feature space and the recognition performance is enhanced with stereo fusion at the image, feature, and/or score level. In the proposed scenario, there are two cameras for stereo imaging, two face imagers (visible and thermal) in each camera, and three recognition algorithms (circular Gaussian filter, face pattern byte, linear discriminant analysis). A score vector is formed from the three cross-matched face scores produced by the aforementioned algorithms. A classifier (e.g., k-nearest neighbor, support vector machine, binomial logistic regression [BLR]) is trained and then tested with the score vectors using 10-fold cross-validation. The proposed approach was validated with a multispectral stereo face dataset from 105 subjects. Our experiments show very promising results: ACR (accuracy rate) = 97.84% and FAR (false accept rate) = 0.84% when cross-matching the fused thermal faces onto the fused visible faces using the three face scores and the BLR classifier.
Face images are an important source of information for biometric recognition and intelligence gathering. While face recognition research has made significant progress over the past few decades, recognition of faces at extended ranges is still highly problematic. Recognition of a low-resolution probe face image from a gallery database, typically containing high-resolution facial imagery, leads to lower performance than traditional face recognition techniques. Learning- and super-resolution-based approaches have been proposed to improve face recognition at extended ranges; however, the resolution threshold for face recognition has not been examined extensively. Establishing a threshold resolution corresponding to the theoretical and empirical limitations of low-resolution face recognition will allow algorithm developers to avoid focusing on improving performance where no distinguishable information for identification exists in the acquired signal. This work examines the intrinsic dimensionality of facial signatures and seeks to estimate a lower bound for the size of a face image required for recognition. We estimate a lower bound for face signatures in the visible and thermal spectra by conducting eigenanalysis using principal component analysis (PCA) (i.e., the eigenfaces approach). We seek to estimate the intrinsic dimensionality of facial signatures, in terms of reconstruction error, by maximizing the amount of variance retained in the reconstructed dataset while minimizing the number of reconstruction components. Extending this approach, we also examine the identification error to estimate the dimensionality lower bound for low-resolution to high-resolution (LR-to-HR) face recognition performance. Two multimodal face datasets are used in this study to evaluate the effects of dataset size and diversity on the underlying intrinsic dimensionality: 1) the 50-subject NVESD face dataset (containing visible, MWIR, and LWIR face imagery) and 2) the 119-subject WSRI face dataset (containing visible and MWIR face imagery).
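The variance-retention criterion described above can be sketched on synthetic data (the NVESD/WSRI imagery is not reproduced here): count the PCA components needed to retain a target fraction of the total variance, which serves as an estimate of intrinsic dimensionality.

```python
# PCA via SVD of the centered data; singular values squared are the
# (unnormalized) per-component variances.
import numpy as np

def components_for_variance(X, target=0.95):
    """Smallest k whose top-k PCA components retain `target` variance."""
    Xc = X - X.mean(axis=0)                     # center the samples
    s = np.linalg.svd(Xc, compute_uv=False)     # singular values
    ratio = np.cumsum(s ** 2) / np.sum(s ** 2)  # cumulative variance
    return int(np.searchsorted(ratio, target) + 1)

rng = np.random.default_rng(0)
# 100 samples in 20-D that actually live near a 3-D subspace.
basis = rng.normal(size=(3, 20))
X = rng.normal(size=(100, 3)) @ basis + 0.01 * rng.normal(size=(100, 20))
k = components_for_variance(X, 0.95)
assert k <= 5    # intrinsic dimensionality is about 3, well under 20
```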
One of the main disadvantages of using commercial broadcasts in a Passive Bistatic Radar (PBR) system is the limited range resolution. Using multiple broadcast channels to improve the radar performance has been offered as a solution to this problem; however, detection performance suffers from the side-lobes that the matched filter creates when multiple channels are used. In this article, we introduce a deconvolution algorithm to suppress the side-lobes. The two-dimensional matched-filter output of a PBR is analyzed as a deconvolution problem. The deconvolution algorithm makes successive projections onto the hyperplanes representing the time delay of a target. The resulting iterative deconvolution algorithm is globally convergent because all constraint sets are closed and convex. Simulation results for an FM-based PBR system are presented.
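The "successive projections onto hyperplanes" idea is a Kaczmarz-style iteration, sketched generically below. The linear system here is a toy, not a PBR matched-filter model; it only illustrates why projecting onto closed, convex sets converges.

```python
# Each hyperplane {y : a_i . y = b_i} is closed and convex, so cycling
# through orthogonal projections converges for a consistent system.
import numpy as np

def project_onto_hyperplanes(A, b, iters=200):
    """Iteratively project x onto each hyperplane a_i . x = b_i."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        for a_i, b_i in zip(A, b):
            # orthogonal projection of x onto the i-th hyperplane
            x = x + (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([5.0, 10.0])
x = project_onto_hyperplanes(A, b)
assert np.allclose(A @ x, b, atol=1e-6)   # converges to the solution
```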
The ability to accurately and rapidly determine the precise location of enemy fire would be a substantial capability enhancement for the dismounted soldier. Acoustic gunshot detection systems can provide an approximate bearing, but it is desirable to know the location (direction and range) of enemy fire precisely: for example, to know from 'which window' the fire is coming. Funded by the UK MOD (via Roke Manor Research), QinetiQ is developing an imaging solution built around an InGaAs camera. This paper presents work that QinetiQ has undertaken on the Muzzle Flash Locator system. Key technical challenges that have been overcome are explained and discussed, including: the design of the optical sensor and processing hardware to meet low size, weight, and power requirements; the algorithmic approach required to maintain sensitivity whilst rejecting false alarms from sources such as close-passing insects and sun glint from scene objects; and operation on the move. This work shows that such a sensor can provide sufficient sensitivity to detect muzzle-flash events at militarily significant ranges, and that such a system can be combined with an acoustic gunshot detection system to minimize the false-alarm rate. The muzzle-flash sensor developed in this work operates in real time and has a field of view of approximately 29° (horizontal) by 12° (vertical) with a pixel resolution of 0.13°. The work has demonstrated that extension to a sensor with a realistic angular rotation rate is feasible.
Obtaining location information can be of paramount importance in the context of pervasive and context-aware computing applications. Many systems have been proposed to date, e.g., GPS, which has been proven to offer satisfactory results in outdoor areas. The increased effect of large- and small-scale fading in indoor environments, however, makes localization a challenge. This is particularly reflected in the multitude of different systems that have been proposed in the context of indoor localization (e.g., RADAR, Cricket). The performance of such systems is often validated on vastly different test beds and conditions, making performance comparisons difficult and often irrelevant. The Locus analytical framework incorporates algorithms from multiple disciplines, such as channel modeling, non-uniform random number generation, computational geometry, localization, tracking, and probabilistic modeling, in order to provide: (a) fast and accurate signal propagation simulation, (b) fast experimentation with localization and tracking algorithms, and (c) an in-depth analysis methodology for estimating the performance limits of any Received Signal Strength localization system. Simulation results for the well-known Fingerprinting and Trilateration algorithms are herein presented and validated with experimental data collected in real conditions using IEEE 802.15.4 ZigBee modules. The analysis shows that the Locus framework accurately predicts the underlying distribution of the localization error and produces further estimates of the system's performance limitations (on a best-case/worst-case basis).
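The trilateration step evaluated above can be sketched as a linearized least-squares fix. The anchors and ranges below are our own toy values; in an RSS system the distances would come from a path-loss model rather than being known exactly.

```python
# Subtracting the first range equation from the others removes the
# quadratic terms, leaving a small linear system in the position (x, y).
import numpy as np

def trilaterate(anchors, dists):
    """Least-squares position fix from >= 3 range estimates."""
    x0, y0 = anchors[0]
    d0 = dists[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true = np.array([3.0, 4.0])
dists = [np.hypot(*(true - a)) for a in anchors]   # noiseless ranges
est = trilaterate(anchors, dists)
assert np.allclose(est, true, atol=1e-6)
```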
Signal and Image Processing, and Information Fusion Applications III
Vision-based fall detection systems must often contend with more issues than simply identifying true fall cases. All vision systems have areas of the frame they cannot see (occlusion), and this is of critical importance for systems monitoring for falls. Even with full scene visibility, human falls show incredible variety, requiring special detectors for edge cases like partial falls. Each detection algorithm is only as good as the parameters it is given, so optimum values for each detector are found using Particle Swarm Optimization. We then discuss the use of email and the short message service (SMS) to alert caregivers that a fall has occurred.
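A minimal Particle Swarm Optimization sketch is shown below on a toy objective (the sphere function); the paper tunes detector parameters, which we do not have, so this only illustrates the optimizer itself.

```python
# Each particle keeps a velocity, its personal best, and is pulled
# toward both its personal best and the swarm-wide best position.
import random

def pso(f, dim, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    random.seed(0)
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [list(x) for x in xs]            # per-particle best position
    gbest = min(pbest, key=f)                # swarm-wide best position
    for _ in range(iters):
        for i, x in enumerate(xs):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * random.random() * (pbest[i][d] - x[d])
                            + c2 * random.random() * (gbest[d] - x[d]))
                x[d] += vs[i][d]
            if f(x) < f(pbest[i]):
                pbest[i] = list(x)
                if f(x) < f(gbest):
                    gbest = list(x)
    return gbest

# Toy objective: sphere function, minimum at the origin.
best = pso(lambda x: sum(v * v for v in x), dim=2)
assert sum(v * v for v in best) < 1e-2
```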
The design is intended for aircraft, although any vehicle, or even a man-mobile system, could use the concept. An automatically reconfigurable antenna using MEMS RF switches is driven to seek signals consistent with the current location of the system. The antenna feeds a Software Defined Radio (SDR) that scans for signals; when a signal is found, it is identified, and the azimuth to the signal is used, along with a signal-strength parameter, to confirm the location of the system. This is an extension of the now-obsolete Automatic Direction Finder (ADF) aircraft navigation tool, which used AM-broadcast non-directional beacons (NDBs), many of which are still in service. The current system can access any radio signal within the limits of the reconfigurable antenna and the SDR.
In our previous studies, vibrations of vehicle surfaces caused by operating engines, measured by a Laser Doppler Vibrometer (LDV), have been effectively exploited to classify vehicles of different types (e.g., vans, 2-door sedans, 4-door sedans, trucks, and buses), as well as different types of engines, such as inline-four engines, V-6 engines, 1-axle diesel engines, and 2-axle diesel engines. These results were achieved by employing an array of machine-learning classifiers such as AdaBoost, random forests, neural networks, and support vector machines. To achieve effective classification performance, we seek a more reliable approach to picking up authentic vibrations of vehicle engines from a trustworthy surface. Compared with vibrations taken directly from uncooperative vehicle surfaces that are rigidly connected to the engines, these vibrations are much weaker in magnitude. In this work we conducted a systematic study on different types of objects. We tested different motor-driven sources, ranging from electric shavers and electric fans to coffee machines, on different surfaces, such as a whiteboard, a cement wall, and a steel case, to investigate the characteristics of the LDV signals of these surfaces in both the time and spectral domains. Preliminary results in engine classification using several machine-learning algorithms point in the right direction on the choice of object surface types to be used for LDV measurements.
Vibrometry offers the potential to classify a target based on its vibration spectrum. Signal processing is necessary to extract features from the sensed signal for classification. This paper investigates the effects of fundamental-frequency normalization on the end-to-end classification process [1]. The fundamental frequency, assumed to be the engine's firing frequency, has previously been used successfully to classify vehicles [2, 3]. Fundamental-frequency normalization attempts to remove the vibration variations due to changes in the engine's revolutions per minute (RPM). Vibration signatures with and without fundamental-frequency normalization are converted to ten features that are classified and compared. To evaluate the classification performance, confusion matrices are constructed and analyzed. A statistical analysis of the features is also performed to determine how the fundamental-frequency normalization affects the features. These methods were studied on three datasets covering three military vehicles and six civilian vehicles. Accelerometer data from each of these data collections is tested with and without normalization.
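One plausible form of fundamental-frequency normalization (our interpretation, not necessarily the authors') is to sample the magnitude spectrum at harmonics of the estimated firing frequency, so that features line up across different engine RPMs. The signal and firing frequency below are synthetic.

```python
# Sample |FFT| at k*f0 for k = 1..n_harmonics, then scale by the peak so
# the feature vector is invariant to overall vibration amplitude.
import numpy as np

def harmonic_features(signal, fs, f0, n_harmonics=10):
    """Magnitude spectrum sampled at harmonics of the fundamental f0."""
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    feats = []
    for k in range(1, n_harmonics + 1):
        idx = int(np.argmin(np.abs(freqs - k * f0)))  # nearest FFT bin
        feats.append(spec[idx])
    return np.array(feats) / (np.max(feats) + 1e-12)

fs = 2000.0
t = np.arange(0, 1.0, 1.0 / fs)
f0 = 30.0                                   # assumed firing frequency
sig = np.sin(2 * np.pi * f0 * t) + 0.5 * np.sin(2 * np.pi * 2 * f0 * t)
feats = harmonic_features(sig, fs, f0)
assert np.argmax(feats) == 0                # strongest at the fundamental
```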
In this paper, we present two signal processing algorithms implemented on an FPGA. The first algorithm involves explicit time gating of received signals corresponding to a desired spatial resolution, performing a Fast Fourier Transform (FFT) on each individual time gate, taking the square modulus of the FFT to form a power spectrum, and then accumulating these power spectra over 10k return signals. The second algorithm involves calculating the autocorrelation of the backscattered signals and then accumulating the autocorrelations over 10k pulses. Efficient implementation of each of these algorithms on an FPGA is challenging because it requires trade-offs among retaining the full data word width, managing the amount of on-chip memory used, and respecting the constraints imposed by the data width of the FPGA. The approach used to manage these trade-offs for each of the two algorithms is described and explained in this article. Results of atmospheric measurements obtained with these two embedded implementations are also presented.
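A floating-point reference for the first algorithm's data path (gate, FFT, square modulus, accumulate) is sketched below; an FPGA implementation would use fixed-point words, and the gate sizes and pulse counts here are toy values of our own.

```python
# Gate each pulse return, FFT each gate, and accumulate the per-gate
# power spectra |FFT|^2 across pulses.
import numpy as np

def accumulate_power_spectra(returns, gate_len):
    """Accumulated power spectrum per time gate over all returns."""
    n_gates = returns.shape[1] // gate_len
    acc = np.zeros((n_gates, gate_len))
    for ret in returns:                       # one pulse return at a time
        gates = ret[: n_gates * gate_len].reshape(n_gates, gate_len)
        spectra = np.abs(np.fft.fft(gates, axis=1)) ** 2  # square modulus
        acc += spectra                        # running accumulation
    return acc

rng = np.random.default_rng(1)
returns = rng.normal(size=(100, 256))         # 100 pulses, 256 samples
acc = accumulate_power_spectra(returns, gate_len=64)
assert acc.shape == (4, 64) and np.all(acc >= 0)
```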
An obstacle detection method for an unmanned ground vehicle in outdoor environments is proposed. The method uses range data acquired by laser range finders (LRFs) and FMCW radars, which together distinguish ground from obstacles on uneven terrain and detect obstacles in dusty environments. The proposed obstacle detection algorithm consists of three steps: 1) generation of 1D virtual range data in which ground information is removed from the LRF range data, 2) generation of 1D virtual range data by fusing multiple FMCW radars, and 3) generation of 1D virtual range data in which dust information is removed by fusing the results of steps 1) and 2). The proposed method is verified by real-world experiments.
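The dust-removal fusion in step 3) can be illustrated with a simple per-bearing rule: an LRF return much closer than the radar return in the same direction is likely a dust hit (the laser scatters off the cloud while the FMCW radar penetrates it), so the radar range is kept instead. This rule and its margin are an assumption for illustration, not necessarily the paper's exact fusion logic.

```python
import numpy as np

def fuse_lrf_radar(lrf_range, radar_range, dust_margin=2.0):
    """Fuse two 1D virtual range arrays indexed by bearing.

    Where the LRF stops much shorter than the radar (beyond
    dust_margin meters), the LRF return is treated as dust and
    the radar range is used; otherwise the finer LRF range wins.
    """
    lrf = np.asarray(lrf_range, float)
    radar = np.asarray(radar_range, float)
    dust = lrf < radar - dust_margin      # LRF stopped early -> likely dust
    return np.where(dust, radar, lrf)

lrf   = np.array([10.0, 3.0, 12.0])      # middle beam hits a dust cloud
radar = np.array([10.2, 11.8, 12.1])     # FMCW radar penetrates the dust
fused = fuse_lrf_radar(lrf, radar)
```

On the middle beam the 3 m LRF return is rejected as dust and replaced by the 11.8 m radar range, while the agreeing beams keep the higher-resolution LRF measurement.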
This paper presents a framework for a denormalized data ingestion and egress method that can be used among several types of in-space devices. Open Space Box is a novel communication model that supports the processing required to transform such data into products for the requesting stakeholders. One example is the data that could be generated by a space-based 3D scanner. We provide an overview of 3D scanning technologies and discuss the storage/transmission needs and the types of data generated by an optical 3D scanner developed and deployed at the University of North Dakota. Prospective usage patterns are discussed, such as regularly assessing astronauts' health and performing scientific experiments. Communication of this kind of data may be critical to astronaut health, scientific mission goals, and ongoing operations, and some of it may be processed even before it is transmitted to the attending healthcare provider or requesting scientist. Given that communications to and from a spacecraft or space station occur at a limited rate, and that 3D scanning can generate massive amounts of data, the ingress and egress applications need to be designed to transmit and receive data over a flexible protocol base.
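The bandwidth constraint described above is commonly handled with a prioritized, chunked egress buffer: large scan payloads are split into chunks and drained highest-priority first within each downlink window. The sketch below is a generic illustration of that pattern; the class name, chunk size, and priority scheme are hypothetical and not taken from the Open Space Box design.

```python
import heapq
import json

class EgressQueue:
    """Toy bandwidth-limited egress buffer for in-space data.

    Large payloads (e.g. 3D-scan meshes) are split into fixed-size
    chunks; urgent data (e.g. health vitals) drains first.
    """
    def __init__(self, chunk_size=1024):
        self.chunk_size = chunk_size
        self._heap, self._seq = [], 0

    def enqueue(self, payload: bytes, priority: int, meta: dict):
        header = json.dumps(meta).encode()
        for i in range(0, len(payload), self.chunk_size):
            # negative priority: heapq pops smallest first
            heapq.heappush(self._heap,
                           (-priority, self._seq, header,
                            payload[i:i + self.chunk_size]))
            self._seq += 1

    def drain(self, budget_bytes: int):
        """Return the chunks that fit the current downlink budget."""
        out = []
        while self._heap and budget_bytes >= self.chunk_size:
            _, _, hdr, chunk = heapq.heappop(self._heap)
            out.append((hdr, chunk))
            budget_bytes -= len(chunk)
        return out

q = EgressQueue(chunk_size=4)
q.enqueue(b"scan-mesh-data", priority=1, meta={"type": "3d-scan"})
q.enqueue(b"vitals", priority=9, meta={"type": "health"})
chunks = q.drain(budget_bytes=8)   # health chunks leave before scan data
```

The sequence counter keeps the heap ordering stable within a priority level, so chunks of one payload arrive in order and can be reassembled on the ground.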
Driver distraction from in-vehicle equipment usage can compromise safety [1]. The effective design of driver-vehicle interfaces (DVIs) and other human-machine interfaces (HMIs), together with their usability and accessibility while driving, therefore becomes important [2]. Driving distractions can be classified as visual distractions (any activity that takes the driver's eyes off the road), cognitive distractions (any activity that takes the driver's mind off the task of driving), and manual distractions (any activity that takes the driver's hands off the steering wheel) [2]. In addition, multitasking while driving is a distracting activity that can increase the risk of vehicular accidents. To study the effect of driver behavior on transportation safety, we used an in-vehicle driver notification application to examine the effects of increasing driver distraction levels on traffic efficiency and safety metrics, using two driver models: young drivers (ages 16-25) and middle-age drivers (ages 30-45). Our evaluation data demonstrate that as a driver's distraction level increases, less heed is given to route-change directives from the in-vehicle on-board unit (OBU) delivered through textual, visual, audio, and haptic notifications. Interestingly, middle-age drivers proved more effective and resilient than young drivers in mitigating the negative effects of distraction [2].
Unmanned aerial systems (UAS) are becoming increasingly popular in industrial, military, and civil applications. A UAS that provides good operating performance and robustness to disturbances is often quite expensive and prohibitive to the general public. To improve UAS performance without increasing overall cost, an estimation strategy can be implemented in the internal controller. The use of an estimation strategy, or filter, reduces the number of required sensors and the power requirement, and improves controller performance. UAS dynamics are highly nonlinear, so implementing such filters can be quite challenging. This paper presents the implementation of the relatively new cubature smooth variable structure filter (CSVSF) in a quadrotor controller. The results are compared with other state and parameter estimation strategies.
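The CSVSF shares its prediction step with the cubature Kalman filter: the state distribution is represented by 2n equally weighted spherical-radial cubature points, which are pushed through the nonlinear model. The sketch below shows only that shared cubature step with a toy stand-in for the quadrotor dynamics; the SVSF corrective gain that distinguishes the CSVSF is omitted.

```python
import numpy as np

def cubature_points(x, P):
    """Spherical-radial cubature points: 2n equally weighted points
    at x +/- sqrt(n) * L e_i, where L L^T = P (Cholesky factor)."""
    n = x.size
    L = np.linalg.cholesky(P)
    offsets = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])
    return x[:, None] + L @ offsets            # shape (n, 2n)

def propagate(f, x, P):
    """Propagate mean and covariance through a nonlinear model f."""
    pts = cubature_points(x, P)
    fx = np.apply_along_axis(f, 0, pts)        # f applied to each column
    mean = fx.mean(axis=1)
    diff = fx - mean[:, None]
    return mean, diff @ diff.T / pts.shape[1]

# toy nonlinear model (illustrative stand-in for quadrotor dynamics)
f = lambda x: np.array([x[0] + 0.1 * x[1],
                        0.99 * x[1] + 0.05 * np.sin(x[0])])
m, C = propagate(f, np.zeros(2), np.eye(2))
```

Because the cubature points carry equal positive weights, the propagated covariance stays positive semidefinite without the weight-tuning parameters of the unscented transform, which is part of the filter's appeal on embedded flight controllers.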