This PDF file contains the front matter associated with SPIE Proceedings Volume 9079, including the Title Page, Copyright Information, Table of Contents, and the Conference Committee listing.
A unique and promising intelligent agent plug-in technology for Mission Command Systems, the Warfighter Associate (WA), is described that enables individuals and teams to respond more effectively to the cognitive challenges of Mission Command, such as managing limited intelligence, surveillance, and reconnaissance (ISR) assets and sharing information in a networked environment. The WA uses a doctrinally based knowledge representation to model role-specific workflows and continuously monitors the state of the operational environment to enable decision support, delivering the right information to the right person at the right time. Capabilities include: (1) analyzing combat events reported in chat rooms and other sources for relevance based on role, order of battle, time, and geographic location; (2) combining seemingly disparate pieces of data into meaningful information; (3) driving displays that provide users with map-based and textual descriptions of the current tactical situation; and (4) recommending courses of action with respect to necessary staff collaborations, execution of battle drills, re-tasking of ISR assets, and required reporting. The results of a scenario-based human-in-the-loop experiment are reported. The underlying WA knowledge-graph representation serves as a state trace, measuring aspects of Soldier decision-making performance (e.g., improved efficiency in allocating limited ISR assets) over the course of a run as dynamic events unfold on a simulated battlefield.
Over the last decade, there has been interest in presenting information fusion solutions to the user and in incorporating visualization, interaction, and command and control. In this paper, we explore Decisions-to-Data (D2D) in information fusion design: (1) sensing: from data to information (D2I) processing; (2) reporting: from human-computer interaction (HCI) visualizations to user refinement (H2U); and (3) disseminating: from collected to resourced (C2R) information management. D2I supports net-centric intelligent situation awareness that includes processing of information from non-sensor resources for mission effectiveness. H2U reflects that completely automated systems are not realizable, requiring Level 5 user refinement for efficient decision making. Finally, C2R moves from immediate data collection to fusion of information over an enterprise (e.g., data mining, database queries and storage, and source analysis for pedigree). The D2I, H2U, and C2R concepts serve as informative themes for future complex information fusion interoperability standards, integration of man and machines, and efficient networking for distributed user situation understanding.
The U.S. Army Research Laboratory (ARL) has built a “Network Science Research Lab” to support research that aims to improve the ability to analyze, predict, design, and govern complex systems that interweave the social/cognitive, information, and communication network genres. Researchers at ARL and the Network Science Collaborative Technology Alliance (NS-CTA), a collaborative research alliance funded by ARL, conducted experimentation to determine whether automated network monitoring tools and task-aware agents deployed within an emulated tactical wireless network could increase the retrieval of relevant data from heterogeneous distributed information nodes. ARL and the NS-CTA required the capability to perform this experimentation over clusters of heterogeneous nodes with emulated wireless tactical networks, where each node could have a different operating system, application set, and physical hardware attributes. Researchers utilized the Dynamically Allocated Virtual Clustering Management System (DAVC) to address the infrastructure support requirements necessary for conducting their experimentation. The DAVC is an experimentation infrastructure that provides the means to dynamically create, deploy, and manage virtual clusters of heterogeneous nodes within a cloud computing environment based upon resource utilization such as CPU load, available RAM, and hard disk space. The DAVC uses 802.1Q Virtual LANs (VLANs) to prevent experimentation crosstalk and to allow for complex private networks. Clusters created by the DAVC can be utilized for software development, experimentation, and integration with existing hardware and software. The goal of this paper is to explore how ARL and the NS-CTA leveraged the DAVC to create, deploy, and manage multiple experimentation clusters to support their experimentation goals.
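The abstract does not detail the DAVC's placement logic; as a purely illustrative sketch of resource-utilization-based placement of the kind described above, the hypothetical Python snippet below scores candidate hosts by CPU load, free RAM, and free disk, and picks the least-loaded eligible one for a new virtual node.

```python
# Hypothetical sketch of resource-based placement, loosely inspired by the
# DAVC description above; field names, thresholds, and weights are illustrative only.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    cpu_load: float      # fraction of CPU in use, 0.0-1.0
    free_ram_gb: float   # available RAM in GB
    free_disk_gb: float  # available disk in GB

def placement_score(host: Host) -> float:
    """Higher score = better candidate for a new virtual node."""
    return ((1.0 - host.cpu_load) * 0.5
            + min(host.free_ram_gb / 64.0, 1.0) * 0.3
            + min(host.free_disk_gb / 500.0, 1.0) * 0.2)

def choose_host(hosts, min_ram_gb=4.0, min_disk_gb=20.0):
    """Filter out hosts that cannot fit the node, then take the best score."""
    eligible = [h for h in hosts
                if h.free_ram_gb >= min_ram_gb and h.free_disk_gb >= min_disk_gb]
    return max(eligible, key=placement_score) if eligible else None

if __name__ == "__main__":
    hosts = [Host("node-a", 0.85, 12, 300), Host("node-b", 0.30, 48, 800)]
    print(choose_host(hosts).name)   # node-b: lower CPU load, more headroom
```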
We describe a system architecture aimed at supporting Intelligence, Surveillance, and Reconnaissance (ISR) activities in a Company Intelligence Support Team (CoIST) using natural-language-based knowledge representation and reasoning, and semantic matching of mission tasks to ISR assets. We illustrate an application of the architecture using a High Value Target (HVT) surveillance scenario, which demonstrates semi-automated matching and assignment of appropriate ISR assets based on information coming in from existing sensors and from human patrols operating in an area of interest and encountering a potential HVT vehicle. We highlight a number of key components of the system but focus mainly on the human/machine conversational interaction, in which soldiers in the field provide input in natural language via spoken voice to a mobile device; this input is converted to machine-processable Controlled Natural Language (CNL) and confirmed with the soldier. The system also supports CoIST analysts in obtaining real-time situation awareness of unfolding events through fused CNL information via tools available at Command and Control (C2). The system demonstrates various modes of operation, including automatic task assignment following inference of new high-importance information, as well as semi-automatic processing that provides the CoIST analyst with situation awareness information relevant to the area of operation.
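The CNL reasoning engine itself is not reproduced here; the toy Python sketch below only illustrates the general idea of semantically matching mission tasks to ISR assets by comparing required and advertised capability tags (all names and tags are hypothetical).

```python
# Toy capability-matching sketch (not the paper's CNL/reasoning engine):
# rank ISR assets by overlap between required and offered capability tags.
def match_assets(task_needs, assets):
    """task_needs: set of capability tags; assets: dict of name -> set of tags."""
    scored = []
    for name, caps in assets.items():
        overlap = task_needs & caps
        if overlap:
            scored.append((len(overlap) / len(task_needs), name, sorted(overlap)))
    return sorted(scored, reverse=True)          # best coverage first

if __name__ == "__main__":
    task = {"EO_imagery", "night_capable", "vehicle_tracking"}
    assets = {
        "uav_1":     {"EO_imagery", "vehicle_tracking"},
        "tower_cam": {"EO_imagery", "night_capable", "vehicle_tracking"},
        "patrol_3":  {"human_report"},
    }
    for score, name, caps in match_assets(task, assets):
        print(f"{name}: {score:.2f} via {caps}")
```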
Onboard analysis of data to determine whether it should be transmitted back to Earth and, if so, at what level of priority, is highly desirable. This paper presents an algorithm for analyzing image data with regard to several hypotheses about the presence of various objects. This example demonstrates how data can be prioritized based on their relevance in supporting or refuting a scientific hypothesis.
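The abstract does not specify the prioritization algorithm; one plausible reading is a Bayesian one, sketched below, in which an image earns higher downlink priority the more it shifts belief in a hypothesis away from the prior (all numbers are illustrative).

```python
# Minimal sketch of hypothesis-driven prioritization (illustrative only, not
# the paper's algorithm): an image that shifts belief in a hypothesis far from
# the prior, in either direction, earns a higher downlink priority.
import math

def posterior(prior, p_data_given_h, p_data_given_not_h):
    """Single-step Bayesian update for a binary hypothesis."""
    num = p_data_given_h * prior
    return num / (num + p_data_given_not_h * (1.0 - prior))

def downlink_priority(prior, p_data_given_h, p_data_given_not_h):
    """Priority = magnitude of belief change, in bits of log-odds."""
    post = posterior(prior, p_data_given_h, p_data_given_not_h)
    logit = lambda p: math.log(p / (1.0 - p), 2)
    return abs(logit(post) - logit(prior))

if __name__ == "__main__":
    # An image that strongly supports the hypothesis vs. one that is nearly ambiguous.
    print(downlink_priority(0.1, 0.9, 0.05))   # large belief shift -> high priority
    print(downlink_priority(0.1, 0.5, 0.45))   # small belief shift -> low priority
```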
Battlefield monitoring involves collecting streaming data from different sources, transmitting the data over a heterogeneous network, and processing queries in real time in order to respond to events in a timely manner. Nodes in these networks differ with respect to their trustworthiness, processing, storage, and communication capabilities. Links in the network differ with respect to their communication bandwidth. The topology of the network itself is subject to change, as nodes and links may become unavailable. Continuous queries executed in such environments must also meet quality of service (QoS) requirements, such as response time and throughput. Data streams generated from the various nodes in the network belong to different security levels; consequently, they must be processed in a secure manner without causing unauthorized leakage or modification. Towards this end, we demonstrate how an existing complex event processing system can be extended to execute queries and events in a secure manner in such a dynamic and heterogeneous environment.
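As a simplified illustration of the kind of extension described, and not the paper's actual mechanism, the sketch below enforces a "no read up" rule so that a continuous query never sees events labeled above its clearance before the query predicate runs.

```python
# Simplified illustration (not the paper's actual mechanism): enforce a
# "no read up" rule so a continuous query never sees events above its
# clearance level before pattern matching runs.
LEVELS = {"UNCLASS": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP_SECRET": 3}

def authorized(event_level: str, query_clearance: str) -> bool:
    return LEVELS[event_level] <= LEVELS[query_clearance]

def run_query(events, query_clearance, predicate):
    """Filter by security label first, then apply the query predicate."""
    for e in events:
        if authorized(e["level"], query_clearance) and predicate(e):
            yield e

if __name__ == "__main__":
    stream = [
        {"type": "vehicle_detect", "level": "UNCLASS", "speed": 80},
        {"type": "vehicle_detect", "level": "SECRET", "speed": 120},
    ]
    fast = lambda e: e["speed"] > 100
    # Empty result: the SECRET event is withheld and the UNCLASS one fails the predicate.
    print(list(run_query(stream, "CONFIDENTIAL", fast)))
```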
In 2006, the US Army Research Laboratory (ARL) and the UK Ministry of Defence (MoD) established a collaborative research alliance with academia and industry, called the International Technology Alliance (ITA) [1] in Network and Information Sciences, to address fundamental issues in network and information sciences that will enhance decision making for coalition operations, enable rapid, secure formation of ad hoc teams in coalition environments, and enhance US and UK capabilities to conduct coalition warfare. Research conducted under the ITA was extended through collaboration between ARL and IBM UK to characterize and define a software stack and tooling that has become the reference framework for network science experimentation in support of the validation of theoretical research. This paper discusses the composition of the reference framework for experimentation resulting from the ARL/IBM UK collaboration and its use, by the Network Science Collaborative Technology Alliance (NS-CTA) [2], in a recent network science experiment conducted at ARL. It also discusses how the experiment was modeled using the reference framework, the integration of two new components, the Apollo Fact-Finder [3] tool and the Medusa Crowd Sensing [4] application, the limitations identified, and how they will be addressed in future work.
A sophisticated real-time architecture is presented for capturing relevant battlefield information on personnel and terrestrial events from a network of mast-based imaging and unmanned aerial systems (UAS), with target detection, tracking, classification, and visualization. Persistent surveillance of personnel and vehicles is achieved using a unique spatially and temporally invariant motion detection and tracking algorithm for mast-based cameras, in combination with aerial remote sensing to autonomously monitor unattended ground-based sensor networks. UAS autonomous routing is achieved using bio-inspired algorithms that mimic how bacteria locate nutrients in their environment. Results include field test data, performance, and lessons learned. The technology also has application to detecting and tracking low observables (manned and UAS), counter-MANPADS, airport bird detection, and search and rescue operations.
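The bio-inspired routing algorithm is not specified in detail; the minimal run-and-tumble (chemotaxis-style) sketch below, over a toy scalar "interest" field, is purely illustrative of the bacteria-inspired idea rather than the fielded UAS router.

```python
# Minimal run-and-tumble (chemotaxis-style) sketch on a scalar "interest"
# field; purely illustrative, not the fielded UAS routing algorithm.
import math, random

def interest(x, y):
    """Toy interest field peaking at (50, 50), e.g. sensor activity density."""
    return -((x - 50.0) ** 2 + (y - 50.0) ** 2)

def run_and_tumble(steps=200, step_len=1.0):
    x, y, heading = 0.0, 0.0, random.uniform(0, 2 * math.pi)
    last = interest(x, y)
    for _ in range(steps):
        x += step_len * math.cos(heading)
        y += step_len * math.sin(heading)
        now = interest(x, y)
        if now < last:                       # getting worse: tumble to a new heading
            heading = random.uniform(0, 2 * math.pi)
        last = now                           # getting better: keep running
    return x, y

if __name__ == "__main__":
    random.seed(0)
    print(run_and_tumble())   # the walker drifts toward the (50, 50) hotspot
```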
Pedestrian movement along critical infrastructure such as pipelines, railways, or highways, as well as pedestrian behavior in urban environments, is of major interest in surveillance applications. The goal is to anticipate illicit or dangerous human activities. For this purpose, we propose an all-in-one small autonomous system which delivers high-level statistics and reports alerts in specific cases. This situational awareness project leads us to manage the scene efficiently by performing movement analysis. A dynamic background extraction algorithm is developed to achieve robustness against natural and urban environment perturbations and to meet the embedded implementation constraints. When changes are detected in the scene, specific patterns are applied to detect and highlight relevant movements. Depending on the application, specific descriptors can be extracted and fused in order to reach a high level of interpretation. In this paper, our approach is applied to two operational use cases: pedestrian urban statistics and railway surveillance. In the first case, a grid of prototypes is deployed over a city centre to collect pedestrian movement statistics up to a macroscopic level of analysis. The results demonstrate the relevance of the delivered information; in particular, the flow density map highlights pedestrian preferential paths along the streets. In the second case, one prototype is set next to high-speed train tracks to secure the area. The results exhibit a low false alarm rate and support our approach of a large sensor network delivering a precise operational picture without overwhelming a supervisor.
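The embedded background extraction algorithm is not given in the abstract; the sketch below shows a common exponential running-average background model as one plausible baseline, assuming grayscale frames as NumPy arrays.

```python
# A common running-average background model (a plausible baseline, not the
# paper's embedded algorithm), assuming grayscale frames as NumPy arrays.
import numpy as np

class RunningBackground:
    def __init__(self, alpha=0.02, thresh=25.0):
        self.alpha = alpha      # adaptation rate: higher = faster background update
        self.thresh = thresh    # foreground threshold in gray levels
        self.bg = None

    def apply(self, frame):
        frame = frame.astype(np.float32)
        if self.bg is None:
            self.bg = frame.copy()
        mask = np.abs(frame - self.bg) > self.thresh        # moving pixels
        # Update the background only where the scene looks static, so slow
        # illumination changes are absorbed without "eating" real movers.
        self.bg[~mask] = (1 - self.alpha) * self.bg[~mask] + self.alpha * frame[~mask]
        return mask

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame0 = rng.integers(0, 50, (120, 160)).astype(np.float32)
    frame1 = frame0.copy()
    frame1[40:60, 70:90] += 100             # a "pedestrian" appears in frame 2
    bg = RunningBackground()
    bg.apply(frame0)
    print(bg.apply(frame1).sum())            # 400 foreground pixels in the 20x20 patch
```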
In this paper, we present algorithms we recently developed to support an automated security surveillance system for very crowded urban areas. In our approach to human detection, color features are obtained by taking differences of the R, G, and B channels and by converting R, G, B to HSV (Hue, Saturation, Value) space. Morphological patch filtering and regional minimum and maximum segmentation on the extracted features are applied for target detection. The human tracking process includes: 1) track candidate selection by color and intensity feature matching; 2) three separate parallel trackers for color, bright (above mean intensity), and dim (below mean intensity) detections, respectively; 3) adaptive track gate size selection for reducing false tracking probability; and 4) forward position prediction, based on previous moving speed and direction, to continue tracking even when detections are missed from frame to frame. Human target recognition is improved with a Super-Resolution Image Enhancement (SRIE) process. This process can improve target resolution by 3-5 times and can simultaneously process many tracked targets. Our approach can project tracks from one camera to another camera with a different perspective viewing angle to obtain additional biometric features from different perspective angles and, via ‘Tracking Relay’, continue tracking the same person from the second camera even after the person has moved out of the field of view (FOV) of the first camera. Finally, the multiple cameras at different view poses have been geo-rectified to a nadir view plane and geo-registered with Google Earth (or another GIS) to obtain accurate positions (latitude, longitude, and altitude) of the tracked humans for pin-point targeting and for a top view of total human motion activity over a large area. Preliminary tests of our algorithms indicate that a high probability of detection can be achieved for both moving and stationary humans. Our algorithms can simultaneously track more than 100 human targets with an average tracking period (time length) longer than that of the current state of the art.
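Of the tracking steps listed, the forward position prediction is simple enough to sketch; the illustrative snippet below coasts a track through missed detections under a constant-velocity assumption (not the authors' exact implementation).

```python
# Sketch of the "forward position prediction" idea for coasting a track
# through missed detections (constant-velocity assumption; illustrative only).
def predict(track, frames_ahead=1):
    """track holds the last position (x, y) and per-frame velocity (vx, vy)."""
    x, y = track["pos"]
    vx, vy = track["vel"]
    return (x + vx * frames_ahead, y + vy * frames_ahead)

def update(track, detection=None):
    """Associate a detection if present; otherwise coast on the prediction."""
    if detection is None:
        track["pos"] = predict(track)
        track["coasted"] += 1
    else:
        px, py = track["pos"]
        track["vel"] = (detection[0] - px, detection[1] - py)
        track["pos"] = detection
        track["coasted"] = 0
    return track

if __name__ == "__main__":
    t = {"pos": (100.0, 50.0), "vel": (2.0, 0.5), "coasted": 0}
    update(t)                    # detection missed this frame: coast forward
    update(t, (104.2, 51.1))     # re-acquired near the predicted point
    print(t["pos"], t["vel"])
```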
Textron’s Advanced MicroObserver® is a next-generation remote unattended ground sensor (UGS) system for border security, infrastructure protection, and small combat unit security. The original MicroObserver® is a sophisticated seismic sensor system with multi-node fusion that supports target tracking, and it has been deployed in combat theaters. The system’s seismic sensor nodes are unique in that they can be completely buried (including antennas) for optimal covertness. The advanced version adds a wireless day/night Electro-Optic Infrared (EOIR) system, cued by seismic tracking, with sophisticated target discrimination and automatic frame capture features. Also new is a field-deployable Gateway configurable with a variety of radio systems and flexible networking, an important upgrade that enabled the research described herein. BattleHawk™ is a small tube-launched Unmanned Air Vehicle (UAV) with a warhead. Using transmitted video from its EOIR subsystem, an operator can search for and acquire a target day or night, select a target for attack, and execute a terminal dive to destroy the target. It is designed as a lightweight squad-level asset carried by an individual infantryman. Although BattleHawk has the best loiter time in its class, that time is still relatively short compared to large UAVs, and in its munition configuration it is a one-shot asset. Textron Defense Systems therefore conducted internally funded research to determine whether there was military utility in having the highly persistent MicroObserver® system cue BattleHawk’s launch and vector it to beyond-visual-range targets for engagement. This paper describes that research, the system configuration implemented, and the results of field testing performed on a government range early in 2013. In the integrated system, MicroObserver® seismic detections activated that system’s camera, which then automatically captured images of the target. The geo-referenced and time-tagged MicroObserver® target reports and images were then automatically forwarded to the BattleHawk Android-based controller. This allowed the operator to see the intruder (classified and geo-located) on the map-based display, assess the intruder as likely hostile (via the image), and launch BattleHawk with the pre-loaded target coordinates. The operator was thus able to quickly acquire the intended target (without a search) and initiate target engagement immediately. System latencies were a major concern encountered during the research.
We conducted an experiment to correlate the information gathered by a suite of hard sensors with the information on social networks such as Twitter, Facebook, etc. The experiment consisted of monitoring traffic on a well-traveled road and on a road inside a facility. The sensor suite selected consists mainly of sensors that require low power for operation and last a long time. The output of each sensor is analyzed to classify the targets as ground vehicles, humans, or airborne targets. The algorithm is also used to count the number of targets belonging to each type, so the sensor can store the information for anomaly detection. In this paper, we describe the classifier algorithms used for acoustic, seismic, and passive infrared (PIR) sensor data.
We investigate the problem of actively learning to distinguish between two sets of anomalous vehicle tracks, "innocuous" and "suspicious", starting from scratch, without any initial examples of "suspicious" and with no prior knowledge of what an operator would deem suspicious. This two-class problem is challenging because it is a priori unknown which track features may characterize the suspicious class. Furthermore, there is inherent imbalance in the sizes of the labeled "innocuous" and "suspicious" sets, even after some suspicious examples are identified. We present a comprehensive solution wherein a classifier learns to discriminate suspicious from innocuous based on derived p-value track features. Through active learning, our classifier thus learns the types of anomalies on which to base its discrimination. Our solution encompasses: i) judicious choice of kinematic p-value based features conditioned on the road of origin, along with more explicit features that capture unique vehicle behavior (e.g., U-turns); ii) novel semi-supervised learning that exploits information in the unlabeled (test batch) tracks; and iii) evaluation of several classifier models (logistic regression, SVMs). We find that two active labeling streams are necessary in practice in order to have efficient classifier learning while also forwarding (for labeling) the most actionable tracks. Experiments on wide-area motion imagery (WAMI) tracks, extracted via a system developed by Toyon Research Corporation, demonstrate the strong ROC AUC performance of our system, with sparing use of operator-based active labeling.
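The paper's p-value features and semi-supervised model are not reproduced here; the bare-bones sketch below shows only the active-learning loop itself, using uncertainty sampling with logistic regression on synthetic, imbalanced data (scikit-learn assumed).

```python
# Bare-bones active-learning loop with uncertainty sampling and logistic
# regression on synthetic, imbalanced data (scikit-learn assumed). A schematic
# stand-in, not the paper's p-value features or semi-supervised model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (980, 2)), rng.normal(2.5, 1, (20, 2))])
y = np.array([0] * 980 + [1] * 20)            # 1 = "suspicious" (rare class)

# Seed labels: a handful of innocuous tracks plus one known suspicious one.
labeled = list(rng.choice(np.where(y == 0)[0], 20)) + [int(np.where(y == 1)[0][0])]
unlabeled = [i for i in range(len(y)) if i not in labeled]

for _ in range(15):                            # 15 operator-labeling rounds
    clf = LogisticRegression(class_weight="balanced").fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[unlabeled])[:, 1]
    pick = unlabeled[int(np.argmin(np.abs(proba - 0.5)))]   # most uncertain track
    labeled.append(pick)                       # "operator" supplies the true label
    unlabeled.remove(pick)

print("labeled suspicious examples found:", int(y[labeled].sum()))
```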
The Open Standards for Unattended Sensors (OSUS) program, formerly named Terra Harvest, was launched in 2009 to develop an open, integrated battlefield unattended ground sensor (UGS) architecture that ensures interoperability among disparate UGS components and systems. McQ has developed a power-managed controller: a rugged fielded device that runs an embedded Linux operating system with an open Java software architecture, operates for over 30 days on a small battery pack, and provides critical functions including the required management, monitoring, and control functions. The OSUS power-managed controller system overview, design, and compatibility with other systems are discussed.
Modern Intelligence, Surveillance and Reconnaissance (ISR) systems are increasingly being assembled from autonomous systems, so the resulting ISR system is a System of Systems (SoS). In order to take full advantage of the capabilities of the ISR SoS, the architecture and design of these SoS should facilitate the benefits inherent in a SoS approach: high resilience, a higher level of adaptability, and higher diversity, enabling on-demand system composition. The tasks performed by an ISR SoS can go well beyond basic data acquisition, conditioning, and communication, as data processing can easily be integrated in the SoS. Such an ISR SoS can perform data fusion, classification, and tracking (and conditional sensor tasking for additional data acquisition); these are extremely challenging tasks in this context, especially if the fusion is performed in a distributed manner. Our premise for the ISR SoS design and deployment is that the system is not designed as a complete system in which the capabilities of individual data providers are considered and the interaction paths, including communication channel capabilities, are specified at design time. Instead, we assume a loosely coupled SoS, where the data needs for a specific fusion task are described at a high level at design time and the data providers (i.e., sensor systems) required for a specific fusion task are discovered dynamically at run time, the selection criterion being the type and properties of the data that a specific provider can supply. The paper describes some aspects of a distributed ISR SoS design and implementation, with examples of both the architectural design and algorithm implementations.
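As a hypothetical illustration of run-time discovery of data providers by the type and properties of the data they advertise, the registry-lookup sketch below binds a fusion task's abstract data need to concrete providers at run time (all identifiers and fields are invented).

```python
# Hypothetical registry-lookup sketch for run-time discovery of data providers
# by data type and properties; names and fields are illustrative only.
providers = [
    {"id": "radar_07",  "data_type": "track", "region": "sector_B", "rate_hz": 10},
    {"id": "eo_cam_12", "data_type": "image", "region": "sector_B", "rate_hz": 25},
    {"id": "eo_cam_03", "data_type": "image", "region": "sector_A", "rate_hz": 25},
]

def discover(registry, data_type, region, min_rate_hz=0):
    """Return providers whose advertised capabilities satisfy the fusion task."""
    return [p for p in registry
            if p["data_type"] == data_type
            and p["region"] == region
            and p["rate_hz"] >= min_rate_hz]

if __name__ == "__main__":
    # A fusion task declared at design time only as "images from sector_B at >= 15 Hz";
    # the concrete provider is bound at run time.
    print([p["id"] for p in discover(providers, "image", "sector_B", 15)])
```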
Feature extraction algorithms for vehicle classification represent a large branch of Automatic Target Recognition (ATR) efforts. Traditionally, vehicle ATR techniques have assumed that time-series vibration data collected from multiple accelerometers are a function of direct-path, engine-driven signal energy. If, however, the data are highly dependent on measurement location, these pre-established feature extraction algorithms are ineffective. In this paper, we examine the consequences of analyzing vibration data that are potentially contingent upon transfer path effects by exploring sensitivity to sensor location. We summarize our analysis of spectral signatures from each accelerometer and investigate similarities within the data.
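A minimal way to investigate similarity of spectral signatures across accelerometer locations is sketched below: magnitude spectra of two synthetic channels are compared with cosine similarity (one of several plausible measures, not necessarily the one used in the paper).

```python
# Minimal sketch of comparing magnitude spectra from two accelerometer
# channels (synthetic signals; cosine similarity is one plausible measure,
# not necessarily the one used in the paper).
import numpy as np

def magnitude_spectrum(x, fs):
    """Windowed FFT magnitude and the corresponding frequency axis."""
    X = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs, X

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

if __name__ == "__main__":
    fs = 1000.0
    t = np.arange(0, 2.0, 1.0 / fs)
    engine = np.sin(2 * np.pi * 30 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)
    # The same source measured at two mounting points: different transfer paths
    # modeled here crudely as different gains plus independent noise.
    ch1 = 1.0 * engine + 0.05 * np.random.default_rng(0).normal(size=t.size)
    ch2 = 0.6 * engine + 0.05 * np.random.default_rng(1).normal(size=t.size)
    _, s1 = magnitude_spectrum(ch1, fs)
    _, s2 = magnitude_spectrum(ch2, fs)
    print(f"spectral similarity: {cosine_similarity(s1, s2):.3f}")
```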
This paper evaluates and expands upon the existing end-to-end process used for vibrometry target classification and identification. A fundamental challenge in vehicle classification using vibrometry signature data is the determination of robust signal features. The methodology used in this paper involves comparing the performance of features taken from automatic speech recognition, seismology, and structural analysis work. These features provide a means to reduce the dimensionality of the data for the possibility of improved separability. The performances of different groups of features are compared to determine the best feature set for vehicle classification, and standard performance metrics are implemented to provide a method of evaluation. The contributions of this paper are to (1) thoroughly explain the time-domain and frequency-domain features that have recently been applied to vehicle classification using laser vibrometry data, (2) build an end-to-end classification pipeline for Aided Target Recognition (ATR) with common and easily accessible tools, and (3) apply feature selection methods to the end-to-end pipeline. The end-to-end process used here provides a structured path for accomplishing vibrometry-based target identification. This paper compares its results with two studies in the public domain. The techniques described were used to analyze a small in-house database of several different vehicles.
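In the spirit of the "common and easily accessible tools" contribution, the skeleton below shows an end-to-end pipeline (scaling, feature selection, SVM, cross-validation) in scikit-learn; the synthetic features merely stand in for the vibrometry features described above.

```python
# End-to-end pipeline skeleton with common tools (scikit-learn assumed):
# scale -> select k best features -> SVM, evaluated with cross-validation.
# Synthetic features stand in for the vibrometry features described above.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_per_class, n_feats = 60, 40
X = np.vstack([rng.normal(0.0, 1.0, (n_per_class, n_feats)),
               rng.normal(0.8, 1.0, (n_per_class, n_feats))])
y = np.array([0] * n_per_class + [1] * n_per_class)   # two vehicle classes

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=10)),   # keep the 10 most separable features
    ("clf", SVC(kernel="rbf", C=1.0)),
])

scores = cross_val_score(pipe, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```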
Continuous classification of dismount types (including gender, age, ethnicity) and their activities (such as walking, running) evolving over space and time is challenging. Limited sensor resolution (often exacerbated as a function of platform standoff distance), clutter from shadows in dense target environments, unfavorable environmental conditions, and the normal properties of real data all contribute to the challenge. The unique and innovative aspect of our approach is a synthesis of multimodal signal processing with incremental non-parametric, hierarchical Bayesian machine learning methods to create a new kind of target classification architecture. This architecture is designed from the ground up to optimally exploit correlations among the multiple sensing modalities (multimodal data fusion) and rapidly and continuously learns (online self-tuning) patterns of distinct classes of dismounts given little a priori information. This increases classification performance in the presence of challenges posed by anti-access/area denial (A2/AD) sensing. To fuse multimodal features, Long-range Dismount Activity Classification (LODAC) develops a novel statistical information-theoretic approach for multimodal data fusion that jointly models multimodal data (i.e., a probabilistic model for cross-modal signal generation) and discovers the critical cross-modal correlations by identifying components (features) with maximal mutual information (MI), which is efficiently estimated using non-parametric entropy models. LODAC develops a generic probabilistic pattern learning and classification framework based on a new class of hierarchical Bayesian learning algorithms for efficiently discovering recurring patterns (classes of dismounts) in multiple simultaneous time series (sensor modalities) at multiple levels of feature granularity.
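LODAC's non-parametric entropy estimator is not reproduced here; as a rough stand-in for MI-based cross-modal feature ranking, the sketch below scores concatenated (synthetic) multimodal features by their mutual information with the class label using scikit-learn.

```python
# Rough stand-in for MI-based feature ranking (not LODAC's non-parametric
# entropy estimator): rank concatenated multimodal features by their mutual
# information with the dismount class label using scikit-learn.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
n = 300
label = rng.integers(0, 2, n)                        # e.g. walking vs. running
video_feat = label * 1.5 + rng.normal(0, 1, n)       # informative video-derived feature
radar_feat = label * 1.0 + rng.normal(0, 1, n)       # informative radar-derived feature
noise_feat = rng.normal(0, 1, n)                     # uninformative channel

X = np.column_stack([video_feat, radar_feat, noise_feat])
mi = mutual_info_classif(X, label, random_state=0)
for name, score in zip(["video", "radar", "noise"], mi):
    print(f"{name}: {score:.3f}")                    # the noise channel should rank lowest
```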
Vibration signatures sensed from distant vehicles using laser vibrometry systems provide valuable information that may be used to help identify key vehicle features such as engine type, engine speed, and number of cylinders. Using physics models of the vibration phenomenology, features are chosen to support classification algorithms. Various individual exploitation algorithms were developed using these models to classify vibration signatures by engine type (piston vs. turbine), engine configuration (Inline 4 vs. Inline 6 vs. V6 vs. V8 vs. V12), and vehicle type. The results of these algorithms are presented for an 8-class problem. Finally, we present the benefits of using a factor graph representation to link these independent algorithms together, constructing a classification hierarchy for the vibration exploitation problem.
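The classifiers themselves are not reproduced here, but the kind of physics relation the abstract alludes to can be sketched: for a four-stroke engine, each cylinder fires once every two crankshaft revolutions, so the firing frequency ties engine speed to cylinder count (the RPM band below is illustrative).

```python
# The kind of physics relation the abstract alludes to (four-stroke engines):
# each cylinder fires once every two crankshaft revolutions, so
#   firing_freq_hz = (rpm / 60) * (n_cylinders / 2).
# The paper's actual classifiers and factor-graph fusion are not reproduced here.
def firing_frequency_hz(rpm: float, n_cylinders: int) -> float:
    return (rpm / 60.0) * (n_cylinders / 2.0)

def plausible_cylinder_counts(firing_hz, rpm_range=(600, 3000), candidates=(4, 6, 8, 12)):
    """Cylinder counts that could produce the observed firing line within an
    assumed idle-to-cruise RPM band (the band itself is illustrative)."""
    out = []
    for n in candidates:
        rpm = firing_hz * 60.0 / (n / 2.0)
        if rpm_range[0] <= rpm <= rpm_range[1]:
            out.append((n, round(rpm)))
    return out

if __name__ == "__main__":
    print(firing_frequency_hz(1800, 8))       # 120 Hz firing line for a V8 at 1800 RPM
    print(plausible_cylinder_counts(120.0))   # several (cylinders, rpm) pairs remain plausible
```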
Gender classification is a critical component of a robust image security system. Many techniques exist to perform gender classification using facial features. In contrast, this paper explores gender classification using body features extracted from clothed subjects. Several of the most effective types of features for gender classification identified in literature were implemented and applied to the newly developed Seasonal Weather And Gender (SWAG) dataset. SWAG contains video clips of approximately 2000 samples of human subjects captured over a period of several months. The subjects are wearing casual business attire and outer garments appropriate for the specific weather conditions observed in the Midwest. The results from a series of experiments are presented that compare the classification accuracy of systems that incorporate various types and combinations of features applied to multiple looks at subjects at different image resolutions to determine a baseline performance for gender classification.
The literature is abundant with papers on gender classification research. However, the majority of such research is based on the assumption that there is enough resolution that the subject's face can be resolved; hence, the majority of the research is actually in the face recognition and facial feature area. A gap exists for gender classification under challenging operating conditions, such as different seasonal conditions and different clothing, and when the subject's face cannot be resolved due to lack of resolution. The Seasonal Weather and Gender (SWAG) Database is a novel database that contains subjects walking through a scene under operating conditions that span a calendar year. This paper exploits a subset of that database, the SWAG One dataset, using data mining techniques, traditional classifiers (e.g., Naïve Bayes, Support Vector Machine), and traditional (Canny edge detection, etc.) and non-traditional (height/width ratios, etc.) feature extractors to achieve high correct gender classification rates (greater than 85%). Another novelty is the exploitation of frame differentials.
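A toy illustration of the height/width-ratio feature idea with one of the traditional classifiers named above (Gaussian Naïve Bayes) is sketched below; the numbers are synthetic and do not represent the SWAG One features or results.

```python
# Toy illustration of bounding-box height/width-ratio style features with a
# classical classifier (scikit-learn's GaussianNB); synthetic numbers, not the
# SWAG One features or results.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
# Hypothetical per-subject features: bounding-box height/width ratio and a
# simple stride measure derived from frame differencing.
ratio_a, stride_a = rng.normal(3.1, 0.25, n), rng.normal(0.62, 0.08, n)
ratio_b, stride_b = rng.normal(2.9, 0.25, n), rng.normal(0.70, 0.08, n)
X = np.vstack([np.column_stack([ratio_a, stride_a]),
               np.column_stack([ratio_b, stride_b])])
y = np.array([0] * n + [1] * n)                     # two gender classes

print(cross_val_score(GaussianNB(), X, y, cv=5).mean())
```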
This paper provides an overview of deep learning and introduces several of its subfields, including a tutorial on convolutional neural networks. Traditional methods for learning image features are compared to deep learning techniques. In addition, we present our preliminary classification results and our basic implementation of a convolutional restricted Boltzmann machine on the Mixed National Institute of Standards and Technology (MNIST) database, and we explain how to use deep learning networks to assist in our development of a robust gender classification system.
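For orientation only, a minimal plain CNN for MNIST-sized inputs is sketched below (PyTorch assumed); note this is an ordinary supervised CNN, not the paper's convolutional restricted Boltzmann machine, which is a generative, unsupervised model.

```python
# A minimal plain CNN for MNIST-sized inputs (PyTorch assumed). Shown for
# orientation only; the paper's convolutional restricted Boltzmann machine is
# a different (generative, unsupervised) model.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(32 * 7 * 7, n_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

if __name__ == "__main__":
    model = SmallCNN()
    dummy = torch.randn(8, 1, 28, 28)     # a batch of MNIST-sized images
    print(model(dummy).shape)              # torch.Size([8, 10])
```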