This PDF file contains the front matter associated with SPIE Proceedings Volume 8747, including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
Geo-registration and Uncertainty Handling in Geospatial Data
Whether statistically representing the errors in the estimates of sensor metadata associated with a set of images, or statistically representing the errors in the estimates of 3D location associated with a set of ground points, the corresponding “full” multi-state vector error covariance matrix is critical to exploitation of the data. For sensor metadata, the individual state vectors typically correspond to the sensor position and attitude of an image. These state vectors, along with their corresponding full error covariance matrix, are required for optimal downstream exploitation of the image(s), such as for the stereo extraction of a 3D target location and its corresponding predicted accuracy. In this example, the full error covariance matrix statistically represents the sensor errors for each of the two images as well as the correlation (similarity) of errors between the two images. For ground locations, the individual state vectors typically correspond to 3D location. The corresponding full error covariance matrix statistically represents the location errors in each of the ground points as well as the correlation (similarity) of errors between any pair of the ground points. It is required in order to compute reliable estimates of relative accuracy between arbitrary ground point pairs, and for the proper weighting of the ground points when used as control in, for example, a fusion process. This paper details the above, and presents practical methods for the representation of the full error covariance matrix, ranging from direct representation with large bandwidth requirements, to high-fidelity approximation methods with small bandwidth requirements.
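The trade-off described above can be sketched in a few lines: the full multi-state covariance is assembled from per-image blocks plus an inter-image correlation model, and the compact representation stores only the blocks and the correlation parameters. All numbers below (per-image covariance, decorrelation constant, time separation) are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Hypothetical per-image sensor-position error covariance (3x3, metres^2).
C = np.diag([4.0, 4.0, 9.0])

# Inter-image error correlation modeled with an exponential decay in time,
# rho(dt) = exp(-dt / tau); dt and tau are illustrative values.
tau = 60.0   # decorrelation time constant (s)
dt = 15.0    # time between the two image acquisitions (s)
rho = np.exp(-dt / tau)

# Full 6x6 multi-state error covariance: per-image blocks on the diagonal,
# correlated cross-blocks off the diagonal.
full = np.block([[C,       rho * C],
                 [rho * C, C      ]])

# The compact representation stores only C and the parameters of rho(dt),
# instead of every cross-block explicitly.
```

Storing `C` and the parameters of `rho(dt)` scales linearly with the number of images, while the explicit full matrix grows quadratically, which is the bandwidth gap the paper addresses.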
The classic problem of computer-assisted conflation involves the matching of individual features (e.g., point, polyline,
or polygon vectors) as stored in a geographic information system (GIS), between two different sets (layers) of features.
The classical goal of conflation is the transfer of feature metadata (attributes) from one layer to another. The age of free
public and open source geospatial feature data has significantly increased the opportunity to conflate such data to create
enhanced products. There are currently several spatial conflation tools in the marketplace with varying degrees of
automation. An ability to evaluate conflation tool performance quantitatively is of operational value, although manual
truthing of matched features is laborious and costly. In this paper, we present a novel methodology that uses spatial
uncertainty modeling to simulate realistic feature layers to streamline evaluation of feature matching performance for
conflation methods. Performance results are compiled for DCGIS street centerline features.
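The evaluation idea can be sketched with a toy simulation: perturb a truth layer with a spatial-uncertainty model, then score a matcher against the known correspondence. Point features, the noise level, and the nearest-neighbour matcher below are simplified stand-ins for the paper's polyline features and commercial conflation tools:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Truth" layer: feature vertices (simple 2-D points here for brevity).
truth = rng.uniform(0, 100, size=(50, 2))

# Simulated second layer: truth perturbed by zero-mean Gaussian positional
# error; sigma is an assumed spatial-uncertainty parameter.
sigma = 1.0
simulated = truth + rng.normal(0, sigma, size=truth.shape)

def match_rate(a, b, threshold=3.0):
    """Naive matcher: nearest neighbour within a distance threshold.

    Because b was generated from a, row i of b is the known true match of
    row i of a, so matching performance can be scored automatically.
    """
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    idx = np.arange(len(a))
    hits = (d[idx, nearest] <= threshold) & (nearest == idx)
    return hits.mean()

rate = match_rate(truth, simulated)
```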
The Population Density Tables (PDT) project at Oak Ridge National Laboratory (www.ornl.gov) is developing
population density estimates for specific human activities under normal patterns of life based largely on information
available in open source. Currently, activity-based density estimates are based on simple summary data statistics such as
range and mean. Researchers are interested in improving activity estimation and uncertainty quantification by adopting a
Bayesian framework that considers both data and sociocultural knowledge. Under a Bayesian approach, knowledge
about population density may be encoded through the process of expert elicitation. Due to the scale of the PDT effort,
which considers over 250 countries, spans 50 human activity categories, and includes numerous contributors, an
elicitation tool is required that can be operationalized within an enterprise data collection and reporting system. Such a
method would ideally require that the contributor have minimal statistical knowledge, require minimal input by a
statistician or facilitator, consider human difficulties in expressing qualitative knowledge in a quantitative setting, and
provide methods by which the contributor can appraise whether their understanding and associated uncertainty were well
captured. This paper introduces an algorithm that transforms answers to simple, non-statistical questions into a bivariate
Gaussian distribution as the prior for the Beta distribution. Based on geometric properties of the Beta distribution
parameter feasibility space and the bivariate Gaussian distribution, an automated method for encoding is developed that
responds to these challenging enterprise requirements. Though created within the context of population density, this
approach may be applicable to a wide array of problem domains requiring informative priors for the Beta distribution.
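As a much-simplified illustration of elicitation without statistical vocabulary (and emphatically not the paper's bivariate-Gaussian encoding), one can map a contributor's "most likely value" and a rough confidence level to Beta parameters by mode matching:

```python
def beta_from_mode(mode, confidence):
    """Map non-statistical answers to Beta(alpha, beta) parameters.

    mode:       the contributor's "most likely" fraction, in (0, 1).
    confidence: rough self-assessed confidence, in (0, 1); mapped to a
                concentration k > 2 by an arbitrary illustrative rule.

    The Beta mode is (alpha - 1) / (alpha + beta - 2), so choosing
    alpha = mode * (k - 2) + 1 and beta = (1 - mode) * (k - 2) + 1
    places the mode exactly where the contributor put it.
    """
    k = 2.0 + 40.0 * confidence  # illustrative confidence-to-concentration rule
    alpha = mode * (k - 2.0) + 1.0
    beta = (1.0 - mode) * (k - 2.0) + 1.0
    return alpha, beta

a, b = beta_from_mode(0.3, 0.5)  # mode 0.3, moderate confidence
```

The paper's contribution is precisely a more principled, automated version of this mapping that also lets the contributor appraise how well the encoded distribution captures their uncertainty.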
Recent technological advances in computing capabilities and persistent surveillance systems have led to increased focus on new methods of exploiting geospatial data, bridging traditional photogrammetric techniques and state-of-the-art multiple view geometry methodology. The structure from motion (SfM) problem in Computer Vision addresses scene reconstruction from uncalibrated cameras, and several methods exist to remove the inherent projective ambiguity. However, the reconstruction remains in an arbitrary world coordinate frame without knowledge of its relationship to a fixed earth-based coordinate system. This work presents a novel approach for obtaining geoaccurate image-based 3-dimensional reconstructions in the absence of ground control points by using a SfM framework and the full physical sensor model of the collection system. Absolute position and orientation information provided by the imaging platform can be used to reconstruct the scene in a fixed world coordinate system. Rather than triangulating pixels from multiple image-to-ground functions, each with its own random error, the relative reconstruction is computed via image-based geometry, i.e., geometry derived from image feature correspondences. In other words, the geolocation accuracy is improved using the relative distances provided by the SfM reconstruction. Results from the Exelis Wide-Area Motion Imagery (WAMI) system are provided to discuss conclusions and areas for future work.
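One standard way to relate an arbitrary SfM frame to a fixed world frame, given absolute camera positions from the platform, is a least-squares similarity transform (scale, rotation, translation). The sketch below uses the Umeyama/Procrustes closed form on synthetic camera centres; the paper's pipeline additionally exploits the full physical sensor model:

```python
import numpy as np

def similarity_align(src, dst):
    """Least-squares similarity transform (s, R, t) with dst ~ s*R*src + t,
    via the Umeyama/Procrustes closed-form solution."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    X, Y = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(Y.T @ X / len(src))  # cross-covariance SVD
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:  # guard against a reflection solution
        D[2, 2] = -1
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / X.var(0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Synthetic check: camera centres in the world frame vs. an SfM frame that
# is rotated, scaled, and translated relative to it (illustrative data).
rng = np.random.default_rng(1)
world = rng.uniform(-100, 100, size=(6, 3))
theta = 0.4
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
sfm = (world - world.mean(0)) @ Rz.T / 7.0  # arbitrary reconstruction frame

s, R, t = similarity_align(sfm, world)
recovered = s * sfm @ R.T + t  # SfM points mapped into the world frame
```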
Automated Feature Extraction (AFE) plays a critical role in image understanding. Imagery analysts often extract
features better than AFE algorithms do because they use additional information. The extraction and processing of
this information can be more complex than the original AFE task, which leads to the “complexity trap”. This can
happen, for example, when shadows from buildings guide the extraction of buildings and roads. This work proposes an AFE
algorithm to extract roads and trails by using the GMTI/GPS tracking information and older inaccurate maps of roads
and trails as AFE guides.
CACI’s registration and target/object detection algorithm, called “TWIST” for “TWo-axis Image Sorting Technique,” departs from traditional methods. It does not analyze phase relationships or typical spatial features. Rather, it uniquely interprets image complexity, permitting cross-modality (SAR to Visual, for example) operation, and quickly extracts features which reveal the registration location of the sought target or object area. Registering sensor images to geo-referenced images achieves geo-location. It operates at video-rates with no special hardware. We define the algorithm mathematically, compare it to other registration or target/object recognition methods, and apply it to imagery to demonstrate its accuracy and speed. We also discuss anticipated future enhancements.
When EO/IR imagery is collected from an aerial platform, geo-registration (estimating the geodetic coordinates of pixels within an image) is one of the key algorithms used for image and video exploitation. Unfortunately, the performance of geo-registration algorithms is difficult to evaluate due to variabilities in the quality and type of input data. In addition, previous evaluation approaches require large amounts of human intervention, leading to small amounts of quantitative data. We describe a new methodology and associated software used to evaluate and compare geo-registration algorithms in a more automated fashion. This methodology enables several thousand points to be tested as part of an evaluation. Results are presented on a study between several different geo-registration algorithms, demonstrating the utility of this new evaluation approach. Lessons learned on geo-registration performance are discussed.
Geospatial Information Application Needs and Challenges
Making new connections in existing data is a powerful method to gain understanding of the world. Data fusion is not a
new topic, but new approaches provide opportunities to enhance this ubiquitous process. Interoperability based on open
standards is radically changing the classical domains of data fusion while inventing entirely new ways to discern
relationships in data with little structure. Associations based on locations and times are the most fundamental type.
The Open Geospatial Consortium (OGC) conducted a Fusion Standards study with recommendations implemented in
testbeds. In the context of this study, Data Fusion was defined as: “the act or process of combining or associating data or
information regarding one or more entities considered in an explicit or implicit knowledge framework to improve one’s
capability (or provide a new capability) for detection, identification, or characterization of that entity”.
Three categories were used to organize this study: Observation Fusion, Feature fusion, and Decision fusion. The study
considered classical fusion as exemplified by the JDL and OODA models as well as how fusion is achieved by new
technology such as web-based mash-ups and mobile Internet. The study considers both OGC standards and open
standards from other standards organizations. These technologies and standards aid in bringing structure to unstructured
data as well as enabling a major new thrust in Decision Fusion.
Many information fusion solutions work well in the intended scenarios, but the applications, supporting data, and capabilities change over varying contexts. One example is weather data for electro-optical target trackers, for which standards have evolved over decades. The operating conditions of technology changes, sensor/target variations, and the contextual environment can inhibit performance if not included in the initial systems design. In this paper, we seek to define and categorize different types of contextual information. We describe five contextual information categories that support target tracking: (1) domain knowledge from a user to aid the information fusion process through selection, cueing, and analysis; (2) environment-to-hardware processing for sensor management; (3) known distribution of entities for situation/threat assessment; (4) historical traffic behavior for situation awareness patterns of life (POL); and (5) road information for target tracking and identification. Appropriate characterization and representation of contextual information is needed for future high-level information fusion systems design to take advantage of the large data content available for a priori knowledge in target tracking algorithm construction, implementation, and application.
Three-dimensional reconstruction of objects, particularly buildings, within an aerial scene is still a challenging computer
vision task and an important component of Geospatial Information Systems. In this paper we present a new homography-based
approach for 3D urban reconstruction based on virtual planes. A hybrid sensor consisting of three sensor elements
including camera, inertial (orientation) sensor (IS) and GPS (Global Positioning System) location device mounted on an
airborne platform can be used for wide area scene reconstruction. The heterogeneous data coming from each of these three
sensors are fused using projective transformations or homographies. Due to inaccuracies in the sensor observations, the
estimated homography transforms between inertial and virtual 3D planes have measurement uncertainties. The modeling
of such uncertainties for the virtual plane reconstruction method is described in this paper. A preliminary set of results
using simulation data is used to demonstrate the feasibility of the proposed approach.
Geospatial Data Processing Exploitation and Visualization
The largest challenge with all persistent surveillance systems is that they require a trade between area coverage and ground object resolution. This trade typically results in imagery where objects to be tracked have a small total number of pixels (often less than a few hundred in total). With such low pixel counts, traditional target recognition methods become difficult. For this reason, most persistent surveillance tracking systems are based on detection and tracking of image changes. These change-detection tracking systems, however, struggle to maintain tracks through quick maneuvers, stops, obscurations, and dense traffic. Feature descriptors, including template matching, histogram of oriented gradients (HOG), and local binary patterns (LBP), are evaluated for use in the special case of very low pixel count target detection and track maintenance. These dynamic feature-based detection models are incorporated into a change-detection based tracking system. The resulting composite tracking system is described as applied to EO and MWIR wide area data collected under a variety of conditions. Resulting tracking system improvements and tradeoffs between feature descriptors are presented.
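Of the descriptors mentioned, local binary patterns are perhaps the easiest to sketch, and they remain computable on targets spanning only tens of pixels. A minimal 8-neighbour LBP over a toy patch (the offsets and bit ordering are one common convention, not the paper's specific configuration):

```python
import numpy as np

def lbp(image):
    """8-neighbour local binary pattern codes for every pixel.

    Each pixel gets an 8-bit code: bit k is set when the k-th neighbour
    (fixed clockwise order) is >= the centre pixel. Edge pixels use
    replicated-border padding.
    """
    h, w = image.shape
    padded = np.pad(image, 1, mode='edge')
    codes = np.zeros(image.shape, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        codes |= (neigh >= image).astype(np.uint8) << bit
    return codes

patch = np.array([[1, 2, 3],
                  [4, 5, 6],
                  [7, 8, 9]], dtype=float)
codes = lbp(patch)
```

A histogram of the codes over the target chip then serves as the compact texture descriptor fed to the detector.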
The PICSEL algorithm applies compact and fast-executing algorithms to create a “virtual terrain” in an image. It visually alters image areas so image analysts may intuitively locate, sort, classify, and identify objects, targets, or areas of interest. It also discriminates between areas which may be lossy-compressed or losslessly-compressed, thus improving image compression effectiveness. We first describe the goals, benefits, and challenges of PICSEL’s virtual terrain rendering. Then we apply it to some images to demonstrate and explain its behavior. We also discuss anticipated optimization and normalization techniques, enhancements that will improve the PICSEL technique in the future.
Current video tracking systems often employ a rich set of intensity, edge, texture, shape and object level features combined with descriptors for appearance modeling. This approach increases tracker robustness but is computationally expensive for realtime applications and localization accuracy can be adversely affected by including distracting features in the feature fusion or object classification processes. This paper explores offline feature subset selection using a filter-based evaluation approach for video tracking to reduce the dimensionality of the feature space and to discover relevant representative lower dimensional subspaces for online tracking. We compare the performance of the exhaustive FOCUS algorithm to the sequential heuristic SFFS, SFS and RELIEF feature selection methods. Experiments show that using offline feature selection reduces computational complexity, improves feature fusion and is expected to translate into better online tracking performance. Overall SFFS and SFS perform very well, close to the optimum determined by FOCUS, but RELIEF does not work as well for feature selection in the context of appearance-based object tracking.
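A minimal sequential forward selection loop with a filter criterion can be sketched as follows. The criterion here is a simple marginal correlation ranking on synthetic data, a stand-in for the tracking-specific features and filter criteria evaluated in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
y = rng.normal(size=n)  # synthetic "target" signal
X = np.column_stack([
    y + 0.1 * rng.normal(size=n),   # highly relevant feature
    -y + 0.5 * rng.normal(size=n),  # relevant feature
    rng.normal(size=n),             # irrelevant feature
    rng.normal(size=n),             # irrelevant feature
])

def sfs(X, y, k):
    """Greedy sequential forward selection of k features.

    Candidates are scored with a classifier-free filter criterion:
    absolute Pearson correlation with y.
    """
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k:
        scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in remaining]
        best = remaining[int(np.argmax(scores))]
        selected.append(best)
        remaining.remove(best)
    return selected

subset = sfs(X, y, 2)  # indices of the two most relevant features
```

With a purely marginal criterion like this, SFS reduces to a ranking; the methods compared in the paper differ precisely in how they account for interactions among already-selected features.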
Determining the location and orientation of vehicles in satellite and airborne imagery is a challenging task given the density of cars and other vehicles and complexity of the environment in urban scenes almost anywhere in the world. We have developed a robust and accurate method for detecting vehicles using a template-based directional chamfer matching, combined with vehicle orientation estimation based on a refined segmentation, followed by a Radon transform based profile variance peak analysis approach. The same algorithm was applied to both high resolution satellite imagery and wide area aerial imagery and initial results show robustness to illumination changes and geometric appearance distortions. Nearly 80% of the orientation angle estimates for 1585 vehicles across both satellite and aerial imagery were accurate to within 15° of the ground truth. In the case of satellite imagery alone, nearly 90% of the objects have an estimated error within ±1.0° of the ground truth.
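As a simpler stand-in for the Radon-transform profile-variance step (not the paper's method), the dominant axis of a segmented vehicle mask can be estimated from its second central moments:

```python
import numpy as np

def orientation_deg(mask):
    """Dominant-axis orientation (degrees) of a binary mask from its
    second central moments; 0 degrees corresponds to the x-axis."""
    ys, xs = np.nonzero(mask)
    x, y = xs - xs.mean(), ys - ys.mean()
    mu20, mu02, mu11 = (x * x).mean(), (y * y).mean(), (x * y).mean()
    return np.degrees(0.5 * np.arctan2(2 * mu11, mu20 - mu02))

# A synthetic elongated blob along the x-axis.
mask = np.zeros((21, 21), dtype=bool)
mask[10, 3:18] = True
angle = orientation_deg(mask)
```

Moment-based orientation degrades quickly on noisy segmentations of small objects, which is one motivation for the more robust profile-variance peak analysis the paper uses.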
Geospatial Processing Exploitation and Visualization
Satellite image data fusion is a topic of interest in many areas including environmental monitoring, emergency response, and defense. Typically, no single satellite sensor can provide all of the benefits offered by a combination of different sensors (e.g., high spatial but low spectral resolution vs. low spatial but high spectral resolution, or optical vs. SAR). Given the respective strengths and weaknesses of the different types of image data, it is beneficial to fuse many types of image data to extract as much information as possible.
Our work focuses on the fusion of multi-sensor image data into a unified representation that incorporates the potential strengths of a sensor in order to minimize classification error. Of particular interest is the fusion of optical and synthetic aperture radar (SAR) images into a single, multispectral image of the best possible spatial resolution. We explore various methods to optimally fuse these images and evaluate the quality of the image fusion by using K-means clustering to categorize regions in the fused images and comparing the accuracies of the resulting categorization maps.
Current approaches to satellite observation data storage and distribution implement separate visualization and data access methodologies, which often leads to time-consuming data ordering and coding for applications requiring both visual representation and data handling and modeling capabilities. We describe an approach we implemented for a data-encoded web map service based on storing numerical data within server map tiles and subsequent client-side data manipulation and map color rendering. The approach relies on storing data in the lossless Portable Network Graphics (PNG) image format, which is natively supported by web browsers, allowing on-the-fly browser rendering and modification of the map tiles. The method is easy to implement using existing software libraries and has the advantage of easy client-side map color modifications, as well as spatial subsetting with physical parameter range filtering. This method is demonstrated for the ASTER-GDEM elevation model and selected MODIS data products and represents an alternative to the currently used storage and data access methods. One additional benefit is the provision of multiple levels of averaging, which arises from the need to generate map tiles at varying resolutions for various map magnification levels. We suggest that such a merged data and mapping approach may be a viable alternative to existing static storage and data access methods for a wide array of combined simulation, data access and visualization purposes.
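The core encoding trick can be sketched without a PNG library: quantize each physical value to 16 bits and pack it into two lossless 8-bit image channels, so the tile carries the numbers themselves and the client can decode and re-colour them on the fly. The elevation range below is an assumed parameter:

```python
import numpy as np

# Assumed physical range for the quantization (metres); in practice this
# would be stored as tile metadata so the client can invert the encoding.
vmin, vmax = -500.0, 9000.0

def encode(elev):
    """Quantize elevations to 16 bits and split into two 8-bit channels."""
    q = np.round((elev - vmin) / (vmax - vmin) * 65535).astype(np.uint16)
    r = (q >> 8).astype(np.uint8)    # high byte -> e.g. red channel
    g = (q & 0xFF).astype(np.uint8)  # low byte  -> e.g. green channel
    return r, g

def decode(r, g):
    """Reassemble the 16-bit value and map back to physical units."""
    q = (r.astype(np.uint16) << 8) | g
    return q / 65535.0 * (vmax - vmin) + vmin

elev = np.array([[0.0, 1234.5], [8848.0, -100.0]])
r, g = encode(elev)
restored = decode(r, g)
```

Because PNG compression is lossless, the round trip loses only the quantization error (here under 0.1 m per sample), while the client remains free to apply any colormap or range filter at render time.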
Correlating and fusing video frames from distributed and moving sensors is an important area of video matching. It is especially difficult for frames with objects at long distances that are visible as single pixels, where algorithms cannot exploit the structure of each object. The proposed algorithm correlates partial frames with such small objects using an algebraic structural approach that exploits structural relations between objects, including ratios of areas. The algorithm is fully affine invariant, which includes any rotation, shift, and scaling.
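The area-ratio cue is easy to verify: an affine map scales every area by the same factor |det A|, so ratios of triangle areas over matched points are unchanged. A quick numerical check on illustrative points:

```python
import numpy as np

def tri_area(p, q, r):
    """Area of the triangle p-q-r via the 2-D cross product."""
    u, v = q - p, r - p
    return 0.5 * abs(u[0] * v[1] - u[1] * v[0])

# Four matched "object" locations (single-pixel detections in a frame).
pts = np.array([[0.0, 0.0], [4.0, 1.0], [1.0, 3.0], [5.0, 5.0]])
ratio = tri_area(*pts[:3]) / tri_area(*pts[1:])

# Apply an arbitrary affine transform (rotation + shear + scale + shift);
# every area is multiplied by |det A|, so the ratio is preserved.
A = np.array([[2.0, 0.7], [-0.3, 1.5]])
t = np.array([10.0, -4.0])
warped = pts @ A.T + t
ratio_warped = tri_area(*warped[:3]) / tri_area(*warped[1:])
```

Such ratios give the matcher a signature per point configuration that survives any change of viewpoint well approximated by an affine transform.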
In practical circumstances, a problem that often occurs is to geo-localize an entity from surface-level imagery given wide area overhead information and other a priori information that might be used to relate the two views. Given a finite set of GMTI returns and surface-level imagery of a common region of space, we propose a statistical algorithm for the association of surface-level one-dimensional measurements of the finite set to entities of the shared-dimensional wide area overview. Specifically, the problem of fused tracking without reliable range information from a surface-level view of a subset of entities is solved by the association of projections of 3-dimensional movement and position measurements of the GMTI and surface-level imagery. In this process the position of the surface-level observer is refined. We expand this algorithm to a set of surface-level observers distributed over the region of interest and propose a system of continuous tracking of entities over congested areas. The fusion search algorithm exploits the invariant metric properties of projection in a matched-filter procedure as well as the partial ordering of local apparent depth of objects. We achieve O(N) convergence, thereby making this algorithm practical for large-N searches. The algorithm is demonstrated analytically and by simulation.
KOLAM is an open, cross-platform, interoperable, scalable and extensible framework supporting a novel multiscale spatiotemporal dual-cache data structure for big data visualization and visual analytics. This paper focuses
on the use of KOLAM for target tracking in high-resolution, high throughput wide format video also known as
wide-area motion imagery (WAMI). It was originally developed for the interactive visualization of extremely large
geospatial imagery of high spatial and spectral resolution. KOLAM is platform, operating system and (graphics)
hardware independent, and supports embedded datasets scalable from hundreds of gigabytes to feasibly petabytes
in size on clusters, workstations, desktops and mobile computers. In addition to rapid roam, zoom and hyperjump spatial operations, a large number of simultaneously viewable embedded pyramid layers (also referred to
as multiscale or sparse imagery), interactive colormap and histogram enhancement, spherical projection and
terrain maps are supported. The KOLAM software architecture was extended to support airborne wide-area
motion imagery by organizing spatiotemporal tiles in very large format video frames using a temporal cache of
tiled pyramid cached data structures. The current version supports WAMI animation, fast intelligent inspection,
trajectory visualization and target tracking (digital tagging); the latter by interfacing with external automatic
tracking software. One of the critical needs for working with WAMI is a supervised tracking and visualization
tool that allows analysts to digitally tag multiple targets, quickly review and correct tracking results and apply
geospatial visual analytic tools on the generated trajectories. One-click manual tracking, combined with multiple
automated tracking algorithms, is available to assist the analyst and increase human effectiveness.
Many Image Analysts (IAs) need to represent buildings of interest in imagery in a vector (feature) format, which is called “collecting” the buildings. Collecting is typically done by manually digitizing multiple points around building borders to form outlining polygons, and this can be a time-consuming and fatiguing endeavor. SABOT is a means to “click” once within a building’s image footprint, and automatically obtain the outline. We first describe the amount of imagery where buildings need to be collected, and the need to improve efficiency. Then we review technical challenges and how we addressed them. Next we present results from the current SABOT version. Finally we summarize improvements we anticipate to enhance and optimize SABOT’s performance.
In Persistent Surveillance Systems (PSS), the ability to detect and characterize events geospatially helps analysts take pre-emptive steps to counter an adversary’s actions. An interactive Visual Analytics (VA) model offers a platform for pattern investigation and reasoning to comprehend and/or predict such occurrences. The need to identify and offset these threats requires collecting information from diverse sources, which brings with it increasingly abstract data. These abstract semantic data have a degree of inherent uncertainty and imprecision, and require a method for filtration before being processed further. In this paper, we introduce an approach based on the Vector Space Modeling (VSM) technique for classification of spatiotemporal sequential patterns of group activities. The feature vectors consist of an array of attributes extracted from semantically annotated sensor messages. To facilitate proper similarity matching and detection of time-varying spatiotemporal patterns, a temporal Dynamic Time Warping (DTW) method with a Gaussian Mixture Model (GMM) fitted via Expectation Maximization (EM) is introduced. DTW is intended for detection of event patterns from neighborhood-proximity semantic frames derived from an established ontology. The GMM with EM, in turn, is employed as a Bayesian probabilistic model to estimate the probability of events associated with a detected spatiotemporal pattern. We present a new visual analytic tool for testing and evaluating group activities detected under this scheme. Experimental results demonstrate the effectiveness of the proposed approach for discovery and matching of subsequences within the sequentially generated pattern space of our experiments.
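A minimal dynamic-programming DTW distance, the building block behind the temporal matching step, can be sketched as follows (scalar sequences here; the paper applies it to semantic-frame feature vectors):

```python
import numpy as np

def dtw(a, b):
    """Classic dynamic time warping distance between two 1-D sequences.

    D[i, j] holds the minimum cumulative cost of aligning a[:i] with b[:j],
    allowing each element to match one or more elements of the other
    sequence (insertion/deletion/match moves).
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

d_same = dtw([1, 2, 3, 4], [1, 2, 2, 3, 4])  # a time-warped copy aligns freely
d_diff = dtw([1, 2, 3, 4], [8, 9, 9, 8])     # a dissimilar pattern scores high
```

A time-warped repetition of a pattern aligns at zero cost, which is exactly the tolerance to pacing differences needed when matching group-activity sequences.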