Analysts who use visualization methods for big-data concept exploration increasingly expect to comprehend more distinct relationships and prominent concepts in support of their hypotheses or decisions. To expedite this knowledge discovery process, Vector Space Modeling (VSM) in conjunction with probabilistic analysis enables rapid knowledge-based relationship discovery while allowing for exploration of multi-embedded concepts that are otherwise difficult to perceive. In this paper, we present a technique for intrinsic ontology concept similarity matching based on VSM for exploitation and knowledge discovery from multimodal sensor metadata generated in Persistent Surveillance Systems (PSS). To reduce data dimensionality, Principal Component Analysis (PCA) and Latent Dirichlet Allocation (LDA) are applied to arrive at more abstract concepts. The proposed technique is able to reveal intrinsic concept relationships from multi-dimensional metadata structures. Experimental results demonstrate the effectiveness of this approach for exploiting analytical ontological patterns. We also demonstrate the expediency of this technique for Visual Analytics applications. The results indicate that the newly developed system can significantly enhance situation awareness and expedite actionable decision making.
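The core of VSM-based concept similarity matching can be illustrated with a minimal sketch: concepts extracted from sensor metadata are represented as term-frequency vectors and compared by cosine similarity. The concept names and term counts below are hypothetical, chosen only to show the mechanics, not taken from the paper's dataset.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two term-frequency vectors given as dicts."""
    terms = set(a) | set(b)
    dot = sum(a.get(t, 0) * b.get(t, 0) for t in terms)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical concept vectors built from sensor-metadata term counts
vehicle_stop = {"vehicle": 3, "stop": 2, "checkpoint": 1}
vehicle_idle = {"vehicle": 2, "idle": 2, "checkpoint": 1}
crowd_gather = {"person": 4, "gather": 2, "plaza": 1}

# Related concepts score higher than unrelated ones
print(cosine_similarity(vehicle_stop, vehicle_idle))
print(cosine_similarity(vehicle_stop, crowd_gather))
```

In practice the vectors would first be projected into a lower-dimensional space (via PCA, or a topic distribution from LDA) before matching, so that similarity reflects abstract concepts rather than raw term overlap.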
This paper presents an ongoing effort towards the development of an intelligent Decision-Support System (iDSS) for fusion of information from multiple sources consisting of data from hard (physical sensor) and soft (textual) sources. Primarily, this paper defines a taxonomy of decision support systems for latent semantic data mining from heterogeneous data sources. A Probabilistic Latent Semantic Analysis (PLSA) approach is proposed for searching latent semantic concepts across heterogeneous data sources. An architectural model for generating semantic annotations of multimodal sensors in a modified Transducer Markup Language (TML) is described. A method for fusing TML messages is discussed for alignment and integration of spatiotemporally correlated and associated physical sensory observations. Lastly, experimental results that exploit the fusion of soft/hard sensor sources with the support of iDSS are discussed.
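As a rough illustration of the PLSA approach, the toy implementation below runs the standard EM updates: the E-step computes the topic posterior p(z|d,w) from p(z|d) and p(w|z), and the M-step renormalizes the expected counts. The vocabulary, document-term counts, and topic count are invented for illustration; a real iDSS would operate on annotations derived from TML messages.

```python
import random

def plsa(counts, n_topics=2, iters=50, seed=0):
    """Toy PLSA: EM estimation of p(z|d) and p(w|z) from a doc-term count matrix."""
    rng = random.Random(seed)
    D, W, Z = len(counts), len(counts[0]), n_topics
    # Random initialization, rows normalized to probability distributions
    p_z_d = [[rng.random() for _ in range(Z)] for _ in range(D)]
    p_w_z = [[rng.random() for _ in range(W)] for _ in range(Z)]
    for row in p_z_d + p_w_z:
        s = sum(row); row[:] = [v / s for v in row]
    for _ in range(iters):
        new_zd = [[1e-12] * Z for _ in range(D)]
        new_wz = [[1e-12] * W for _ in range(Z)]
        for d in range(D):
            for w in range(W):
                n = counts[d][w]
                if not n:
                    continue
                # E-step: posterior over latent topics for this (doc, word) pair
                post = [p_z_d[d][z] * p_w_z[z][w] for z in range(Z)]
                s = sum(post) or 1.0
                for z in range(Z):
                    r = n * post[z] / s
                    new_zd[d][z] += r
                    new_wz[z][w] += r
        # M-step: renormalize expected counts back into distributions
        for row in new_zd + new_wz:
            s = sum(row); row[:] = [v / s for v in row]
        p_z_d, p_w_z = new_zd, new_wz
    return p_z_d, p_w_z

vocab = ["vehicle", "checkpoint", "crowd", "protest"]
counts = [[4, 3, 0, 0],   # hard-sensor reports: vehicle activity
          [3, 4, 0, 0],
          [0, 0, 4, 3],   # soft-source reports: crowd activity
          [0, 0, 3, 4]]
p_z_d, p_w_z = plsa(counts)
```

With this block-structured data, the two latent topics separate the vehicle-related documents from the crowd-related ones, which is the sense in which PLSA surfaces latent semantic concepts shared across heterogeneous sources.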
In Persistent Surveillance Systems (PSS), the ability to detect and characterize events geospatially helps analysts take pre-emptive steps to counter an adversary's actions. An interactive Visual Analytics (VA) model offers a platform for pattern investigation and reasoning to comprehend and/or predict such occurrences. The need to identify and offset these threats requires collecting information from diverse sources, which brings with it increasingly abstract data. These abstract semantic data have a degree of inherent uncertainty and imprecision, and require filtration before being processed further. In this paper, we introduce an approach based on the Vector Space Modeling (VSM) technique for classification of spatiotemporal sequential patterns of group activities. The feature vectors consist of an array of attributes extracted from generated semantically annotated sensor messages. To facilitate proper similarity matching and detection of time-varying spatiotemporal patterns, a Dynamic Time Warping (DTW) method combined with a Gaussian Mixture Model (GMM) fitted via Expectation Maximization (EM) is introduced. DTW is intended for detection of event patterns from neighborhood-proximity semantic frames derived from an established ontology. GMM with EM, on the other hand, is employed as a Bayesian probabilistic model to estimate the probability of events associated with a detected spatiotemporal pattern. We also present a new visual analytic tool for testing and evaluating group activities detected under this control scheme. Experimental results demonstrate the effectiveness of the proposed approach for discovery and matching of subsequences within the sequentially generated pattern space of our experiments.
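Of the two components above, the DTW step admits a compact sketch: two event sequences that unfold at different rates are aligned by minimizing cumulative cost over a warping path. The 1-D activity-intensity traces below are hypothetical stand-ins for the attribute sequences extracted from semantic frames.

```python
def dtw(a, b, dist=lambda x, y: abs(x - y)):
    """Dynamic Time Warping distance between two sequences (full cost matrix)."""
    INF = float("inf")
    n, m = len(a), len(b)
    # D[i][j] = minimal cumulative cost aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = dist(a[i - 1], b[j - 1])
            D[i][j] = c + min(D[i - 1][j],      # insertion
                              D[i][j - 1],      # deletion
                              D[i - 1][j - 1])  # match
    return D[n][m]

# Hypothetical activity-intensity traces derived from semantic frames
pattern   = [0, 1, 3, 3, 1, 0]
observed  = [0, 0, 1, 3, 3, 3, 1, 0]   # same shape, different pacing
unrelated = [3, 0, 3, 0, 3, 0]

# The time-stretched trace aligns perfectly; the unrelated one does not
print(dtw(pattern, observed), dtw(pattern, unrelated))
```

A GMM fitted with EM would then sit on top of such distances (or the underlying feature vectors) to assign a probability to each candidate event class for a detected pattern.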
There is an emerging need for fusing hard and soft sensor data in an efficient surveillance system to provide an accurate estimation of situation awareness. These mostly abstract, multi-dimensional, and multi-sensor data pose a great challenge to the user in performing analysis of multi-threaded events efficiently and cohesively. To address this concern, an interactive Visual Analytics (VA) application is developed for rapid assessment and evaluation of different hypotheses based on context-sensitive ontologies spawned from taxonomies describing human/human and human/vehicle/object interactions. A methodology is described for generating relevant ontologies in a Persistent Surveillance System (PSS), and we demonstrate how they can be utilized in the context of PSS to track and identify group activities pertaining to potential threats. The proposed VA system allows for visual analysis of raw data as well as metadata that have spatiotemporal representation and content-based implications. Additionally, a technique for rapid search of tagged information contingent on ranking and confidence is explained for analysis of multi-dimensional data. Lastly, the issue of uncertainty associated with processing and interpretation of heterogeneous data is also addressed.
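The idea of search contingent on ranking and confidence can be sketched simply: low-confidence tags are filtered out before the remainder are ranked by relevance, so uncertain observations do not crowd the analyst's view. The record fields, tag names, and threshold below are hypothetical, not the paper's actual scheme.

```python
# Hypothetical tagged-metadata records: (tag, relevance, confidence)
records = [
    ("vehicle_stop", 0.91, 0.80),
    ("crowd_gather", 0.75, 0.95),
    ("door_open",    0.98, 0.40),   # high relevance but low confidence: dropped
    ("package_drop", 0.60, 0.85),
]

def ranked_search(records, min_confidence=0.5):
    """Drop tags below the confidence threshold, then rank the rest by relevance."""
    kept = [r for r in records if r[2] >= min_confidence]
    return sorted(kept, key=lambda r: r[1], reverse=True)

for tag, rel, conf in ranked_search(records):
    print(f"{tag}: relevance={rel}, confidence={conf}")
```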
In a persistent surveillance system, a huge amount of data is collected continuously and significant details are labeled for future reference. In this paper, a method to summarize video data by identifying events based on this tagged information is explained, leading to a concise description of behavior within a section of extended recordings. Efficient retrieval of various events thus becomes the foundation for determining a pattern in surveillance system observations, in both its extended and fragmented versions. The patterns, consisting of spatiotemporal semantic contents, are extracted and classified by applying video data mining to the generated ontology, and can be matched based on analysts' interests and the rules set forth for decision making. The proposed extraction and classification method uses query by example for retrieving similar events containing relevant features, and is carried out by data aggregation. Since structured data forms the majority of surveillance information, this Visual Analytics model employs a KD-Tree approach to group patterns varying in space and time, making it convenient to identify and match any abnormal burst of patterns detected in a surveillance video. Several experimental videos were presented to viewers to analyze independently, and their assessments were compared with the results obtained in this paper to demonstrate the efficiency and effectiveness of the proposed technique.
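The KD-Tree grouping behind query-by-example retrieval can be sketched as follows: tagged events are indexed by their spatiotemporal coordinates, and the nearest stored event to an example query is found by descending the tree and pruning branches that cannot contain a closer point. The (x, y, t) event coordinates are invented for illustration.

```python
def build_kdtree(points, depth=0):
    """Build a k-d tree over (x, y, t) event points, splitting on axes in turn."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid],
            build_kdtree(points[:mid], depth + 1),
            build_kdtree(points[mid + 1:], depth + 1))

def nearest(node, target, depth=0, best=None):
    """Nearest-neighbour search; best is (point, squared distance)."""
    if node is None:
        return best
    point, left, right = node
    d2 = sum((a - b) ** 2 for a, b in zip(point, target))
    if best is None or d2 < best[1]:
        best = (point, d2)
    axis = depth % len(target)
    diff = target[axis] - point[axis]
    near, far = (left, right) if diff < 0 else (right, left)
    best = nearest(near, target, depth + 1, best)
    if diff ** 2 < best[1]:        # the far side could still hold a closer point
        best = nearest(far, target, depth + 1, best)
    return best

# Hypothetical (x, y, t) coordinates of tagged surveillance events
events = [(2, 3, 1), (5, 4, 2), (9, 6, 3), (4, 7, 8), (8, 1, 9), (7, 2, 5)]
tree = build_kdtree(events)
print(nearest(tree, (9, 2, 7))[0])   # event most similar to the example query
```

Grouping events this way keeps retrieval logarithmic on average, which is what makes it practical to scan long recordings for bursts of similar patterns.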
This survey paper provides a review of tools and concepts of visual analytics, and the challenges faced by researchers developing applications for knowledge discovery. A comparison is made based on analytic features, the ability to categorize data, modeling procedures, visual representation, interoperability, reliability, and portability. Issues related to heterogeneous data, scalability, and multi-dimensionality are also explored. An efficient, intelligent, interactive, and robust visual analytics system allows the discovery of information hidden in a massive and dynamic volume of data, especially in a surveillance system, thus creating effective situation awareness of the environment. While visual analytics is hugely important in knowledge discovery, it is necessary for developers to avoid information overload caused by inappropriate, irrelevant, and uncertain data arising from random or fuzzy sensor inputs, also known as noise. The discovered knowledge is the basis for adaptive situation awareness, as it often provides information beyond the perception of the human cognitive mind. The tools and concepts researched for this article include the human-computer interaction aspect of intelligent, adaptive decision making from multiple information resources. An attempt is made in this paper to combine the strengths of smart search and data analysis with the visual perception and interactive analysis capability of the user.