Previously, we proposed and implemented a Self-structuring Data Learning Algorithm; both the software package and the underlying concept are still evolving. The algorithm was earlier tested with synthetic data and exhibited promising results. The objectives of this paper are to test the algorithm with raw infrared and visual images and to update it as required. We first performed registration transformation and detection on the images with an existing software package. We then registered the detections using the registration transformations from both the infrared and visual images. The registered detections were delivered unmodified to the algorithm for target detection and tracking. The results revealed an inability to handle very noisy infrared image features. To overcome this problem, we developed multiscale grid processing to improve detection classification in the algorithm. The updated algorithm shows much better target detection and tracking with real-world data. Further enhancements, such as incorporating pattern recognition, classification, and fusion, are in progress.
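The abstract does not detail the multiscale grid processing step. A minimal sketch of one plausible reading is given below, in which a detection is kept only if its grid cell is supported by neighboring detections at several cell sizes, so that spatially isolated noise hits fail the test; the function name, cell sizes, and threshold are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def multiscale_confirm(points, cell_sizes, min_count=2):
    """Keep only detections whose grid cell holds at least min_count
    detections at every scale; isolated noise hits fail this test."""
    keep = np.ones(len(points), dtype=bool)
    for size in cell_sizes:
        cells = np.floor(points / size).astype(int)
        # Count how many detections share each occupied cell.
        _, inv, counts = np.unique(cells, axis=0, return_inverse=True,
                                   return_counts=True)
        keep &= counts[inv] >= min_count
    return points[keep]

# A dense cluster of target detections plus two isolated noise hits.
cluster = np.array([[10.1, 10.2], [10.3, 10.1], [10.2, 10.4]])
noise = np.array([[3.0, 40.0], [55.0, 7.0]])
pts = np.vstack([cluster, noise])
confirmed = multiscale_confirm(pts, cell_sizes=[1.0, 4.0])
```

Here the cluster survives both grid scales while each noise point occupies its own cell at every scale and is discarded.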
A two-stage hierarchical unsupervised learning system has been proposed for modeling complex dynamic surveillance
and cyberspace systems. Using a modification of the expectation-maximization learning approach, we introduced a three-layer
approach to learning concepts from input data: features, objects, and situations. Using the Bernoulli model, this
approach models each situation as a collection of objects, and each object as a collection of features. Further complexity
is added through clutter features and clutter objects. During the learning process, only binary feature information
(presence or absence) is provided at the lowest level. The system attempts to simultaneously determine the
probabilities of the situation and of the presence of the corresponding objects from the detected features. The proposed approach
demonstrated robust performance after a short training period. This paper discusses this hierarchical learning system in the
broader context of different feedback mechanisms between layers and highlights challenges on the road to practical
applications.
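The object layer described above can be illustrated with a small sketch: an EM fit of a two-component Bernoulli mixture to binary presence/absence data, which recovers the objects characteristic of each situation. The data, probabilities, and component count below are assumptions chosen for illustration, not the paper's actual model or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical situations over 10 object types: essential objects
# appear with probability 0.9, clutter objects with probability 0.1.
true_p = np.array([[.9, .9, .9, .1, .1, .1, .1, .1, .1, .1],
                   [.1, .1, .1, .9, .9, .9, .1, .1, .1, .1]])
z = rng.integers(0, 2, size=400)                 # latent situation labels
X = (rng.random((400, 10)) < true_p[z]).astype(float)

# EM for a 2-component Bernoulli mixture on the binary data.
K, D = 2, X.shape[1]
pi = np.full(K, 1 / K)                           # mixing weights
p = rng.uniform(0.3, 0.7, size=(K, D))           # Bernoulli parameters
for _ in range(50):
    # E-step: posterior responsibility of each situation per sample.
    log_r = np.log(pi) + X @ np.log(p).T + (1 - X) @ np.log(1 - p).T
    r = np.exp(log_r - log_r.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate mixing weights and presence probabilities.
    pi = r.mean(axis=0)
    p = (r.T @ X) / r.sum(axis=0)[:, None]
    p = np.clip(p, 1e-6, 1 - 1e-6)
```

After convergence, the rows of `p` should separate the essential objects (probabilities near 0.9) from clutter (near 0.1), mirroring the presence/absence learning the abstract describes at a single layer.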
In a previous SPIE paper we described several variations of along-track interferometry (ATI), which can be used for
moving target detection and geo-location in clutter. ATI produces a phase map in range/Doppler coordinates by
combining radar data from several receive channels separated fore-and-aft (along-track) on the sensor platform. In
principle, the radial velocity of a moving target can be estimated from the ATI phase of the pixels in the target signature
footprint. Once the radial velocity is known, the target azimuth follows directly. Unfortunately, the ATI phase is
wrapped, i.e., it repeats in the interval [-π, π], and therefore the mapping from ATI phase to target azimuth is non-unique.
In fact, depending on the radar system parameters, each detected target can map to several equally likely azimuth values.
In the present paper we discuss a signal processing method for resolving the phase wrapping ambiguity, in which the
radar bandwidth is split in software into high and low sub-bands, and an ATI phase map is generated for each. By
subtracting these two phase maps we can generate a coarse, but unambiguous, radial velocity estimate. This coarse
estimate is then combined with the fine, but ambiguous, estimate to pinpoint the target radial velocity, and therefore its
azimuth. Since the coarse estimate is quite sensitive to noise, a rudimentary tracker is used to help smooth out the phase
errors. The method is demonstrated on Gotcha 2006 Challenge data.
Autonomous situational awareness (SA) requires an ability to learn situations. This is mathematically difficult because
every situation contains many objects that are nonessential to it. Moreover, most surrounding objects are random,
unrelated to understanding contexts and situations. In early childhood we learn to ignore these irrelevant objects
effortlessly; usually we do not even notice their existence. Here we consider an agent that can recognize a large number
of objects in the world; in each situation it observes many objects, while only a few of them are relevant to the situation.
Most situations are collections of random objects containing no relevant objects; only a few situations "make sense,"
each containing a few objects that are always present in it. The training data contain sufficient information
to identify these situations. However, to discover this information, all objects in all situations would have to be sorted through to find
regularities. This "sorting out" is computationally complex; its combinatorial complexity exceeds by far the number of all events in the
Universe. The talk relates this combinatorial complexity to Gödelian limitations of logic. We describe dynamic logic
(DL), which quickly learns the essential regularities: relevant, repeatable objects and situations. DL is related to mechanisms
of the brain-mind, and we describe brain-imaging experiments that have demonstrated these relations.
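The scale of this combinatorial complexity is easy to make concrete. With hypothetical numbers (1000 recognizable object types, and situations defined by roughly 30 jointly relevant objects; both figures are illustrative, not from the talk), even a single slice of the exhaustive search is already astronomical.

```python
import math

# Number of 30-object subsets drawn from 1000 object types: this one
# slice of the search space already dwarfs any feasible enumeration.
candidates = math.comb(1000, 30)
print(f"~10^{len(str(candidates)) - 1} candidate subsets")
```

Enumerating such subsets across every observed situation is the "sorting out" whose cost dynamic logic is designed to avoid.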