A numerical solution for the fusion of multiple tracks produced from an arbitrary number of asynchronous measurements has recently been developed. This track fusion solution is a weighted sum of the values associated with the previous fused estimate and the multiple individual estimates. This optimal asynchronous track fusion algorithm (OATFA) has properties identical to those of the Kalman filter. However, the deficiencies of the Kalman filter when tracking maneuvering targets are also exhibited by the OATFA, but can be overcome with the use of the Interacting Multiple Model (IMM) algorithm. Consequently, it should be possible to replace the Kalman filter commonly employed in a conventional IMM algorithm with the OATFA to form the IMM-OATFA. The IMM-OATFA is developed, and simulation results are used to compare its performance with that of a conventional IMM tracker.
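The IMM machinery is independent of the underlying per-model filter, which is why the substitution the abstract proposes is possible. Below is a minimal one-cycle IMM sketch with two constant-velocity Kalman filters that differ only in process noise; all matrices and numbers are illustrative assumptions, not taken from the paper.

```python
# Minimal 1D IMM sketch: two constant-velocity Kalman filters that differ
# only in process noise, mixed via Markov mode probabilities.
# All model matrices and noise levels are illustrative assumptions.
import numpy as np

F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (pos, vel), dt = 1
H = np.array([[1.0, 0.0]])               # position-only measurement
R = np.array([[1.0]])                    # measurement noise
Qs = [0.01 * np.eye(2), 1.0 * np.eye(2)] # low / high process-noise models
P_trans = np.array([[0.95, 0.05], [0.05, 0.95]])  # mode transition matrix

def imm_step(xs, Ps, mu, z):
    """One IMM cycle: mix, filter per model, update mode probabilities."""
    # 1) mixing probabilities
    c = P_trans.T @ mu                        # predicted mode probabilities
    w = (P_trans * mu[:, None]) / c[None, :]  # w[i, j] = P(model i | model j next)
    # 2) mixed initial conditions per model
    xm = [sum(w[i, j] * xs[i] for i in range(2)) for j in range(2)]
    Pm = [sum(w[i, j] * (Ps[i] + np.outer(xs[i] - xm[j], xs[i] - xm[j]))
              for i in range(2)) for j in range(2)]
    # 3) Kalman predict/update per model, with Gaussian measurement likelihoods
    xs_new, Ps_new, lik = [], [], np.zeros(2)
    for j in range(2):
        x, P = F @ xm[j], F @ Pm[j] @ F.T + Qs[j]
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        nu = z - H @ x
        xs_new.append(x + K @ nu)
        Ps_new.append((np.eye(2) - K @ H) @ P)
        lik[j] = np.exp(-0.5 * nu @ np.linalg.inv(S) @ nu) / np.sqrt(
            2 * np.pi * np.linalg.det(S))
    # 4) mode-probability update and combined MMSE estimate
    mu_new = lik * c
    mu_new /= mu_new.sum()
    x_comb = sum(mu_new[j] * xs_new[j] for j in range(2))
    return xs_new, Ps_new, mu_new, x_comb
```

Replacing the per-model Kalman predict/update in step 3 with the OATFA update is, in outline, the substitution the abstract proposes.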
In a typical multi-target tracking problem, the presence of random interference introduces uncertainty into the origin of the measurements. A data association technique is then required to associate each measurement with the appropriate target or to discard it as arising from clutter or false alarms. In this paper, a neural-network-based multi-target tracking algorithm employing a Hopfield network is presented. The energy function of the Hopfield network is derived from a comparison of the constraints of the data association problem with those of the well-known traveling salesman problem. By minimizing the energy function through simulated annealing, the data association probabilities are computed and applied to a Kalman filter tracker for each target. The performance of the proposed algorithm is compared with that of conventional techniques. Simulation results show that the proposed neural network tracker performs satisfactorily compared to the Joint Probabilistic Data Association filter.
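The paper's Hopfield energy function is not reproduced here, but the simulated-annealing idea it relies on can be sketched against the same row/column assignment constraints: anneal over one-to-one measurement-to-track assignments of a cost matrix. This is a swapped-in illustration of annealing, not the Hopfield formulation itself, and the cost values are made up.

```python
# Simulated annealing over measurement-to-track assignments (permutations),
# illustrating the constrained minimization idea; not the paper's Hopfield net.
import math, random

def anneal_assignment(cost, T0=5.0, cooling=0.995, steps=4000, seed=0):
    """Find a low-cost one-to-one measurement/track assignment."""
    rng = random.Random(seed)
    n = len(cost)
    perm = list(range(n))                      # perm[track] = measurement index
    energy = lambda p: sum(cost[t][p[t]] for t in range(n))
    e, T = energy(perm), T0
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)         # propose a swap move
        perm[i], perm[j] = perm[j], perm[i]
        e_new = energy(perm)
        if e_new <= e or rng.random() < math.exp((e - e_new) / T):
            e = e_new                          # accept (sometimes uphill)
        else:
            perm[i], perm[j] = perm[j], perm[i]  # reject: undo the swap
        T *= cooling                           # cool the temperature
    return perm, e
```

At high temperature the chain explores uphill moves; as the temperature decays it becomes a greedy descent that settles into a constraint-satisfying assignment.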
Modern naval battle forces generally include many different platforms, each with its own sensors, radar, ESM, and communications. The sharing of information measured by local sensors via communication links across the battle group should allow for optimal or near-optimal decisions. The survival of the battle group or members of the group depends on the automatic real-time allocation of various resources. A fuzzy logic algorithm has been developed that automatically allocates electronic attack resources in real-time. The particular approach to fuzzy logic that is used is the fuzzy decision tree, a generalization of the standard artificial intelligence technique of decision trees. The controller must be able to make decisions based on rules provided by experts. The fuzzy logic approach allows the direct incorporation of expertise in the form of a fuzzy linguistic description, i.e., a formal representation of the system in terms of fuzzy if-then rules. Genetic algorithm based optimization is conducted to determine the form of the membership functions for the fuzzy root concepts. The isolated-platform and multi-platform resource manager models are discussed, as well as the underlying multi-platform communication model. The resource manager is shown to exhibit excellent performance under many demanding scenarios.
This paper presents an adaptation of a comprehensive sensor management model, initially developed for C3I applications, to a new class of problems: a data-rich, information-poor, sensor-rich environment. The sensor management model described is a hybrid distributed and hierarchical model in which the sensor scheduling function is distributed across system functional or physical boundaries, with global oversight of mission goals and information requests maintained by a centralized Mission Manager. The introduction of a meta-scheduler block is an artifact of the opportunity afforded by the large number of sensors to implement a natural subdivision of a single sensor scheduler into several spatially distributed sensor schedulers. System performance is enhanced by allowing local autonomy at the sensor, by distributing sensor scheduling among subsystems, and through an interrupt-driven process in which local sensor measurements are abstracted to obtain global context. An aircraft health and usage monitoring system, a contemporary example of a sensor-rich environment, is used to illustrate the issues involved in extending sensor management beyond C3I environments.
Connection Management plays a key role in both distributed 'local' network-centric and 'globally' connected info-centric systems. The role of Connection Management is to provide seamless demand-based sharing of information products. For optimum distributed information fusion performance, these systems must minimize communications delays and maximize message throughput, while at the same time taking into account relative sensor-target geometrical constraints and data pedigree. In order to achieve overall distributed 'network' effectiveness, these systems must be adaptive and able to distribute data as needed in real-time. A system concept is described which provides optimum capacity-based information scheduling. A specific example, based on a satellite channel, is used to illustrate simulated performance results and their effects on fusion system performance.
A pressing concern in modern fighter aircraft cockpit design is how to present and reduce the large amounts of information obtained from several sensor observations of the same object. Currently, sensor observations are presented individually as overlays or in different displays, requiring the pilot to control each sensor and integrate the observations. The increased number of sensors and communication networks covering extensive ranges has, however, led to an unacceptable situation that hampers pilots' situation awareness and decision-making. Therefore, some form of automatic information management is necessary to support the pilot. Although considerable technological research has been conducted on automatic sensor fusion and management of multisensor and multisource tracking systems, little is known about how to integrate system capabilities with pilots' decision-making.
Although SAR has demonstrated excellent performance in stationary target identification, SAR resolution suffers in moving target scenarios. High Range Resolution (HRR) radar appears to be an attractive alternative for moving target identification because HRR target signatures can provide target scatterer information with high range resolution. Since many HRR processing steps, such as feature extraction and clutter suppression, are based on underlying modeling assumptions, devising a reliable physics-based HRR model for moving targets has become an increasingly important topic. In this paper, we derive a scattering-based HRR moving target model. However, the general form of the derived model is quite complex, and this complexity makes subsequent analysis difficult. We therefore simplify the complex model to obtain different simplified versions that facilitate the use of the models. Simplification is achieved by instantiating the parameters in the model with real-world radar and target parameters, and then retaining only those terms with dominant values. A series of reliable yet theoretically tractable models is obtained with different degrees of simplification. The contributions of this paper are as follows: (1) two new physics-based HRR moving target models with different degrees of simplification are presented; (2) these models make no assumptions regarding the distribution of the clutter; (3) performance boundaries on the subsequent feature extraction algorithms are derived and delineated.
In this paper, we discuss a novel method, based on singularity representation, for integrating a rotation-invariant visual object extraction and understanding technique. This new compression method applies Arnold's differential mapping singularities theory in the context of 3D object projection onto the 2D image plane. It takes advantage of the fact that object edges can be interpreted in terms of singularities, which can be described by simple polynomials. We discuss the relationship between traditional approaches, including the wavelet transform, and differential mapping singularities theory, or catastrophe theory (CT), in the context of image understanding and rotation-invariant object extraction and compression. CT maps 3D surfaces with exact results to construct an image-compression algorithm based on an expanded set of operations. This set includes shift, scaling, rotation, and homogeneous nonlinear transformations. This approach permits the mathematical description of a full set of singularities that describes edges and other specific points of objects. The edges and specific points are the products of mapping smooth 3D surfaces, and can be described by a simple set of polynomials that are suitable for image compression and ATR.
In this paper, the Bayesian Data Reduction Algorithm (BDRA) is applied to reducing the dimensionality of a data set that contains class-specific features. The BDRA uses the probability of error, conditioned on the training data, and a 'greedy' approach to removing irrelevant features from the data. Here, the BDRA is shown to be an effective means of selecting binary-valued class-specific features, where the remaining non-class-specific features are irrelevant to correct classification. In fact, performance results reveal that when using a small number of training data relative to the feature dimensionality, the BDRA outperforms the appropriate class-specific classifier.
Classification is a central task in pattern recognition. To classify objects into object classes, features are calculated from the objects. Object classes are determined by class boundaries. If it is thus possible to reconstruct objects from their features, variations of features and of class boundaries can be studied explicitly on the objects themselves. In this work the classical steps in pattern recognition from object space to feature space are extended by the concept of physical similarity and by a back-transform from feature space to object space. The analytic assumptions and numeric properties of this back-transformation from feature space into object space are investigated using grey-scale images. Higher moments of these grey-scale images are computed and later used for reconstruction. When a grey-scale image is written as a discrete-valued 2D function, the function lies in the Hilbert space of square-integrable functions. Square-integrable functions can be written as a series of orthonormal functions, where the coefficients of the series are calculated using a scalar product of the image and the orthonormal base functions. Using Legendre polynomials as base functions, the scalar products for the determination of the series' coefficients can be calculated from the moments and the polynomial coefficients alone, thus yielding the back-transformation.
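The moment-based back-transformation described above can be sketched as a separable transform: project the image onto normalized Legendre polynomials (the scalar products approximated here by trapezoidal quadrature on a pixel grid over [-1, 1]) and evaluate the truncated series to reconstruct. The grid size, series order, and test image are illustrative assumptions.

```python
# Sketch of the Legendre back-transformation: image -> moment coefficients
# -> reconstructed image. Quadrature and truncation order are assumptions.
import numpy as np
from numpy.polynomial import legendre as npleg

def legendre_basis(n_pixels, order):
    # normalized Legendre polynomials P_k sampled on a grid over [-1, 1]
    x = np.linspace(-1.0, 1.0, n_pixels)
    P = np.array([npleg.legval(x, np.eye(order + 1)[k]) * np.sqrt((2 * k + 1) / 2.0)
                  for k in range(order + 1)])
    w = np.full(n_pixels, x[1] - x[0])
    w[0] *= 0.5
    w[-1] *= 0.5                      # trapezoidal quadrature weights
    return P, w

def to_moments(img, P, w):
    # c[p, q] ~ integral of img(x, y) P_p(x) P_q(y) dx dy (trapezoid rule)
    return (P * w) @ img @ (P * w).T

def from_moments(c, P):
    # back-transformation: truncated orthonormal series evaluated on the grid
    return P.T @ c @ P
```

For an image that is a low-degree polynomial in each coordinate, the truncated series reconstructs it up to quadrature error.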
A study was conducted with military users expert in vehicle identification to understand the effects of various types of prior knowledge on their performance in identifying vehicles in thermal imagery. Subjects' abilities to identify line drawings and color photographs, as well as their ability to accurately describe vehicle engine and exhaust locations, were compared to their ability to identify the same vehicles' thermal signatures. High correlation was found between identification performance on all types of prior knowledge and thermal imagery identification. The most significant correlation was between knowledge of vehicle engine and exhaust locations and thermal vehicle identification. The authors concluded that familiarity with vehicles in line drawings and photos, without knowledge of thermal signatures and emissive sources such as engines and exhausts, was not sufficient for effective performance in combat vehicle identification with thermal sights.
This paper describes a system which compares aerial photographs of the same terrain taken at different times and tries to recognize straight-edged cultural features that have changed. This work is intended to be highly robust, handling very different lighting conditions, weather, times of year, cameras, and film between the images to be compared. Our system AERICOMP is designed to facilitate battlefield terrain modeling by permitting automatic updates from new images. AERICOMP performs coarse registration, image correction, feature detection, automatic refined registration, feature difference detection and reduction, feature difference presentation and operator acceptance, difference identification, and database update. It emphasizes line segments for comparisons because differences in them are more robust to photometric changes between terrain images. In addition, line segment comparisons require less computation than pixel comparisons and are more compatible with identification tasks. For our intended application of battlefield terrain modeling, detecting changes in man-made structures is of much greater importance than changes in vegetation, and line segments are the key to identifying such structures. We show results involving change analysis between color IR and black/white USGS photographs of the same area six years apart. Even a mostly automatic system benefits from user interaction at key points. AERICOMP exploits user judgments at the beginning and end of its processing to assist in coarse registration and to approve the significance of any differences found. AERICOMP is currently under development at the Naval Postgraduate School, and is supported by the TENCAPS project of the US Navy.
In past presentations, in the book Mathematics of Data Fusion, and in the recent monograph An Introduction to Multisource-Multitarget Statistics and Its Applications, we have shown how Finite-Set Statistics (FISST) provides a unified foundation for the following aspects of multisource-multitarget data fusion: detection, identification, tracking, multi-evidence accrual, sensor management, performance estimation, and decision-making. In this paper we apply FISST to the distributed fusion problem: i.e., fusing the outputs produced by geographically separated data fusion systems. We propose two different approaches: optimal and robust. Optimal distributed fusion is achieved via a direct FISST multitarget generalization of the Chong-Mori-Chang single-target track-fusion technique. Robust distributed fusion is achieved by using FISST to generalize the Julier-Uhlmann Covariance Intersection method to the multitarget case.
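For reference, the single-target Gaussian form of the Covariance Intersection rule that the paper generalizes can be sketched as follows. The trace-minimizing mixing weight is found here by a simple grid search, which is an implementation convenience for illustration, not the paper's method.

```python
# Covariance Intersection: fuse two estimates (xa, Pa) and (xb, Pb) whose
# cross-correlation is unknown, choosing the weight omega that minimizes
# the trace of the fused covariance.
import numpy as np

def covariance_intersection(xa, Pa, xb, Pb, n_grid=101):
    best = None
    for omega in np.linspace(0.0, 1.0, n_grid):
        Pa_inv, Pb_inv = np.linalg.inv(Pa), np.linalg.inv(Pb)
        P = np.linalg.inv(omega * Pa_inv + (1.0 - omega) * Pb_inv)
        x = P @ (omega * Pa_inv @ xa + (1.0 - omega) * Pb_inv @ xb)
        if best is None or np.trace(P) < best[2]:
            best = (x, P, np.trace(P))   # keep the lowest-trace fusion
    return best[0], best[1]
```

Because the fused covariance is consistent for any omega, CI never claims more certainty than either input warrants, which is what makes it "robust" to unknown correlations.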
Many decision-making systems involve image processing that converts input sensor data into output images having desirable features. Typically, the system user selects some processing parameters. The processor together with the input image can then be viewed as a system that maps the processing parameters into output features. However, the most significant output features often are not numerical quantities, but instead are subjective measures of image quality. It can be a difficult task for a user to find the processing parameters that give the 'best' output. We wish to automate this qualitative optimization task. The key to this is incorporating linguistic operating rules and qualitative output parameters into a numerical optimization scheme. In this paper, we use the test system of input parameter selection for 2D Wiener filtering to restore noisy and blurred images. Operating rules represented with random sets are used to generate a nominal input-output system model, which is then used to select initial Wiener filter input parameters. When the nominally optimal Wiener filter is applied to an observed image, the operator's assessment of output image quality is used in an adaptive filtering algorithm to adjust the model and select new input parameters. Tests on several images have confirmed that with a few such iterations, a significant improvement in output quality is achieved.
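A minimal sketch of this kind of tunable test system follows, assuming a single noise-to-signal parameter k as the operator-selected input to a frequency-domain Wiener restoration. The random-set rule model that selects k in the paper is not reproduced; k here is simply the knob such a model would turn.

```python
# Frequency-domain Wiener restoration with one tunable parameter k
# (an assumed stand-in for the operator-selected input parameters).
import numpy as np

def wiener_restore(blurred, psf, k):
    """Apply the Wiener filter H* / (|H|^2 + k) in the frequency domain."""
    H = np.fft.fft2(psf, s=blurred.shape)   # blur transfer function
    G = np.fft.fft2(blurred)                # observed image spectrum
    W = np.conj(H) / (np.abs(H) ** 2 + k)   # Wiener restoration filter
    return np.real(np.fft.ifft2(W * G))
```

Small k deconvolves aggressively (good for low noise); larger k suppresses noise amplification at the cost of residual blur, which is exactly the qualitative trade-off an operator judges.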
During the last two decades I.R. Goodman, H.T. Nguyen, and others have shown that several basic aspects of expert-systems theory (fuzzy logic, Dempster-Shafer evidence theory, and rule-based inference) can be subsumed within a completely probabilistic framework based on random set theory. In addition, it has been shown that this body of research can be rigorously integrated with multisensor, multitarget filtering and estimation using a special case of random set theory called 'finite-set statistics' (FISST). In particular, FISST allows the basis for standard tracking and ID algorithms (nonlinear filtering theory and estimation theory) to be extended to the case when evidence can be highly 'ambiguous' because of extended operating conditions, e.g., when images are corrupted by effects such as dents, mud, etc. This paper extends those results by showing that the technique is relatively insensitive to the uncertainty model used to construct the ambiguous likelihood function.
Tracking of target pose is important for ATR in situations where there is relative motion between the targets and the sensor(s). Taking a Bayesian approach, we formulate the problem of jointly tracking the target positions and orientations as a problem in nonlinear filtering. Combining pertinent ideas from importance sampling and sequential methods, we apply an iterative Monte Carlo approach to solve for MMSE solutions. This tracking algorithm is demonstrated for tracking individual targets in a simulated environment.
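The combination of importance sampling with sequential methods is, in its simplest form, a sequential importance resampling (SIR) step. The scalar-state sketch below, with made-up random-walk dynamics and Gaussian noise levels, illustrates the MMSE estimate as the weighted posterior mean; the paper's joint position/orientation state is higher-dimensional but the cycle is the same.

```python
# One SIR particle-filter cycle for a scalar state: propagate, weight by
# the measurement likelihood, form the MMSE estimate, then resample.
# Dynamics noise q and measurement noise r are illustrative assumptions.
import numpy as np

def sir_step(particles, z, rng, q=0.5, r=1.0):
    particles = particles + rng.normal(0.0, q, particles.size)  # propagate
    w = np.exp(-0.5 * ((z - particles) / r) ** 2)               # likelihoods
    w /= w.sum()                                                # normalize
    mmse = float(w @ particles)                  # MMSE = weighted posterior mean
    idx = rng.choice(particles.size, particles.size, p=w)       # resample
    return particles[idx], mmse
```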
Last year at this conference we described initial results in the practical implementation of a unified, scientific approach to performance measurement for data fusion algorithms. The proposed approach is based on 'finite-set statistics' (FISST), a generalization of conventional statistics to multisource, multitarget problems. Finite-set statistics makes it possible to directly extend Shannon-type information metrics to multisource, multitarget problems in such a way that 'information' can be defined and measured even though any given end-user may have conflicting or even subjective definitions of what 'information' means. In this follow-on paper we describe progress on this work completed over the last year. We describe the performance of additional FISST metrics, including metrics which estimate the amount of information attributable to specific algorithm functions and which include the classification performance of the fusion algorithm. In addition, we consider metrics that can be applied when ground truth is not known, based on comparisons to complete uncertainty.
Last year at this conference we described initial results in the practical implementation of a unified, scientific approach to performance measurement for data fusion algorithms. The proposed approach is based on 'finite-set statistics' (FISST), a generalization of conventional statistics to multisource, multitarget problems. Finite-set statistics makes it possible to directly extend Shannon-type information metrics to multisource, multitarget problems in such a way that 'information' can be defined and measured even though any given end-user may have conflicting or even subjective definitions of what 'information' means. In last year's paper, we described scientific performance evaluation for Level 1 data fusion. In this follow-on paper we describe a generalization of the FISST approach to Level 4 data fusion, specifically sensor management. Our Level 4 MoEs are based on the fact that sensor management is a support function: its purpose is to redirect collection assets in order to improve the input data to, and therefore the output performance of, a Level 1 fusion algorithm. Accordingly, our basic MoE is 'excess information'. By using a sensor scheduler to simulate various sensor management algorithms, we established the effectiveness and intuitiveness of two different sensor management MoEs: the multitarget Kullback-Leibler information metric, and the Hausdorff multitarget miss-distance metric.
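Of the two MoEs, the Hausdorff multitarget miss-distance is the easier to state concretely: it is the classical Hausdorff distance between the ground-truth target set and the estimated target set, sketched here for finite sets of state vectors.

```python
# Hausdorff distance between two finite point sets X (truth) and Y (estimate):
# the worst-case nearest-neighbour distance, taken in both directions.
import numpy as np

def hausdorff(X, Y):
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)  # pairwise dists
    return max(D.min(axis=1).max(),   # farthest truth point from the estimate
               D.min(axis=0).max())   # farthest estimate point from the truth
```

It penalizes both missed targets (truth points far from any estimate) and false tracks (estimates far from any truth), which is what makes it a natural multitarget miss-distance.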
The work presented here is part of a generalization of Bayesian filtering and estimation theory to the problem of multisource, multitarget, multi-evidence unified joint detection, tracking, and target ID developed by Lockheed Martin Tactical Defense Systems and Scientific Systems Co., Inc. Our approach to robust joint target identification and tracking was to take the StaF algorithm and integrate it with a Bayesian nonlinear filter, so that target position, velocity, pose, and type could be determined simultaneously via maximum a posteriori estimation. The basis for the integration between the tracker and classifier is 'finite-set statistics' (FISST). The theoretical development of FISST has been an ongoing Lockheed Martin project since 1994. The specific problem addressed in this paper is that of robust joint target identification and tracking via fusion of high range resolution radar (HRRR) signatures, drawn from the automatic radar target identification (ARTI) database, and radar track data. A major problem in HRRR ATR is the computational load created by having to match observations against target models for every feasible pose. If pose could be estimated efficiently by a filtering algorithm from track data, the ATR search space would be greatly reduced. On the other hand, HRRR ATR algorithms produce useful information about pose which could potentially aid the track-filtering process as well. We have successfully demonstrated the former concept of 'loose integration' of the tracker and classifier for three different types of targets moving on 2D tracks.
Leveraging human fusion can enhance computational moving target recognition algorithms. Cognitive models exploit a human's visual discrimination of object color, size, motion, and orientation. Through the biological magnocellular and parvocellular pathways, information sets are fused into a single perception of an object. For instance, a human tracking a target could take advantage of a moving target relative to stationary objects, or of a large object amongst smaller objects. Cognition, or attention to salient information, can be explicitly represented as a set of information outside a covariance boundary. The paper proposes a cognitive-based attentional model that leverages information asymmetries for moving target recognition.
Adaptive decision fusion represents a unique addition to the ATR community's interest in Wide Area Surveillance. Isolating targets from non-targets before they reach an ATR processing algorithm can significantly reduce subsequent ATR processing burdens. As the volume of imagery increases from diverse new sensor systems, adaptive methods will be required to reduce early-stage false alarms to levels that can be handled by more computationally intensive down-stream processing. Change detection algorithms solve part of the problem by reducing false alarms, but the mapping transformation from image space to change space also induces a new set of false reports. The Adaptive Multi-Image Decision Fusion process will provide a basis for fusing and interpreting these change events and 'bundling' them together in a feature set so that they can be dealt with by a feature-based classifier. The decision-level fusion will use only features provided by the component change detection modules. This acts as the first stage of screening to determine which sensor's and which algorithm's output should be fused, and adaptively determines the corresponding optimal fusion rule. A complete set of fusion rules is examined for the two-detector case on collected SAR imagery, and theoretical considerations are discussed for the three-detector case. Each rule compares the relative performance of each change detection algorithm. The system determines the quality of each report with respect to the level of clutter, and determines the representative fusion rule. Examples are provided.
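For the two-detector case, the 'complete set of fusion rules' is small enough to enumerate: each rule is one of the 16 Boolean functions of the two detector decisions. The sketch below scores every rule with an assumed cost on misses and false alarms; the cost model and the detector probabilities in the test are illustrative, not taken from the paper.

```python
# Exhaustive evaluation of all 16 two-detector fusion rules under an
# assumed miss/false-alarm cost; detectors are assumed independent.
from itertools import product

def best_fusion_rule(pd, pfa, c_miss=1.0, c_fa=1.0):
    """pd, pfa: per-detector detection / false-alarm probabilities."""
    best = None
    for bits in product([0, 1], repeat=4):          # one of the 16 rules
        rule = {(a, b): bits[2 * a + b] for a in (0, 1) for b in (0, 1)}
        # probability the fused decision is 1 given target present / absent
        p_det = sum(rule[(a, b)]
                    * (pd[0] if a else 1 - pd[0]) * (pd[1] if b else 1 - pd[1])
                    for a in (0, 1) for b in (0, 1))
        p_fa = sum(rule[(a, b)]
                   * (pfa[0] if a else 1 - pfa[0]) * (pfa[1] if b else 1 - pfa[1])
                   for a in (0, 1) for b in (0, 1))
        cost = c_miss * (1 - p_det) + c_fa * p_fa
        if best is None or cost < best[0]:
            best = (cost, rule)
    return best[1]
```

With equal costs and reasonably reliable detectors, the familiar OR rule tends to win; shifting the cost weights toward false alarms pushes the optimum toward the AND rule, which is the adaptivity the process above exploits.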
The use of IRST sensors is now generally accepted as essential on board ships, but their angular 2D target reports are sometimes insufficient when Electromagnetic Control plans are in use, for instance those with radar emission switched off in case of an Anti-Radar Missile threat. The principle of an on-board 'full silent search function' (FSSF) consists in fusing reports from two passive sensors on the same base: an IRST sensor working in IR wavelengths and an ESM sensor scanning a larger but different wavelength spectrum. The primary goal is to obtain a full covertness level, while providing, at the earliest possible time, classified and accurate 2D target indications to the Combat System. More precisely, in a naval context of short or very short range air defense, the main idea pushed ahead is to take into account the capabilities of the ESM passive sensor in detecting and classifying emitters, such as airborne radars or on-board missile EM seekers, to efficiently enhance the IRST detection and tracking process. The paper addresses this situation and shows the main contribution of a FSSF compared to a standard search function using an IRST placed in different false-alarm conditions.
An interesting problem arises in multi-platform, multisensor information fusion systems when sources disagree on the classification or identity of an unknown entity. If there is disagreement, the ideal situation is a high level of confidence from a single source with complete disagreement among the remaining sources. This is not always the case, and situations may arise where the winner is only successful by a small margin and there is collective disagreement amongst the losing sources. This paper introduces a new term, disfusion, which is used to characterize disagreement between information sources and can enhance the final conclusion of a fusion system.
Data acquired from multiple sensors provides a means for defining a knowledge base and a current situation scenario. The data is accepted and integrated as intelligence with the use of signal- and symbol-level fusion to translate the raw data into intelligence information that can be used to baseline the knowledge of a control system. This technique is applied to a robotic inspection and dismantlement system. This system is used to dismantle materials in a potentially hazardous environment involving nuclear waste. The objective is to gather information about the environment using a suite of sensors, including range, electro-optical, and proximity sensors, to develop a current situation picture and initiate cues to the control system. By including evidential reasoning in the fusion process, all of the data that is gathered can be used to build the knowledge base, where lower belief factors are attributed to things with significant uncertainty. Logical inferences are also incorporated to develop certainty measures and truth values. The results suggest an approach to multisensor fusion for decision-based control using a knowledge base and current-situation-scenario framework.
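The evidential-reasoning step, in which belief is spread over sets of hypotheses rather than single ones, can be illustrated with Dempster's rule of combination over mass functions; the hypotheses and mass values below are made up for illustration.

```python
# Dempster's rule of combination for two mass functions whose focal
# elements are frozensets of hypotheses.
def dempster_combine(m1, m2):
    combined, conflict = {}, 0.0
    for A, ma in m1.items():
        for B, mb in m2.items():
            C = A & B                 # intersect the focal elements
            if C:
                combined[C] = combined.get(C, 0.0) + ma * mb
            else:
                conflict += ma * mb   # mass assigned to the empty set
    k = 1.0 - conflict                # renormalize away the conflict
    return {A: v / k for A, v in combined.items()}
```

Mass left on a broad set like {safe, hazard} is exactly the "lower belief factor" attributed to uncertain observations; combining a second sensor's evidence sharpens it toward the specific hypotheses.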
This paper presents a new image data fusion scheme combining median filtering with self-organizing feature map neural networks. The scheme consists of three steps: (1) pre-processing of the images, where weighted median filtering removes part of the noise components corrupting the image; (2) pixel clustering for each image using self-organizing feature map neural networks; and (3) fusion of the images obtained in Step (2), which suppresses the residual noise components and thus further improves the image quality. Simulations involving three image sensors confirm that this three-step combination offers impressive effectiveness and performance improvement.
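Step (1) can be sketched as a weighted median filter in which each pixel of a 3x3 window is replicated according to a weight mask before the median is taken; the centre-weighted mask below is an assumption for illustration, not the paper's.

```python
# Weighted median filter: replicate each window sample by its (assumed)
# integer weight, then take the ordinary median of the expanded sample set.
import numpy as np

WEIGHTS = np.array([[1, 1, 1],
                    [1, 3, 1],
                    [1, 1, 1]])      # centre-weighted 3x3 mask (assumption)

def weighted_median_filter(img, weights=WEIGHTS):
    out = np.empty_like(img)
    pad = np.pad(img, 1, mode='edge')            # replicate-border padding
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = pad[i:i + 3, j:j + 3]
            samples = np.repeat(win.ravel(), weights.ravel())
            out[i, j] = np.median(samples)
    return out
```

The centre weight biases the output toward the original pixel, so edges survive better than with a plain median, while isolated impulse noise is still rejected.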
The mammalian visual cortex has inspired unique visual processing algorithms. These systems rely on spiking neural networks that are coupled leaky integrators. It has been proposed that the visual system converts 2D images into 1D signatures. So far, efforts to create digital algorithms have been thwarted by interference amongst objects in the input space. Here the marriage of curvature flow with pulse image processing creates a new system in which the expanding autowaves of individual objects in an input scene do not interfere. Thus, it becomes possible to identify multiple objects in a scene solely through the 1D signature.
The area of test and measurement is changing rapidly because of recent developments in software and hardware. Test and measurement systems are increasingly becoming PC based. Most of these PC-based systems use a graphical programming language to design test and measurement modules called virtual instruments (VIs). These VIs provide visual representations of data or models, and make understanding of abstract concepts and algorithms easier. This allows users to express their ideas in a concise manner. One such virtual instrument package is LabVIEW from National Instruments Corporation of Austin, Texas. This software package is one of the first graphical programming products and is currently used in a number of academic institutions, industries, the Department of Defense, the Department of Energy, and the National Aeronautics and Space Administration for various test, measurement, and control applications. LabVIEW has an extensive built-in VI library that can be used to design and develop solutions for different applications. Besides using the built-in VI modules in LabVIEW, the user can easily design new VI modules. This paper discusses the use of LabVIEW to design and develop digital signal processing VI modules such as Fourier analysis and windowing. Instructors can use these modules to teach some signal processing concepts effectively.
As we have published over the last eight years, when the analog-to-binary mapping of any M training CLASS patterns is not PLI, then a one-layered perceptron (OLP) simply cannot learn this mapping, no matter what learning rule is used, because the solution of the connection matrix does not exist. However, as we derived from this PLI condition, which is the most general separability condition for an OLP, a PCTLP system can still be used to separate these closely related and 'inseparable' patterns according to the targeted outputs Vm. This paper reports the theory and the design of this novel PCTLP system.
Image compression is an important research domain in image processing. Recently, several neural network (NN) based schemes have been developed in this area. In particular, constructive feed-forward neural networks have been applied by many researchers to this problem. The constructive NN-based schemes are promising given their lower training cost, satisfactory performance, and automatic determination of proper network size. In this paper, we first consider an NN-based technique that uses a constructive one-hidden-layer FNN for image compression. In standard NN-based schemes, when a new hidden unit is added to the net the whole net is retrained, while in this scheme the input-side weights are first trained and then all the network output-side weights are adjusted, resulting in considerably less computational effort. Next, two pruning techniques are proposed to remove the unnecessary input-side weights during the network construction, without sacrificing the performance of the network, to yield a smaller and more economical network. To confirm the effectiveness of the proposed techniques, we have applied them to both regression problems and image compression. It has been found that a significant number of weights can be pruned without degrading the network performance.
We present new methods for computing fundamental performance limits for parametric shape estimation in inverse scattering problems, such as passive radar imaging. We evaluate Cramer-Rao lower bounds (CRB) on shape estimation accuracy using the domain derivative technique from nonlinear inverse scattering theory. The CRB provides an unbeatable performance limit for any unbiased estimator and, under fairly mild regularity conditions, is asymptotically achieved by the maximum likelihood estimator (MLE). Furthermore, the resultant CRBs are used to define a global confidence region, centered around the true boundary, in which the boundary estimate lies with a prescribed probability. These global confidence regions conveniently display the uncertainty in various geometric parameters such as shape, size, orientation, and position of the estimated target, and facilitate geometric inferences. Numerical simulations are performed using the layer approach and the Nystrom method for computation of domain derivatives, and using Fourier descriptors for target shape parameterization. This analysis demonstrates the accuracy and generality of the proposed methods.
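The link between a CRB and a confidence region can be illustrated in the simplest possible setting: for n i.i.d. Gaussian samples with known sigma, the Fisher information is n/sigma^2, the CRB is its inverse, and the sample mean attains it, giving the familiar mean +/- 1.96*sqrt(CRB) interval. This toy case is an illustration only; the paper's bounds are computed via domain derivatives for shape parameters.

```python
# Toy CRB-based confidence interval for the location parameter of
# n i.i.d. Gaussian samples with known sigma.
import numpy as np

def crb_interval(samples, sigma):
    n = len(samples)
    crb = sigma ** 2 / n              # CRB = inverse Fisher information
    mean = float(np.mean(samples))    # MLE of the location parameter
    half = 1.96 * np.sqrt(crb)        # 95% half-width from the bound
    return mean - half, mean + half
```

The paper's global confidence regions generalize exactly this construction from a scalar parameter to a parameterized boundary curve.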
A laser triangulation range finder based on a chaotic modulation and detection scheme is presented. An elementary nonlinear electronic oscillator, composed of two operational amplifiers with feedback current from two antiparallel diodes, generates a chaotic signal that is used to produce a chaotic clock modulation with a well-defined broadband spectrum. This chaotic clock modulates a laser beam that is transmitted and then received by collecting optics in a laser triangulation range finder scheme. A band-limited, phase-delay-equalized amplifier sends the received signal to a balanced demodulator that uses the same chaotically generated signal as the 'local oscillator'. A low-pass filter is tuned to achieve a good compromise between noise immunity and the desired response speed. This modulation scheme allows several laser stations to operate in the same working area, avoiding carefully adjusted field-of-view screening and cross-detection false alarms due to interference from other laser stations. The chaotic modulator can be used as an alternative to a microprocessor-based pseudo-random sequence generator when board space or cost is a critical system specification. The laser triangulation range finder has a range of 0.5 m to 2 m using a 3 mW class IIIa visible laser, with a precision of 5 mm.
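The interference-rejection property can be illustrated numerically: each station demodulates with its own chaotic reference, and a different station's chaotic clock averages out in the low-pass stage. A logistic map stands in for the analog diode oscillator here, and the echo gains and noise level are arbitrary assumptions.

```python
import numpy as np

def chaotic_clock(n, x0, r=3.99):
    """Binary chaotic clock from a logistic map (a numerical stand-in
    for the two-op-amp / antiparallel-diode oscillator)."""
    x, out = x0, np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = 1.0 if x > 0.5 else -1.0
    return out

n = 20000
c1 = chaotic_clock(n, 0.123)        # station 1's modulation
c2 = chaotic_clock(n, 0.456)        # station 2's (different seed -> decorrelated)
rng = np.random.default_rng(5)

# Received signal at station 1: its own echo (gain 0.8), another station's
# interference (gain 0.5) and receiver noise
rx = 0.8 * c1 + 0.5 * c2 + 0.3 * rng.normal(size=n)

# Balanced demodulation: multiply by the local chaotic reference,
# then low-pass filter (here simply an average)
own = float(np.mean(rx * c1))       # ~0.8: station 1 recovers its own echo
leak = float(np.mean(c1 * c2))      # ~0: the other station's clock is rejected
```

Because distinct chaotic seeds produce nearly uncorrelated clocks, the cross term vanishes under averaging — the same mechanism that lets pseudo-random spreading codes separate stations.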
The reliable detection of objects of interest on an inhomogeneous background based on image data is a typical detection and recognition problem in many practical applications. In this paper, an algorithm for local object detection is described in the context of change detection based on the difference between two images of the same scene. The proposed detection method, using a multi-scale relevance function, is a model-based approach that takes into account the planar shape model of the objects of interest and the regression model of the intensity function with respect to objects and background. The image relevance function is a local image operator whose local extrema indicate the locations of objects or their salient parts, termed primitive patterns. The image fragment centered at a maximum point of the relevance function represents a region of attention. A structure-adaptive binarization is performed within each region of attention using a variable threshold. Comparative testing of the proposed algorithm against known techniques has shown better performance of the relevance function approach at approximately the same detection delay.
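A crude numerical stand-in for this pipeline — two frames of the same scene, a two-scale smoothed squared difference playing the role of the relevance function, and its local maxima marking regions of attention — might look like the following. The synthetic scene, scales and threshold are illustrative assumptions, not the paper's operator.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

rng = np.random.default_rng(2)

# Two frames of the same scene; the second contains a small bright object
yy, xx = np.mgrid[:64, :64]
img1 = rng.normal(0.0, 0.02, size=(64, 64))
img2 = rng.normal(0.0, 0.02, size=(64, 64))
img2 += np.exp(-((yy - 40) ** 2 + (xx - 20) ** 2) / (2 * 2.0 ** 2))  # object at (40, 20)

# Two-scale "relevance" field: smoothed squared frame difference
diff2 = (img2 - img1) ** 2
rel = gaussian_filter(diff2, 1.0) + gaussian_filter(diff2, 3.0)

# Local maxima above a global threshold -> centers of regions of attention;
# a structure-adaptive binarization would then run inside each region
peaks = (rel == maximum_filter(rel, size=9)) & (rel > 0.1 * rel.max())
locs = np.argwhere(peaks)
```

The multi-scale sum makes the detector respond to salient structure at more than one object size, which is the essential point of the relevance-function idea.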
In this paper, a general problem of the distributed detection of a constant multidimensional signal with unknown parameters in a background of zero-mean Gaussian noise with an unknown, varying covariance matrix is considered. This problem is encountered in many decentralized processing situations involving a large number of sensors, where the noise processes at these sensors have different covariance matrices. We discuss test statistics at the sensors, where hypothesis testing yields a sequence of 1s and 0s, and at the fusion center, where the k-out-of-m decision rule regarding the presence or absence of a signal is used. The test statistics at the sensors are obtained by means of a generalized maximum likelihood ratio test. This test is invariant to intensity changes in the noise background and achieves a fixed probability of false alarm; no learning process is necessary to achieve the constant false alarm rate. Operating in accordance with the local noise situation, the test is adaptive. It is shown that this test is uniformly most powerful invariant (UMPI) and robust against departures from normality in the following sense: it is still UMPI in a broad class of distributions, and the null distribution under any member of the class is the same as that under normality.
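The fusion-center rule is easy to make concrete: with m independent local decisions, each equal to 1 with probability p, the k-out-of-m rule declares a detection when at least k sensors report 1, so the system-level probability is a binomial tail. The local operating point below is an arbitrary example, not a figure from the paper.

```python
from math import comb

def fusion_prob(p, m, k):
    """Probability that at least k of m independent sensors report '1',
    each reporting '1' with probability p (the k-out-of-m rule)."""
    return sum(comb(m, j) * p ** j * (1 - p) ** (m - j) for j in range(k, m + 1))

# Example: 5 sensors, 3-out-of-5 fusion rule, assumed local operating point
pfa_local, pd_local = 0.05, 0.9
pfa_sys = fusion_prob(pfa_local, 5, 3)   # system false-alarm probability
pd_sys = fusion_prob(pd_local, 5, 3)     # system detection probability
```

With these numbers the fused false-alarm rate drops well below the local one while the detection probability rises — the usual motivation for majority-style fusion rules.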
Distributed collaborative visualization systems represent a technology whose time has come. Researchers at the Fraunhofer Center for Research in Computer Graphics have been working in the areas of collaborative environments and high-end visualization systems for several years. The medical application TeleInVivo is an example of a system that marries visualization and collaboration. With TeleInVivo, users can exchange and collaboratively interact with volumetric data sets from geographically distributed locations. Since the examination of many physical phenomena produces data that are naturally volumetric, the visualization frameworks used by TeleInVivo have been extended to non-medical applications. The system can now be made compatible with almost any dataset that can be expressed in terms of magnitudes within a 3D grid. Coupled with advances in telecommunications, telecollaborative visualization is now possible virtually anywhere. Expert data quality assurance and analysis can occur remotely and interactively without having to send all the experts into the field. Building upon this point-to-point concept of collaborative visualization, one can envision a larger pooling of resources to form a broad overview of a region of interest from the contributions of numerous distributed members.
The theoretical grounding of a local-topological method for determining the minimal attractor embedding dimension is proposed. An investigation of digitized cardiosignals based on a nonlinear dynamics approach is presented. Computer confirmation of the theoretical results is performed.
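The paper's local-topological criterion is not reproduced here, but the standard false-nearest-neighbours test illustrates what "minimal embedding dimension" means operationally: increase the delay-embedding dimension until nearest neighbours stop being artifacts of projection. The sketch uses a toy harmonic signal (minimal dimension 2); a digitized cardiosignal would be processed the same way.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Time-delay embedding: rows are (x[i], x[i+tau], ..., x[i+(dim-1)tau])."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

def fnn_fraction(x, dim, tau, rtol=15.0):
    """Fraction of nearest neighbours in dimension `dim` that become
    'false' (far apart) when the embedding is extended to dim + 1."""
    emb1 = delay_embed(x, dim + 1, tau)
    emb = delay_embed(x, dim, tau)[:len(emb1)]
    false = 0
    for i in range(len(emb1)):
        d = np.linalg.norm(emb - emb[i], axis=1)
        d[i] = np.inf
        j = int(np.argmin(d))               # nearest neighbour in dim dimensions
        extra = abs(emb1[i, -1] - emb1[j, -1])
        if extra / d[j] > rtol:             # neighbour separates in dim + 1
            false += 1
    return false / len(emb1)

t = np.linspace(0, 12 * np.pi, 1200)
x = np.sin(t)
tau = 50                                    # ~ a quarter period
f1 = fnn_fraction(x, 1, tau)                # many false neighbours in 1-D
f2 = fnn_fraction(x, 2, tau)                # ~0: dimension 2 suffices for a sine
```

The fraction dropping to near zero at dimension 2 reflects that a sine's attractor (a circle) unfolds fully in the plane.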
Rainfall drop size distribution (DSD) measurements made by single disdrometers at isolated ground sites have traditionally been used to estimate the transformation between weather radar reflectivity Z and rainfall rate R. Despite the immense disparity in sampling geometries, the resulting Z-R relation obtained from these single-point measurements has historically been important in applied radar meteorology. Simultaneous DSD measurements made at several ground sites within a microscale area may be used to improve the estimate of radar reflectivity in the air volume surrounding the disdrometer array. By applying the equations of motion for non-interacting hydrometeors, a volume estimate of Z is obtained from the array of ground-based disdrometers by first calculating a 3D drop size distribution. The 3D-DSD model assumes that only gravity and terminal velocity due to atmospheric drag influence hydrometeor dynamics within the sampling volume. The sampling volume is characterized by wind velocities, composed of vertical and horizontal components, which are input parameters to the 3D-DSD model. Reflectivity data from four consecutive WSR-88D volume scans, acquired during a thunderstorm near Melbourne, FL on June 1, 1997, are compared to data processed using the 3D-DSD model and data from three ground-based disdrometers of a microscale array.
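The reflectivity calculation itself is simple: given a drop size distribution N(D), the radar reflectivity factor is Z = ∫ N(D) D⁶ dD. The sketch below uses an assumed Marshall-Palmer exponential DSD rather than the paper's measured 3D-DSD, so the Z-R relation it produces is the textbook one, not the array estimate.

```python
import numpy as np
from math import gamma

# Assumed Marshall-Palmer exponential DSD: N(D) = N0 * exp(-lam * D),
# with N0 in m^-3 mm^-1, D in mm, and lam = 4.1 * R**-0.21 for R in mm/h
N0 = 8000.0

def reflectivity(R, D=np.linspace(0.005, 10.0, 20000)):
    """Reflectivity factor Z (mm^6 m^-3) via Z = integral of N(D) * D^6 dD."""
    lam = 4.1 * R ** -0.21
    N = N0 * np.exp(-lam * D)
    return float(np.sum(N * D ** 6) * (D[1] - D[0]))   # Riemann sum

R = 10.0                                  # rainfall rate, mm/h
Z = reflectivity(R)
dBZ = 10.0 * np.log10(Z)                  # radar convention: decibels of Z
# Closed form for the exponential DSD: Z = N0 * 6! / lam^7
Z_exact = N0 * gamma(7) / (4.1 * R ** -0.21) ** 7
```

The D⁶ weighting is why reflectivity is dominated by the few largest drops — the root cause of the sampling disparity between a point disdrometer and a radar volume that the paper addresses.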
A conceptual system to produce 3D thermal models of tires for tire inspection and defect characterization is proposed. The system uses registered range and thermal information to build highly detailed 3D models using either a volumetric or a mesh-based approach. To achieve this goal, two narrow bandpass filters are used in conjunction with two IR cameras to obtain the true temperature of the target body. The thermal information is then translated to texture data and mapped as an overlay onto a 3D model. The textures are realized through three-component texture maps that include RGB values to specify the texture coordinates in the plane. The objective is to generate a movie loop depicting a tire endurance test so that an operator may analyze potential tire defects through texture characteristics that appear as a thermal signature changing dynamically with time.
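The two-filter arrangement is presumably a form of two-color (ratio) pyrometry: with two narrow bands, a band-independent emissivity cancels in the radiance ratio, and under the Wien approximation the temperature inverts in closed form. The sketch below is written under those assumptions; the band wavelengths and emissivity are arbitrary.

```python
import numpy as np

C2 = 1.4388e-2                       # second radiation constant, m*K

def wien_radiance(lam, T, eps=1.0):
    """Spectral radiance under the Wien approximation (arbitrary scale)."""
    return eps * lam ** -5 * np.exp(-C2 / (lam * T))

def two_band_temperature(ratio, lam1, lam2):
    """Invert the radiance ratio L(lam1)/L(lam2) for temperature.
    A grey (band-independent) emissivity cancels in the ratio."""
    return C2 * (1.0 / lam1 - 1.0 / lam2) / (5.0 * np.log(lam2 / lam1) - np.log(ratio))

lam1, lam2, T_true = 3.8e-6, 4.8e-6, 360.0     # two assumed narrow IR bands, m
ratio = wien_radiance(lam1, T_true, eps=0.85) / wien_radiance(lam2, T_true, eps=0.85)
T_rec = float(two_band_temperature(ratio, lam1, lam2))  # emissivity drops out
```

The recovered temperature would then be mapped per-pixel to a texture value and draped over the registered 3D tire model.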
In this paper, six trackers are reviewed and their performance is compared. Real radar target data, collected from commercial and military aircraft under various conditions, are used for this study. Since the true target trajectories are unavailable, the prediction RMS error is used as the performance criterion. The six trackers are compared in terms of both tracking performance and computational complexity. The evaluation results show that non-model-based trackers generally outperform model-based trackers. A brief discussion is given.
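The prediction-RMS criterion can be made concrete: because the truth is unavailable, each tracker's one-step-ahead prediction is scored against the next measurement. Below, a simple alpha-beta tracker (an illustrative stand-in, not one of the paper's six) is scored this way on a synthetic constant-velocity track.

```python
import numpy as np

def prediction_rms(predictions, measurements):
    """RMS of the one-step prediction error, scored against the next
    measurement since the true trajectory is unknown."""
    e = np.asarray(predictions) - np.asarray(measurements)
    return float(np.sqrt(np.mean(np.sum(e ** 2, axis=-1))))

rng = np.random.default_rng(3)
truth = np.column_stack([0.1 * np.arange(100), 0.05 * np.arange(100)])
meas = truth + rng.normal(0.0, 0.5, size=truth.shape)   # noisy 2-D plots

# Fixed-gain alpha-beta tracker producing one-step-ahead predictions
alpha, beta, dt = 0.5, 0.1, 1.0
x, v = meas[0].copy(), np.zeros(2)
preds = []
for z in meas[1:]:
    xp = x + v * dt                 # predict one step ahead
    preds.append(xp)
    r = z - xp                      # innovation
    x = xp + alpha * r              # position update
    v = v + (beta / dt) * r         # velocity update
rms = prediction_rms(preds, meas[1:])
```

Note that the prediction RMS contains the measurement noise as a floor, so trackers are compared by how little they add on top of it.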
Previous nonlinear filtering research has shown that by directly estimating the probability density of the target state, weak and closely spaced targets can be tracked without performing data association. Data association imposes a heavy burden, both in design, where complex data management structures are required, and in execution, which often consumes many computer cycles; avoiding it can therefore be advantageous. However, some have suggested that data association is required to estimate and correct the sensor biases that are nearly always present, so that avoiding it is not a practical option. This paper demonstrates that target numbers, target tracks, and sensor biases can all be estimated simultaneously using association-free nonlinear methods, thereby extending the useful range of these methods while preserving their inherent advantages.
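A minimal 1-D illustration of the association-free idea: the target-state density is propagated directly on a grid, and each scan's measurements enter through a detection/clutter mixture likelihood, so no measurement-to-track assignment is ever made. The motion model, mixture weights and clutter model below are illustrative assumptions; the paper's filters additionally estimate target number and sensor biases.

```python
import numpy as np

rng = np.random.default_rng(4)

grid = np.linspace(0.0, 100.0, 1001)        # 1-D state grid, 0.1 spacing
p = np.ones_like(grid) / len(grid)          # flat prior density
sig, v = 1.0, 1.0                           # measurement std, target speed
x_true = 20.0

for _ in range(40):
    x_true += v
    # one target-originated and one clutter measurement per scan, unlabeled
    zs = [x_true + rng.normal(0.0, sig), rng.uniform(0.0, 100.0)]
    # predict: shift the density by the (assumed known) motion model
    p = np.interp(grid - v, grid, p, left=0.0, right=0.0)
    p /= p.sum()
    # update: association-free detection/clutter mixture likelihood
    for z in zs:
        like = 0.7 * np.exp(-0.5 * ((z - grid) / sig) ** 2) + 0.3 / 100.0
        p *= like
        p /= p.sum()

x_map = float(grid[np.argmax(p)])           # the density peak tracks the target
```

Clutter-originated bumps in the density are not reinforced by the motion model from scan to scan, so the posterior concentrates on the true trajectory without any explicit assignment step.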