Wireless sensor networks have become viable solutions for many commercial and military applications. This research
focuses on utilizing the I-TRM to develop an architecture that supports adaptive, self-healing, and self-aware
intelligent wireless sensor networks capable of supporting mobile nodes. Sensor subsystems are crucial in the
development of projects to test complex systems such as the Future Combat System, a multi-layered system consisting
of soldiers and 18 subsystems connected by a network. The proposed architecture utilizes the Sensor Web Enablement
(SWE), a standard for sensor networks being developed by the Open Geospatial Consortium (OGC), and the Integrated
Technical Reference Model (I-TRM), a multi-layered technical reference model consisting of a behavior-centric
technical reference model, information-centric technical reference model, and control technical reference model. The
designed architecture has been implemented on MPR2400CA motes using the nesC programming language. Preliminary
results show that the architecture meets the needs of systems such as the Future Combat System. The architecture supports
standard and tailored sensors as well as mobile and immobile sensor nodes, and is scalable. In addition, functionality was
implemented that produces adaptive, self-healing, and self-aware behavior in the wireless sensor network.
This paper describes an information-centric embedded instrumentation systems architecture (EISA) and, in particular, its technical reference model (TRM) as they relate to the network-centric Test and Training Enabling Architecture (TENA). The embedded instrumentation systems architecture is meant to describe the operational, behavioral, and informational requirements for a general "embedded instrumentation test and evaluation system" encased within an operational weapons system. The weapons system application could be a weapon round, or an entire large platform such as a war-fighting unit, a battle group, or a single craft such as a ship, plane, or tank. TENA and the EISA models have much in common, as will be described; the differences lie in the focus of each model's intended application domain. Both are part of the military support communities, aiding the military in training, testing, evaluation, verification, and validation of weapons systems.
This paper discusses two novel artificial neural network architectures applied to multi-class classification problems of remote-sensing data. These approaches are 1) a spiking-neural-network model for the partitioning of data into clusters, and 2) a neuron model based on complex-valued weights (CVN). In the former model, the learning process is based on the Spike Timing-Dependent Plasticity rule under the Hebbian learning framework. With temporally encoded inputs, the synaptic efficacies associated with the delays between pre- and post-synaptic spikes can store the information of different data clusters. Using an encoding method based on Gaussian receptive fields, the model was applied to the remote-sensing data. The results showed that it could provide more useful information than traditional clustering methods such as K-means. The CVN model has proved to be more powerful than traditional neuron models in solving the XOR problem and image-processing problems. This paper discusses an implementation of the complex-valued neuron in Normalized Radial Basis Function (NRBF) neural networks to improve the NRBF structure. The complex-valued weights are used in the supervised learning part of an NRBF neural network. This classifier was tested with satellite multi-spectral image data, and the results show that this neural network model is more accurate and powerful than the conventional NRBF model.
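The Gaussian-receptive-field encoding mentioned above can be sketched in a few lines of NumPy. The snippet below is an illustrative reconstruction, not the authors' code; the parameter names and values (number of neurons, coding window `t_max`, width factor `beta`) are assumptions. A scalar input is converted into one spike time per encoding neuron, with neurons whose centers lie nearest the input firing earliest:

```python
import numpy as np

def gaussian_receptive_fields(x, x_min, x_max, n_neurons=8, t_max=10.0, beta=1.5):
    """Encode scalar x into spike times via overlapping Gaussian receptive fields.

    Each of n_neurons covers the range [x_min, x_max]; a neuron whose center
    is close to x fires early (time near 0), while distant neurons fire late
    (time near t_max).
    """
    centers = x_min + (np.arange(n_neurons) + 0.5) * (x_max - x_min) / n_neurons
    width = (x_max - x_min) / (beta * n_neurons)              # receptive-field width
    activation = np.exp(-0.5 * ((x - centers) / width) ** 2)  # in (0, 1]
    return t_max * (1.0 - activation)                         # high activation -> early spike

times = gaussian_receptive_fields(0.3, 0.0, 1.0)
```

Spike-time vectors of this form are what an STDP-trained spiking layer would receive, allowing cluster membership to be stored in the delays and synaptic efficacies.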
This paper describes a novel classification technique: an NRBF (Normalized Radial Basis Function) neural network classifier based on spectral clustering methods. The spectral method is used in the unsupervised learning part of the NRBF neural network. Compared with other clustering methods commonly used in NRBF neural networks, such as K-means, the spectral method avoids the local-minima problem, so multiple restarts are not necessary to obtain a good solution. This classifier was tested with satellite multi-spectral image data of New England acquired by the Landsat 7 ETM+ sensor. Classification results show that this new neural network model is more accurate and robust than the conventional RBF model. Furthermore, we analyze how the number of hidden units affects training and testing accuracy. These results suggest that this new model may be an effective method for the classification of multispectral satellite image data.
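The NRBF structure underlying this classifier can be sketched compactly. The toy example below is a hypothetical illustration, not the paper's implementation: hand-placed centers stand in for the ones that spectral clustering would select in the unsupervised stage, and a least-squares fit stands in for the supervised output stage. The key NRBF detail is that each row of hidden activations is normalized to sum to one:

```python
import numpy as np

def nrbf_design_matrix(X, centers, sigma):
    """Normalized RBF hidden layer: Gaussian activations, each row summing to 1."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # squared distances
    phi = np.exp(-d2 / (2 * sigma ** 2))
    return phi / phi.sum(axis=1, keepdims=True)                # normalization step

# toy two-cluster data; in the paper the centers come from spectral clustering
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(3.0, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
centers = np.array([[0.0, 0.0], [3.0, 3.0]])   # assumed, hand-placed centers

Phi = nrbf_design_matrix(X, centers, sigma=1.0)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)     # supervised output weights
pred = (Phi @ w > 0.5).astype(int)
```

Because the activations are normalized, distant points still produce well-conditioned hidden outputs, which is one reason NRBF networks tend to be more robust than plain RBF networks.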
Neural networks are massively parallel arrays of simple processing units that can be used for computationally complicated tasks such as image processing. This paper develops an efficient method for processing remote-sensing satellite data using complex-valued artificial neurons, as an approach to two problems associated with computer vision (region identification and classification) as they are applied to satellite data. Because of the amount of data to be processed and the complexity of the tasks required, problems arise when using ANNs, most notably the very long training time required for large ANNs on conventional computers. These problems effectively prevent an average person from performing their own analysis. The solution presented here applies a recently developed complex-valued neuron (CVN) model to this real-world problem. The model was coded, run, and verified on personal computers. Results show the CVN to be an accurate and computationally efficient model.
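A minimal example of why complex-valued neurons are attractive: with a multi-sector activation applied to the argument of the weighted sum, a single complex-weighted neuron can realize XOR, which no single real-valued perceptron can. The sketch below follows the general multi-valued-neuron idea with hand-picked weights for illustration; it is not the paper's trained model:

```python
import numpy as np

def universal_binary_neuron(x, w):
    """Single neuron with complex weights and a 4-sector activation.

    The complex plane is split into four 90-degree sectors, and the output
    alternates sign between adjacent sectors; this alternation is what lets
    one complex-valued neuron separate XOR.
    """
    z = np.dot(w, x)
    sector = int(np.angle(z) % (2 * np.pi) // (np.pi / 2))
    return 1 if sector % 2 else -1

# bipolar encoding: bit 0 -> +1, bit 1 -> -1; weights hand-picked (assumption)
w = np.array([1.0, 1.0j])
truth_table = {(a, b): universal_binary_neuron(np.array([1 - 2 * a, 1 - 2 * b]), w)
               for a in (0, 1) for b in (0, 1)}
# output is +1 exactly when the input bits differ, i.e. XOR
```

The same capacity advantage is what the paper exploits at scale for region identification and classification of satellite imagery.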
This paper establishes the equivalence of the phase-only filter and a complex-valued neural network, and then shows how neural network learning can be utilized to design completely novel phase-only-filter-based systems. By incorporating neural-network-based learning, the pattern recognition capability of the phase-only type of filter under various transformations and distortions can be enhanced.
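The phase-only filter at the heart of this equivalence is simple to express: it keeps only the phase of the reference spectrum, conj(F)/|F|. The sketch below is illustrative and does not reproduce the paper's neural-network design procedure; it builds a POF from a reference patch and correlates it with a scene containing that patch:

```python
import numpy as np

def phase_only_correlate(scene, reference):
    """Correlate a scene with a phase-only filter built from the reference.

    The POF discards the magnitude of the reference spectrum, which yields
    much sharper correlation peaks than a classical matched filter.
    """
    F = np.fft.fft2(reference, s=scene.shape)         # zero-padded reference spectrum
    S = np.fft.fft2(scene)
    pof = np.conj(F) / np.maximum(np.abs(F), 1e-12)   # phase-only filter
    return np.abs(np.fft.ifft2(S * pof))

# embed a small random pattern in a larger scene and locate it
rng = np.random.default_rng(1)
ref = rng.random((8, 8))
scene = np.zeros((64, 64))
scene[20:28, 30:38] = ref
corr = phase_only_correlate(scene, ref)
peak = np.unravel_index(corr.argmax(), corr.shape)    # location of the pattern
```

Viewing each frequency-plane multiplication as a complex weight is what allows the filter to be recast as a complex-valued neural network and refined by learning.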
Artificial Neural Networks (ANNs) are usually designed around vector-matrix multipliers, where the inputs to the neurons are represented by the vectors and the interconnection weights are represented by the matrix. Optics, with its interference-less free-space communication capabilities, is therefore an efficient and natural way to implement ANNs; however, it is not without practical problems. In electro-optic ANNs, errors due to nonlinear or limited-accuracy components can exist in the input devices, the interconnection weight matrix, or the output detectors. This report addresses some of those errors in terms of a specific implementation and across several ANN architectures. In the electro-optic layers of that implementation, light-emitting diodes (LEDs) provide the input, liquid crystal spatial light modulators (SLMs) serve as the interconnection weight matrices, and photodiode detectors act as the nonlinear thresholding elements. Specific hardware imperfections - nonuniform LED illumination, optical misalignment and crosstalk within the SLM, thermal drift in the SLM, and noise and linearity problems in the photodiode detector circuits - are analyzed and experimentally documented. The impact of these errors on the performance of an ANN depends on the ANN architecture. These error sources, as they affect the design of this or any other electro-optic ANN, are then discussed and evaluated for several representative ANN architectures. Many decisions must be made when designing a practical implementation of an electro-optic ANN; this work provides some basis for how such decisions may be made.
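Error sources like these can be explored in simulation before committing to hardware. The toy model below is an assumed error model, not the report's measured data: it injects a per-input gain vector (standing in for nonuniform LED illumination) and multiplicative weight noise (standing in for SLM imperfections) into a single sigmoid layer, so the output deviation from the ideal layer can be measured:

```python
import numpy as np

def noisy_forward(x, W, led_gain, weight_noise, rng):
    """One electro-optic layer with two simulated error sources.

    led_gain scales each input channel (nonuniform LED illumination);
    weight_noise is the relative std of multiplicative SLM weight error.
    """
    W_eff = W * (1.0 + weight_noise * rng.standard_normal(W.shape))
    return 1.0 / (1.0 + np.exp(-(W_eff @ (led_gain * x))))   # sigmoid threshold

rng = np.random.default_rng(3)
x = rng.random(16)
W = rng.standard_normal((4, 16))
clean = noisy_forward(x, W, np.ones(16), 0.0, np.random.default_rng(0))
noisy = noisy_forward(x, W, np.ones(16), 0.1, np.random.default_rng(0))
```

Sweeping the noise level and gain profile in such a model gives a rough, architecture-by-architecture sense of which error sources dominate, mirroring the report's hardware analysis.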
A single-step implementation of the joint Fourier transform correlation without any square-law detector is proposed. Computer simulation results of the proposed correlator are included to verify its validity.
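For context, the conventional two-step joint transform correlator that the proposed scheme streamlines can be sketched as follows. This is an illustrative NumPy simulation of the baseline only; the paper's single-step, detector-free variant is not reproduced here. Reference and target sit side by side in the input plane, a square-law detector records the joint power spectrum, and a second Fourier transform produces correlation peaks at plus and minus their separation:

```python
import numpy as np

def joint_power_spectrum_correlation(reference, target, size=64):
    """Classical two-step JTC: joint power spectrum, then a second FT."""
    h, w = reference.shape
    plane = np.zeros((size, size))
    r0 = size // 2 - h // 2
    plane[r0:r0 + h, 8:8 + w] = reference               # reference patch (left)
    plane[r0:r0 + h, size - 8 - w:size - 8] = target    # target patch (right)
    jps = np.abs(np.fft.fft2(plane)) ** 2               # square-law detection
    return np.abs(np.fft.fft2(jps))                     # correlation plane

rng = np.random.default_rng(2)
ref = rng.random((8, 8))
corr = joint_power_spectrum_correlation(ref, ref)
# cross-correlation peaks appear at +/- the 40-pixel patch separation
```

Eliminating the intermediate square-law detection step, as the paper proposes, removes one optical-electronic conversion from this pipeline.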