An important difference between biological vision systems and their electronic counterparts is the large number of feedback signals controlling each aspect of the image collection process. For every forward path of information in the brain, from sensor to comprehension, there appear to be several neural bundles that send information back to the sensor to modify the way the information is collected. In this paper we examine the role of such feedback signals and suggest algorithms for intelligent processing of images directly on the focal plane using feedback. We first consider what form these signals might take and how they can be used to implement functions common to conventional image processing, with the objective of moving the computation out of the digital domain and placing much of it on the focal plane, or in analog processing close to the focal plane. While this work falls under the general heading of artificial neural networks, it goes beyond the static processing of signals suggested by the McCulloch and Pitts model of the neuron and the Laplacian image processing suggested by Carver Mead by including the dynamics of temporal encoding in the analysis process.
Optical aberrations are characterized by orthogonal basis functions composed of discretized Zernike polynomials. The coefficient associated with each Zernike polynomial can be measured using a phase diversity wavefront sensing technique. Nonlinear optimization techniques are traditionally used to calculate the Zernike coefficients in a serial manner. Although this traditional method is attractive, calculating several Zernike coefficients for a given system is a computationally formidable task, so the method is not applicable in a real-time image reconstruction scheme. In this paper we first show that each Zernike coefficient can be calculated independently of the others in a parallel fashion. Our method uses nonlinear optimization of a single variable only, based on a modified Gonsalves error metric involving only a single unknown aberration coefficient. Next, we describe an implementation of the algorithm on the IBM SP2 parallel computer. We used the PVM software to parallelize the computational tasks across the processors in a "master/slave" fashion, and we show that the computation can be performed efficiently using this strategy.
Key Words: optical aberration; Zernike polynomials; Zernike coefficients; phase diversity; parallel computation
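The parallel strategy above can be sketched in a few lines: because each Zernike coefficient is recovered by an independent one-variable nonlinear optimization, the per-coefficient tasks can be scattered master/slave style. In this illustrative sketch, PVM on the IBM SP2 is stood in for by Python's multiprocessing, and the quadratic `metric` is a hypothetical stand-in for the modified Gonsalves error metric (which would really be computed from focused/defocused image pairs); the mode numbers and coefficient values are invented for the example.

```python
# Sketch: one independent 1-D optimization per Zernike coefficient,
# farmed out master/slave style (multiprocessing stands in for PVM).
from multiprocessing import Pool
from scipy.optimize import minimize_scalar

TRUE_COEFFS = {4: 0.30, 5: -0.12, 6: 0.07}   # hypothetical aberration (waves)

def metric(c, mode):
    # Hypothetical stand-in for the modified Gonsalves error metric:
    # a single-variable function with its minimum at the true coefficient.
    return (c - TRUE_COEFFS[mode]) ** 2

def solve_one(mode):
    # "Slave" task: nonlinear optimization over a single unknown coefficient.
    res = minimize_scalar(metric, args=(mode,), bounds=(-1.0, 1.0),
                          method="bounded")
    return mode, res.x

if __name__ == "__main__":
    with Pool() as pool:                      # "master" scatters the modes
        results = dict(pool.map(solve_one, sorted(TRUE_COEFFS)))
    for mode, c in sorted(results.items()):
        print(f"Z{mode}: {c:+.4f}")
```

The key point mirrored here is that no task depends on another's result, which is what makes the master/slave decomposition efficient.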
Unlike biological vision, most techniques for computer image processing are not robust over large samples of imagery. Natural systems seem unaffected by the variations in local illumination and texture that interfere with conventional analysis. While change detection algorithms have been partially successful, many important tasks, such as extraction of roads and communication lines, remain unsolved. The solution to these problems may lie in examining the architectures and algorithms used by biological imaging systems. Pulsed oscillatory neural network designs, based on biomimetics, seem to solve some of these problems. Pulsed oscillatory neural networks are examined for application to image analysis and segmentation of multispectral imagery from the Satellite Pour l'Observation de la Terre. Using biological systems as a model for image analysis of complex data, a pulse-coupled network using an integrate-and-fire mechanism is developed. This architecture, based on layers of pulse-coupled neurons, is tested against common image segmentation problems. Using a reset activation pulse similar to that generated by saccadic motor commands, an algorithm is developed which demonstrates that biological vision could be based on adaptive histogram techniques. This architecture is demonstrated to be both biologically plausible and more effective than conventional techniques. Using pulse time-of-arrival as the information carrier, the image is reduced to a time signal, a temporal encoding of imagery, which allows intelligent filtering based on expectation. This technique is uniquely suited to multispectral/multisensor imagery and other sensor fusion problems.
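The temporal-encoding idea can be illustrated with a minimal sketch, assuming one leak-free integrate-and-fire neuron per pixel: each neuron's potential charges at a rate proportional to pixel intensity, and the time step at which it first crosses a fixed threshold (its pulse time-of-arrival) carries the information. Pixels firing on the same step form a segment, giving an adaptive-histogram-like labeling. The threshold, step count, and test image below are all illustrative, not values from the paper.

```python
import numpy as np

def fire_times(image, threshold=255.0, max_steps=256):
    """Return, per pixel, the first step at which potential >= threshold."""
    img = image.astype(float)
    potential = np.zeros_like(img)
    times = np.full(img.shape, max_steps, dtype=int)
    for t in range(1, max_steps + 1):
        potential += img                       # charge at intensity rate
        newly = (potential >= threshold) & (times == max_steps)
        times[newly] = t                       # record pulse time-of-arrival
    return times

def segment(image):
    """Label pixels by shared firing time: equal-time pixels = one segment."""
    times = fire_times(image)
    _, labels = np.unique(times, return_inverse=True)
    return labels.reshape(image.shape)

scene = np.array([[200, 200, 50],
                  [200, 100, 50],
                  [100, 100, 50]])
print(segment(scene))
```

Brighter pixels fire earlier, so the image is reduced to a time signal in which segment identity is carried entirely by when, not where, a pulse arrives.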
This paper describes a method to compute the optical transfer function, in terms of Zernike polynomials, one coefficient at a time using a neural network and gradient descent. Neural networks, a class of self-trained nonlinear transfer functions, are shown to be appropriate for this problem because no closed-form solution exists. A neural network provides an approximation to the optical transfer function computed from examples using gradient descent methods. Orthogonality of the Zernike polynomials allows image wavefront aberrations to be described as an orthonormal set of coefficients. Atmospheric and system distortion of astronomical observations can introduce an unknown phase error into the observed image; this phase distortion can be described by a set of Zernike coefficients. This orthogonality is shown to contribute to the simplicity of the neural network method of computation. Two paradigms are used to determine the coefficient description of the wavefront error provided to a compensation system. The first uses a phase-diverse image as input to a feedforward backpropagation network to generate a single coefficient. The second requires the transfer function to be computed in the Fourier domain. Architecture requirements are investigated and reported, together with saliency determination of each input to the network to optimize computation and system requirements.
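The role orthogonality plays in keeping the one-coefficient-at-a-time computation simple can be seen in a short numerical sketch: against an orthonormal basis, each wavefront coefficient is just an inner product with its basis function and can be recovered independently of the rest. The basis here is built by numerically orthonormalizing a few low-order Zernike-like terms (piston, tilts, defocus) over a sampled unit disk; the grid size and coefficient values are illustrative.

```python
import numpy as np

n = 256
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
disk = (x**2 + y**2) <= 1.0

def inner(a, b):
    return (a * b)[disk].mean()             # discrete inner product on the disk

# Raw low-order terms: piston, x-tilt, y-tilt, defocus.
raw = [np.ones_like(x), x, y, 2 * (x**2 + y**2) - 1]

basis = []                                  # Gram-Schmidt orthonormalization
for f in raw:
    for b in basis:
        f = f - inner(f, b) * b
    basis.append(f / np.sqrt(inner(f, f)))

true_c = [0.0, 0.5, -0.2, 0.8]              # hypothetical aberration weights
wavefront = sum(c * b for c, b in zip(true_c, basis))

# Each coefficient falls out of a single inner product, independent of the rest.
recovered = [inner(wavefront, b) for b in basis]
print(np.round(recovered, 6))
```

This independence is what lets a network (or any estimator) target one coefficient at a time without interference from the others.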
In this paper we present (1) the optical system design and operational overview, (2) laboratory evaluation spectra, and (3) a sample of the first observational data taken with HYSAT. The hyperspectral sensor systems being developed, and whose utility is being pioneered by the Phillips Laboratory, are applicable to several important SOI (space object identification), military, and civil applications, including (1) spectral signature simulations, satellite model validation, and satellite database observations and (2) simultaneous spatial/spectral observations of booster plumes for strategic and surrogate tactical missile signature identification. The sensor system is also applicable to a wide range of other applications, including astronomy, camouflage discrimination, smoke chemical analysis, environmental/agricultural resource sensing, terrain analysis, and ground surveillance. Only SOI applications will be discussed here.
The main thrust of this paper is to encourage the use of neural networks to process raw data for subsequent classification. This article addresses neural network techniques for processing raw pixel information. For this paper the definition of neural networks includes conventional artificial neural networks, such as multilayer perceptrons, as well as biologically inspired processing techniques. Previously, we have successfully used the biologically inspired Gabor transform to process raw pixel information and segment images. In this paper we extend those ideas to both segment and track objects in multiframe sequences. It is also desirable for the neural network processing the data to learn features for subsequent recognition. A common first step in processing raw data is to transform the data and use the transform coefficients as features for recognition. For example, handwritten English characters become linearly separable in the feature space of the low-frequency Fourier coefficients. Much of human visual perception can be modelled by assuming low-frequency Fourier coefficients as the feature space used by the human visual system. The optimum linear transform, with respect to reconstruction, is the Karhunen-Loeve transform (KLT). It has been shown that some neural network architectures can compute approximations to the KLT. The KLT coefficients can be used for recognition as well as for compression. We tested the use of the KLT on the problem of interfacing a nonverbal patient to a computer. The KLT uses an optimal basis set for object reconstruction; for object recognition, however, it may not be optimal.
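The KLT mentioned above can be sketched directly as an eigendecomposition of the sample covariance, with the leading coefficients serving as features for compression or recognition. The random data below (structure concentrated in two latent directions plus small noise) is purely illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# 200 samples of 16-dim "pixel" vectors with structure in 2 latent directions.
latent = rng.normal(size=(200, 2)) * [5.0, 2.0]
mixing = rng.normal(size=(2, 16))
data = latent @ mixing + 0.1 * rng.normal(size=(200, 16))

mean = data.mean(axis=0)
cov = np.cov(data - mean, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]
klt_basis = eigvecs[:, order]                   # columns = KLT basis vectors

k = 2                                           # keep the top-k coefficients
coeffs = (data - mean) @ klt_basis[:, :k]       # features for recognition
recon = coeffs @ klt_basis[:, :k].T + mean      # optimal rank-k reconstruction

err = np.mean((data - recon) ** 2)
print(f"rank-{k} KLT reconstruction MSE: {err:.4f}")
```

The rank-k reconstruction is optimal in the mean-square sense, which is exactly the caveat raised in the abstract: an optimal basis for reconstruction need not be optimal for recognition.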
This paper describes the NeuralGraphics software environment used to run interactive neural network training experiments. The NeuralGraphics environment is a collection of software tools, graphical displays, and demonstrations that allow users to easily adapt many of the current neural network paradigms to their particular classification problem. The paper discusses the paradigms implemented in the NeuralGraphics environment as well as the data files required to train and test the learning capability of selected neural networks.
Spectral analysis, involving the determination of the atomic and molecular species present in a spectrum of multi-spectral data, is a very time-consuming task, especially considering that there are typically thousands of spectra collected during each experiment. Due to the overwhelming amount of available spectral data and the time required to analyze these data, a robust automatic method for doing at least some preliminary spectral analysis is needed. This research focused on the development of a supervised artificial neural network with error correction learning, specifically a three-layer feed-forward backpropagation perceptron. The objective was to develop a neural network which would do the preliminary spectral analysis and save the analysts from the task of reviewing thousands of spectral frames. The input to the network is raw spectral data, with the output consisting of the classification of both atomic and molecular species in the source.
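A three-layer feed-forward backpropagation perceptron of the kind described can be sketched minimally as follows. The synthetic "spectra" here place an emission peak at one of two bin positions, standing in for two species classes; the layer sizes, learning rate, and data are illustrative, not the configuration used in this research.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_spectrum(peak, n_bins=16):
    s = 0.05 * rng.random(n_bins)               # low-level background
    s[peak] += 1.0                              # emission line for one "species"
    return s

X = np.array([make_spectrum(p) for p in [3, 11] * 100])
T = np.array([[1, 0], [0, 1]] * 100, dtype=float)   # one-hot class targets

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(scale=0.5, size=(16, 8))        # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(8, 2))         # hidden -> output weights
lr = 0.5

losses = []
for epoch in range(300):                        # plain batch backpropagation
    H = sigmoid(X @ W1)                         # hidden activations
    Y = sigmoid(H @ W2)                         # output activations
    losses.append(np.mean((Y - T) ** 2))
    dY = (Y - T) * Y * (1 - Y)                  # output delta (squared error)
    dH = (dY @ W2.T) * H * (1 - H)              # hidden delta, backpropagated
    W2 -= lr * H.T @ dY / len(X)
    W1 -= lr * X.T @ dH / len(X)

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Raw spectral bins go in, class activations come out, matching the raw-data-in, species-classification-out structure described in the abstract.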
Modeling of artificial neural networks is shown to depend on the programming decisions made in constructing the algorithms in software. The derivation of a common neural network training rule is shown, including the effect of programming constraints. A method for constructing large-scale neural network models is presented which allows for efficient use of memory hardware and graphics capabilities. Software engineering techniques are discussed in terms of design methodologies. Application of these techniques is considered for large-scale problems, including neural network segmentation of digital imagery for target identification.
This paper will review recent advances in the application of artificial neural network technology to problems in automatic target recognition. The application of feedforward networks for segmentation, feature extraction, and classification of targets in Forward Looking Infrared (FLIR) and laser radar range scenes will be presented. Biologically inspired Gabor functions will be shown to be a viable alternative to heuristic image processing techniques for segmentation. The use of local transforms, such as the Gabor transform, fed into a feedforward network is proposed as an architecture for neural-based segmentation. Techniques for classification of segmented blobs will be reviewed, along with neural network procedures for determining relevant features. A brief review of previous work comparing neural network based classifiers to conventional Bayesian and K-nearest neighbor techniques will be presented. Results from testing several alternative learning algorithms for these neural network classifiers are presented. A technique for fusing information from multiple sensors using neural networks is presented and conclusions are made.
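The Gabor-based segmentation front end referred to above can be sketched minimally: build a Gabor kernel (a sinusoid windowed by a Gaussian), convolve it with an image, and use the response energy as a per-pixel feature for a downstream network. All parameter values and the striped test image are illustrative.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size=15, sigma=3.0, wavelength=6.0, theta=0.0):
    """Gaussian-windowed cosine carrier oriented at angle theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate to orientation theta
    gauss = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return gauss * np.cos(2 * np.pi * xr / wavelength)

# Vertical stripes whose period matches the kernel wavelength: strong
# response for the matched orientation, weak for the orthogonal one.
img = np.cos(2 * np.pi * np.arange(64) / 6.0)[None, :].repeat(64, axis=0)

resp0 = convolve2d(img, gabor_kernel(theta=0.0), mode="same")
resp90 = convolve2d(img, gabor_kernel(theta=np.pi / 2), mode="same")
print(np.abs(resp0).mean(), np.abs(resp90).mean())
```

Banks of such kernels at several orientations and scales yield the local-transform coefficients that the proposed architecture feeds into a feedforward network.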