Adaptive Computing, as distinguished from Classical Computing, is an emerging field that represents the culmination of more than 40 years of work in various scientific and technological areas, including cybernetics, neural networks, pattern recognition, learning machines, self-reproducing automata, genetic algorithms, fuzzy logic, probabilistic logic, chaos, electronics, optics, and quantum devices. This volume of "Critical Reviews on Adaptive Computing: Mathematics, Electronics, and Optics" is intended as a synergistic approach to this emerging field. Many researchers in these areas are producing important results; however, we have not seen a general effort to summarize and synthesize those results in theory as well as in implementation. In order to reach a higher level of synergism, we propose Adaptive Computing as the field that comprises the computational paradigms mentioned above and their various realizations. The field should include both the Theory (or Mathematics) and the Implementation. Our emphasis is on the interplay of Theory and Implementation. This interplay, itself an adaptive process, is the only "holistic" way to advance our understanding and realization of brain-like computation. We feel that a theory without implementation tends to become unrealistic and "out of touch" with reality, while an implementation without theory runs the risk of being superficial and obsolete.
Systems that select an optimal or nearly optimal member from a specified search set are reviewed, with special emphasis on stochastic approaches such as simulated annealing, genetic algorithms, and other probabilistic heuristics. Because of local minima, selecting a global optimum may require time that increases exponentially with problem size. Stochastic search provides advantages in robustness, generality, and simplicity over other approaches and is more efficient than exhaustive deterministic search.
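As a concrete illustration of the stochastic-search idea, here is a minimal simulated-annealing sketch. The cost surface, cooling schedule, and constants are illustrative choices, not taken from the review; the key mechanism is that uphill moves are accepted with a temperature-dependent probability so the search can escape local minima.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.995, steps=5000, seed=0):
    """Minimize `cost` by stochastic search: accept uphill moves with
    probability exp(-delta/T) so the search can escape local minima."""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = cost(y)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if fy <= fx or rng.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling  # geometric cooling schedule
    return best, fbest

# A 1-D cost surface with many local minima; the global minimum is at x = 0.
f = lambda x: x * x + 10 * (1 - math.cos(2 * math.pi * x))
step = lambda x, rng: x + rng.uniform(-0.5, 0.5)
x_best, f_best = simulated_annealing(f, step, x0=4.0)
```

An exhaustive deterministic search over the same space would need a fine grid over every basin; the stochastic version needs only a cost function and a neighbor move, which is the generality the abstract refers to.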
Complex and chaotic systems are known to be essential in natural phenomena. In continuous dynamical systems, chaos and long, complex transients are as common as fixed points, periodic orbits, and limit cycles. In this paper, we provide an overview of complex and chaotic systems and their computation. We are interested both in understanding their behavior in general and in building future-generation computers from such systems.
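A minimal example of the range of behavior described above is the logistic map, a standard one-dimensional system (illustrative here, not a system singled out by the paper) that passes from a stable fixed point to chaos as its parameter grows:

```python
# Logistic map x_{n+1} = r*x*(1-x): fixed points, period doubling, chaos.
def orbit(r, x0=0.2, n=200, skip=100):
    x = x0
    for _ in range(skip):          # discard the initial transient
        x = r * x * (1 - x)
    out = []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(x)
    return out

fixed = orbit(2.5)    # converges to the fixed point 1 - 1/r = 0.6
chaos = orbit(3.9)    # aperiodic orbit ranging over much of (0, 1)
```

The same quadratic update rule produces either a single resting value or an aperiodic orbit, depending only on `r`, which is the coexistence of simple and complex regimes the abstract points to.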
A pulse-coupled neural network is shown to contain invariant spatial information in the phase structure of the output pulse trains. Two time scales are identified. On the fast time scale the linking produces dynamic, periodic, fringe-like traveling waves. The slow time scale is set by the pulse generator, and on that scale the image is segmented into multi-neuron time-synchronous groups. These groups, by the same linking mechanism, can form periodic pulse structures whose relative phases encode the location of the groups with respect to one another. The time signals form a unique, object-specific, and roughly invariant time signature for their corresponding input spatial image or distribution. The details of the model are discussed, giving the basic linking field model, extensions, generation of time series in the limit of very weak linking, invariances from the symmetries of the receptive fields, time scales, waves, and signatures. Multi-rule logical systems are shown to exist on single neurons. Adaptation is discussed. Hardware implementations, optical and electronic, are reviewed. The conjugate basic problem of transforming a time signal into a spatial distribution is discussed.
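The segmentation-by-synchrony mechanism can be sketched with a toy one-dimensional pulse-coupled array. The structure (modulatory linking, a decaying threshold that jumps after each pulse) follows the standard linking-field literature; the constants and the stimulus are illustrative, not the paper's values:

```python
# Minimal pulse-coupled (linking-field) neuron array, 1-D for brevity.
def pcnn_step(stim, theta, Y, beta=0.2, decay=0.7, v_theta=20.0):
    n = len(stim)
    newY, newTh = [0] * n, [0.0] * n
    for i in range(n):
        # Linking input: pulses from the two nearest neighbours (previous step).
        link = (Y[i - 1] if i > 0 else 0) + (Y[i + 1] if i < n - 1 else 0)
        U = stim[i] * (1.0 + beta * link)      # modulatory linking
        newY[i] = 1 if U > theta[i] else 0
        # Dynamic threshold: decays each step, jumps after a pulse.
        newTh[i] = decay * theta[i] + v_theta * newY[i]
    return newTh, newY

stim = [0.5, 0.5, 0.5, 0.1, 0.1]      # a bright segment and a dim segment
theta, Y = [1.0] * 5, [0] * 5
fire_times = {i: [] for i in range(5)}
for t in range(60):
    theta, Y = pcnn_step(stim, theta, Y)
    for i, y in enumerate(Y):
        if y:
            fire_times[i].append(t)
```

Running this, the three bright neurons pulse together and the two dim neurons pulse together later: each intensity region becomes a time-synchronous group, and the relative timing of the groups is the phase structure the abstract describes.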
Ordinarily, functional complexity in neural networks is held to stem from the interaction of large numbers of functionally simple neuron-like processing elements. This paper focuses on complexity at the single-neuron level, as elucidated by a nonlinear dynamical systems approach to the analysis of the integrate-and-fire model neuron. The resulting dynamics, described by an iterated phase-transition map (PTM), suggest that a wide range of complex firing modalities can be produced by a dendritic neuron when its dendrites are subjected to correlated arriving spike trains that give rise to a periodic activation potential of its excitable membrane. The dynamical approach leads to the bifurcating neuron concept and model, which combines functional complexity in its spiking behavior, approaching that of the biological neuron, with structural simplicity and power efficiency. The bifurcating model neuron is well suited for the modeling, simulation, and construction of a new generation of artificial neural networks in which synchronicity, bifurcation, and chaos can play a role in realizing higher-level functions. The theory and characterization of a photonic embodiment of the bifurcating neuron are discussed, and it is proposed that bifurcating neuron dynamics offer a plausible basis for the mechanism subserving transient correlations in local field potentials observed at widely separated cortical areas of cat and monkey.
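The paper's phase-transition map is not reproduced here. As an illustrative stand-in, the sine circle map is a canonical iterated phase map that exhibits the same phenomenology the abstract invokes: mode locking (synchronized spiking), quasiperiodicity, and bifurcation toward chaos as a coupling parameter is varied:

```python
import math

def circle_map(theta, omega, k):
    # Sine circle map: theta' = theta + omega - (k/2*pi) sin(2*pi*theta) (mod 1).
    # Used only as a stand-in for the paper's phase-transition map.
    return (theta + omega - (k / (2 * math.pi)) * math.sin(2 * math.pi * theta)) % 1.0

def attractor(omega, k, n=400, skip=400):
    th = 0.1
    for _ in range(skip):                 # discard the transient
        th = circle_map(th, omega, k)
    pts = set()
    for _ in range(n):
        th = circle_map(th, omega, k)
        pts.add(round(th, 6) % 1.0)       # fold 1.000000 back onto 0.0
    return pts

locked = attractor(0.5, 1.0)       # mode-locked: a period-2 phase pattern
quasi = attractor(0.61803, 0.5)    # quasiperiodic: phases fill the circle
```

A firing phase that visits only a few values corresponds to a locked, synchronous spiking mode; a phase that fills the circle corresponds to the richer modalities that make the bifurcating neuron functionally complex.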
This paper discusses digital electronic VLSI architectures for emulating neural networks. The major advantage of digital implementation is its flexibility, which, because of "Amdahl's Law," is more valuable than raw speed. As an example of a digital architecture, Adaptive Solutions' CNAPS architecture is discussed in detail. CNAPS consists of a single-instruction, multiple-data (SIMD), or "data parallel," array of simple DSP-like processor nodes. By using low-precision arithmetic, an optimized processor architecture, and simple broadcast communication, many processors can fit on a single silicon chip, thus allowing cost-effective, high-performance computation for image processing and pattern recognition applications.
The last half of the paper discusses mapping several algorithms to the CNAPS architecture. Algorithms discussed include back-propagation, Fourier transforms, JPEG image compression, and convolution.
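The data-parallel mapping style can be suggested with a toy example: each processor node owns one output position, kernel taps are broadcast one per step, and every node performs the same multiply-accumulate on its local data. This is illustrative of the SIMD broadcast idiom only, not of the actual CNAPS instruction set:

```python
# Sketch of a SIMD-style 1-D convolution: the outer loop plays the role
# of the broadcast instruction stream, the inner loop the lockstep nodes.
def simd_convolve(signal, kernel):
    n = len(signal) - len(kernel) + 1    # valid-mode output length
    acc = [0] * n                        # one accumulator per processor node
    for j, k in enumerate(kernel):       # broadcast tap j to all nodes
        for node in range(n):            # all nodes execute in lockstep
            acc[node] += k * signal[node + j]
    return acc

out = simd_convolve([1, 2, 3, 4, 5], [1, 0, -1])
# Each output is signal[i] - signal[i+2].
```

Because every node runs the identical instruction, the cost per step is one broadcast plus one multiply-accumulate per node, which is why convolution and back-propagation map so naturally onto this kind of array.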
The physics and the mathematics of computation are examined to provide a foundation and perspective for the investigation of the quantum mechanics of computation. Our purpose is to explore the fundamental limits and constraints imposed on computation by Nature through the laws of physics and the mathematics of computational complexity. Inasmuch as information storage and transmission are an integral part of computation, their physical bounds are considered. The computer is viewed both physically and mathematically as a dynamical system, and is depicted in terms of the basic Turing machine paradigm. Three fundamental classes of the Turing machine are defined: the deterministic, the stochastic, and the quantum Turing machine. Hamiltonian models and physical realizations of quantum computing are described. Quantum computers can perform some tasks which have no classical analogue, but they cannot compute functions that are non-computable by classical means. Some classically intractable problems can be solved with quantum computers.
The trade-off between the number of neurons that can be implemented with a single correlator and the shift invariance that each neuron has is investigated. A new type of correlator implemented with a planar hologram is described whose shift invariance can be controlled by setting the position of the hologram properly. The shift invariance and the capacity of correlators implemented with volume holograms are also investigated.
Genetic algorithms (GAs) are a class of programs that emulate the search processes of natural genetic evolution. This article reviews GA technology. The review begins by recasting search as a process of sampling points from an unknown space. With this “black-box” view of search, the simple mechanics of the GA are shown to have powerful effects. A review of basic GA mechanics and theory is followed by an overview of more advanced GA techniques. Final comments address future directions for GAs and for general evolving systems research.
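The "simple mechanics" the review refers to are easy to exhibit. Below is a minimal canonical GA (tournament selection, one-point crossover, bit-flip mutation) applied to the one-max problem, where fitness is simply the number of 1 bits; all parameters are illustrative:

```python
import random

def simple_ga(fitness, n_bits=20, pop_size=30, gens=60, p_mut=0.02, seed=1):
    """Canonical generational GA: tournament selection, one-point
    crossover, bit-flip mutation. Maximizes `fitness` over bitstrings."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        def tournament():
            a, b = rng.choice(pop), rng.choice(pop)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_bits)                # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (rng.random() < p_mut) for b in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = simple_ga(sum)    # "one-max": fitness = number of 1 bits
```

Note the "black-box" character: the algorithm touches the search space only through `fitness`, sampling and recombining points without any knowledge of the space's structure.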
Two broad approaches to computing are known: connectionist (which includes Turing machines but is demonstrably more powerful) and selectionist. Human computer engineers tend to prefer the connectionist approach, which includes neural networks. Nature uses both but may show an overall preference for selectionism. "Looking back into the history of biology, it appears that whenever a phenomenon resembles learning, an instructive theory was first proposed to account for the underlying mechanisms. In every case, this was later replaced by a selective theory." - N. K. Jerne, Nobelist in Immunology.
I discuss four technologies: fuzzy pattern recognition (numerical and syntactic), computational neural networks, and fuzzy control. I'm not sure what a critical review is, so let me warn you that this is not a survey of recent or important work in any of these fields. Infant technologies usually develop a lot of internal structure before they reach out towards applications in companion fields. The purpose of this article is to assess the maturation of these disciplines by giving two examples of cross-fertilization between control and pattern recognition. First, I illustrate the use of pattern recognition - fuzzy clustering and feedforward neural networks - to help develop and represent fuzzy controllers. And second, I give an example of waveform analysis by syntactic pattern recognition that uses fuzzy control logic. I conclude with some ideas about what should happen next - and what may happen next - in terms of hybridization between the four disciplines.
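As a pointer to the fuzzy-clustering ingredient mentioned above, here is a minimal fuzzy c-means sketch on one-dimensional data. The update equations are the standard fuzzy c-means formulation; the data, initialization, and constants are illustrative:

```python
def fuzzy_c_means(data, c=2, m=2.0, iters=50):
    """Fuzzy c-means: each point receives a graded membership in every
    cluster rather than a hard assignment (1-D sketch)."""
    # Initialize centers spread across the data range.
    lo, hi = min(data), max(data)
    centers = [lo + (hi - lo) * (j + 1) / (c + 1) for j in range(c)]
    for _ in range(iters):
        # Membership update: u_j proportional to 1/d_j^(2/(m-1)), normalized.
        u = []
        for x in data:
            d = [abs(x - v) + 1e-12 for v in centers]   # guard against d = 0
            row = [1.0 / sum((d[j] / d[l]) ** (2 / (m - 1)) for l in range(c))
                   for j in range(c)]
            u.append(row)
        # Center update: membership-weighted (fuzzy) mean.
        centers = [sum(u[k][j] ** m * data[k] for k in range(len(data))) /
                   sum(u[k][j] ** m for k in range(len(data)))
                   for j in range(c)]
    return centers, u

data = [0.0, 0.1, 0.2, 0.9, 1.0, 1.1]
centers, u = fuzzy_c_means(data)
```

The graded membership rows are exactly the kind of object a fuzzy controller can consume directly, which is why clustering serves so naturally in developing and representing controllers.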
The fuzzy "roots" of quantum mechanics are traced directly to the Hamilton-Jacobi equation of classical mechanics. It is shown that the Schrödinger equation can be derived from the Hamilton-Jacobi equation. A deep underlying unity of the two equations lies in the fact that a unique trajectory of a classical particle is "selected" out of a continuum of paths according to the principle of least action. We can say that a classical particle has a membership in every path of this set, which collapses to the single winning trajectory of the real motion.
At the same time, it can also be said that a quantum mechanical "particle" has different degrees of membership in a continuum of paths, all of which contribute to its dynamics.
This allows one to interpret the wave function as a parameter describing a deterministic entity endowed with a fuzzy character. As a logical consequence of this interpretation, the complementarity principle and the wave-particle duality concept can be abandoned in favor of a fuzzy deterministic microobject. This idea leads to the possibility of a quantum mechanical computer based on fuzzy logic.
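The relation between the two equations referred to in this abstract is the textbook polar (Madelung/WKB) substitution; a sketch for reference:

```latex
% Write the wave function in polar form, with A and S real:
\psi = A\, e^{iS/\hbar}
% Substitute into the Schr\"odinger equation
i\hbar\,\partial_t \psi = -\frac{\hbar^{2}}{2m}\nabla^{2}\psi + V\psi
% The real part gives the Hamilton--Jacobi equation plus a quantum term:
\partial_t S + \frac{(\nabla S)^{2}}{2m} + V
  - \frac{\hbar^{2}}{2m}\,\frac{\nabla^{2} A}{A} = 0
% As \hbar \to 0 the last ("quantum potential") term vanishes, recovering
% the classical Hamilton--Jacobi equation. The imaginary part yields the
% continuity equation for the density A^{2}:
\partial_t\bigl(A^{2}\bigr)
  + \nabla\!\cdot\!\Bigl(A^{2}\,\frac{\nabla S}{m}\Bigr) = 0
```

In this form the classical limit selects a single trajectory (the stationary-action path), while for finite ℏ the quantum potential couples all paths, consistent with the graded-membership picture above.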