Naturally-occurring sensory signal processing algorithms, such as those that inspired fuzzy-logic control, can be
integrated into non-naturally-occurring high-performance technology, such as programmable logic devices, to realize
novel bio-inspired designs. Research is underway into using field-programmable logic
devices (FPLDs) to implement fuzzy-logic sensory processing. A discussion is provided of the commonality
between bio-inspired fuzzy logic algorithms and coarse coding that is prevalent in naturally-occurring sensory systems.
Undergraduate design projects using fuzzy logic for obstacle-avoidance robots have been completed at our
institution and elsewhere; numerous other successful fuzzy-logic applications can be found as well. The long-term
goal is to leverage such biomimetic algorithms for future applications. This paper outlines a design approach for
implementing fuzzy-logic algorithms in reconfigurable computing devices. This paper is presented in an effort to
connect with others who may be interested in collaboration as well as to establish a starting point for future research.
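The style of fuzzy inference behind such obstacle-avoidance projects can be illustrated with a minimal sketch. The membership functions, rule base, and numeric values below are hypothetical illustrations, not taken from the projects described:

```python
def mu_near(d, lo=0.2, hi=1.0):
    """Membership of distance d (meters) in the fuzzy set 'near':
    fully near below lo, fully far above hi, linear shoulder between."""
    if d <= lo:
        return 1.0
    if d >= hi:
        return 0.0
    return (hi - d) / (hi - lo)

def steer_angle(d):
    """Sugeno-style defuzzification over two toy rules:
    IF near THEN turn 45 degrees; IF far THEN go straight (0 degrees)."""
    near = mu_near(d)
    far = 1.0 - near
    return (near * 45.0 + far * 0.0) / (near + far)
```

A hardware (FPLD) realization would replace these arithmetic steps with lookup tables and fixed-point datapaths, but the inference structure stays the same.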
Throughout the animal kingdom there are many existing sensory systems with capabilities desired by the human designers of new sensory and computational systems. There are a few basic design principles consistently observed among these natural mechano-, chemo-, and
photo-sensory systems, principles that have stood the test of time. Such principles include non-uniform sampling and processing, topological computing, contrast enhancement by localized signal inhibition, graded localized signal processing, spiked signal transmission, and coarse coding, which is the computational transformation of raw data using broadly overlapping filters. These principles are outlined here with references to natural biological sensory systems as well as successful biomimetic sensory systems exploiting these natural design concepts.
Current test and evaluation methods are not adequate for fully assessing the operational performance of imaging infrared sensors while they are installed on the weapon system platform. The use of infrared (IR) scene projection in test and evaluation will augment and redefine test methodologies currently being used to test and evaluate forward-looking infrared (FLIR) and imaging IR sensors. The Mobile Infrared Scene Projector (MIRSP) projects accurate, dynamic, and realistic IR imagery into the entrance aperture of the sensor, such that the sensor perceives and responds to the imagery as it would to the real-world scenario. The MIRSP domain of application includes development, analysis, integration, exploitation, training, and test and evaluation of ground- and aviation-based imaging IR sensors, subsystems, and systems. This applies to FLIR systems and imaging IR missile seekers/guidance sections, as well as non-imaging thermal sensors. The MIRSP Phase I 'pathfinder' has evolved from other scene projector systems, such as the Flight Motion Simulator Infrared Scene Projector (FIRSP) and the Dynamic Infrared Scene Projector (DIRSP). Both of these projector systems were designed for laboratory test and evaluation use rather than field test and evaluation use. This paper will detail the MIRSP design, including trade-off analyses performed at the system and subsystem levels. The MIRSP Phase II will provide the capability to test and evaluate various electro-optical sensors on the weapon platform. The MIRSP Phases I and II will advance current IR scene projector technologies by exploring areas such as mobility/transportability, packaging, sensors, and scene generation.
Testing advanced weapon systems, like the Comanche helicopter, has always presented technical challenges to the Test and Evaluation (T&E) community. Because these weapon systems are on the cutting edge of technology, it is the tester's responsibility to develop the tools and techniques to fully exercise a new weapon system's capability. As with most testing, state-of-the-art tools which provide test stimuli that match or exceed the fidelity of the systems under test must be developed. One such tool under development to test FLIR sensors is the Mobile Infrared Scene Projector (MIRSP). This paper will investigate current plans to support the T&E of the Comanche FLIR sensor during SIL testing. Planning the T&E usage of the MIRSP has involved identifying limitations, both in hardware and software, and determining how to minimize their effects or proposing solutions to correct them. The final aim of this effort is to maximize the operational effectiveness of the MIRSP in order to benefit T&E of all FLIR sensors in the future.
Coarse-coding is the transformation of raw data using a small number of broadly overlapping filters. These filters may exist in time, space, color, or other information domains. Inspired by models of natural vision processing, intensity and color information has been previously encoded and successfully decoded using coarse coding. The color and intensity of objects within test images were successfully retrieved after passing through only two coarse filters arranged in a checkerboard fashion. It was shown that a consequence of such a filter is a natural edge enhancement of the objects within the image. Coarse-coding is considered here in a signal processing frequency domain and in a sensory spectral filtering domain. Test signals include single frequency, multiple frequency, and signals with broad frequency content. Gaussian-based filters are used to discriminate between different signals of arbitrary frequency content. The effects of Gaussian shape changes and spectral contrasting techniques are demonstrated. Consequences of filter parameter selection are further discussed.
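The frequency-domain coarse coding described above can be sketched directly: project a signal's magnitude spectrum onto two broad Gaussian filters and compare the responses. The filter centers and width below are arbitrary illustration values, not the parameters studied in the paper:

```python
import numpy as np

def gaussian_filter(f, center, width):
    """Broadly overlapping Gaussian spectral filter."""
    return np.exp(-0.5 * ((f - center) / width) ** 2)

def coarse_code(signal, fs, centers=(10.0, 40.0), width=15.0):
    """Project the signal's magnitude spectrum onto two coarse Gaussian filters."""
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    return tuple(float(np.sum(spec * gaussian_filter(freqs, c, width)))
                 for c in centers)
```

Even though the two filters overlap heavily, the ratio of their responses still discriminates where the signal's energy sits in the spectrum.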
The theory and application of morphological associative memories and morphological neural networks in general are emerging areas of research in computer science. The concept of a morphological associative memory differs from a more conventional associative memory by the nonlinear functionality of the synaptic connection. By taking the maximum of sums instead of the sum of products, morphological network computation is inherently nonlinear. Hence, the morphological associative memory does not require any ad hoc methodology to interject a nonlinear state. In this paper, we introduce a very large scale integration (VLSI) analog circuit design that describes the nonlinear functionality of the synaptic connection. We specifically describe the fundamental circuit needed to implement a basic additive maximum associative memory, and describe noise conditions under which this memory will perform flawlessly. As a potential application, we propose the use of the analog circuit for real-time operation on or near a focal plane array sensor.
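The max-of-sums computation can be sketched in software before considering analog hardware. This sketch follows the standard Ritter-style morphological memory formulation; the array shapes and test patterns are chosen purely for illustration:

```python
import numpy as np

def morph_memory(X, Y):
    """Min-memory W: w_ij = min over patterns k of (y_i^k - x_j^k).
    X is (n, K) with K input patterns as columns; Y is (m, K)."""
    return (Y[:, None, :] - X[None, :, :]).min(axis=2)

def morph_recall(W, x):
    """Morphological (max-plus) product: y_i = max_j (w_ij + x_j),
    i.e. the maximum of sums rather than the sum of products."""
    return (W + x[None, :]).max(axis=1)
```

In the autoassociative case (Y = X), recall of any stored, uncorrupted pattern is exact; this is the nonlinear behavior the analog synaptic circuit is designed to realize.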
Inspired by models of natural vision processing, intensity and color information are encoded and successfully decoded using coarse coding. Primate photoreceptors are known to include one rod type and three cone types, each with a unique spectral absorption curve. Although the curves overlap significantly, vision systems are capable of incredible chromatic acuity and spatial luminous acuity. A proof of concept is demonstrated here using simulated absorption curves and an algorithm representing a cursory model of vision processing. The color and intensity of objects within test images are successfully retrieved after passing through only two coarse filters arranged in a checkerboard fashion. A consequence of such a filter is a natural edge enhancement of the objects within the image.
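A toy version of the two-filter checkerboard encode/decode can be written directly. The scalar "color" channel and the two filter gains below are illustrative assumptions standing in for the simulated absorption curves used in the paper:

```python
import numpy as np

GAIN_A, GAIN_B = 0.8, 0.3   # hypothetical color gains of the two coarse filters

def encode(intensity, color):
    """Sample a scene through two broadly overlapping filters laid out on a
    checkerboard; both pass intensity, but with different color gains."""
    h, w = intensity.shape
    mask = (np.add.outer(np.arange(h), np.arange(w)) % 2).astype(float)
    resp_a = intensity + GAIN_A * color
    resp_b = intensity + GAIN_B * color
    return mask * resp_a + (1.0 - mask) * resp_b, mask

def decode(mosaic, mask):
    """Estimate the missing filter response at each pixel from its four
    neighbours (opposite checkerboard parity), then invert the 2x2 system."""
    pad = np.pad(mosaic, 1, mode='edge')
    neigh = (pad[:-2, 1:-1] + pad[2:, 1:-1] +
             pad[1:-1, :-2] + pad[1:-1, 2:]) / 4.0
    resp_a = np.where(mask == 1.0, mosaic, neigh)
    resp_b = np.where(mask == 0.0, mosaic, neigh)
    color = (resp_a - resp_b) / (GAIN_A - GAIN_B)
    intensity = resp_a - GAIN_A * color
    return intensity, color
```

Interior pixels decode exactly for smooth scenes; border pixels use replicated padding and are only approximate. At object boundaries the neighbour averaging misestimates the missing response, which is one way to see the natural edge enhancement noted above.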
The wavelet transform dilates and translates a selected fundamental wavelet. Selective sampling of the continuous wavelet transform identifies discrete components used as a basis for signal projections. Similarly, some properties of early vision may be described in terms of dilations and translations of fundamental waveforms. Examples include the optical point spread function, spectral absorption curves of photoreceptors, receptive fields of photoreceptors and post-receptor cells, and eye movements. These vision features are described with respect to the dilation and translation of candidate waveforms. Spatial, temporal, and chromatic filtering in the vision pathways are also described with respect to similarities with wavelet subband analysis.
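The dilation-and-translation picture can be made concrete with a small numerical sketch, here using the Ricker ("Mexican hat") wavelet as a stand-in for a center-surround receptive-field profile. The wavelet choice, scales, and positions are illustrative assumptions, not the candidate waveforms analyzed in the paper:

```python
import numpy as np

def mexican_hat(t, scale):
    """Ricker wavelet: a center-surround (difference-of-Gaussian-like) profile."""
    x = t / scale
    return (1.0 - x**2) * np.exp(-x**2 / 2.0)

def cwt_sample(signal, scales, positions, dt=1.0):
    """Project the signal onto dilated (scale) and translated (position)
    copies of the fundamental wavelet -- a sampled continuous wavelet transform."""
    t = np.arange(len(signal)) * dt
    out = np.empty((len(scales), len(positions)))
    for i, s in enumerate(scales):
        for j, b in enumerate(positions):
            out[i, j] = np.sum(signal * mexican_hat(t - b, s)) * dt / np.sqrt(s)
    return out
```

The response is largest where the translated, dilated wavelet lines up with structure in the signal, which is the sense in which a bank of receptive fields "samples" the transform.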
Retinal processing is known to condense space, time, and color information into three basic channels known as the rod, magnocellular, and parvocellular channels. The magnocellular channel executes a spatial band-pass filter in the lower end of the spatial frequency spectrum, and the parvocellular channel executes a spatial band-pass filter in the higher end of the spectrum. In an analogous fashion, conventional wavelet analysis requires separate high-pass and low-pass filtering operations on data. Previous retinal designs have provided these filtering operations seen in natural processors. The rationale for such filters is presented along with concepts for implementing high-speed analog wavelet analyzers. These concepts are built on existing understanding of vision processing and previously demonstrated analog retinal design chips.
Biological vision systems of higher life forms naturally divide space, time, and color domains into a relatively few bandpass components. In the spatial domain, the division is primarily into a low-frequency bandpass channel and a high-frequency bandpass channel. Wavelet analysis also divides input into low and high band representations. Chips originally designed to exploit the filtering functionality of biological retinas can also be used to perform fast analog decomposition of imagery into subsequent vision wavelet components. These filtering concepts are presented in connection to previously developed retinal processors and compared to conventional wavelet filters. Although perfect reconstruction is not performed by biological systems, it is used here as a metric for measuring the level of information corruption inherent in biological filter models.
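A one-dimensional sketch of the two-channel split and the perfect-reconstruction metric, assuming a Gaussian low-pass for the low channel and the residual (a difference-of-Gaussian-like signal) for the high channel; the kernel width is an arbitrary illustration value:

```python
import numpy as np

def gaussian_kernel(sigma):
    """Normalized 1-D Gaussian kernel truncated at 3 sigma."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def split_channels(row, sigma=2.0):
    """Low channel: Gaussian blur (magno-like band);
    high channel: center-surround residual (parvo-like band)."""
    low = np.convolve(row, gaussian_kernel(sigma), mode='same')
    high = row - low
    return low, high
```

Because the high channel is defined as the residual, low + high reconstructs the input exactly; a biological filter model that distorts either channel can then be scored by how far its low + high sum drifts from the original.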
Two processors are considered for image quadrature mirror filtering (QMF). The neuromorphic infrared focal-plane sensor (NIFS) is an existing prototype analog processor offering high speed spatio-temporal Gaussian filtering, which could be used for the QMF low-pass function, and difference of Gaussian filtering, which could be used for the QMF high-pass function. Although not designed specifically for wavelet analysis, the biologically-inspired system accomplishes the most computationally intensive part of QMF processing. The Texas Instruments (TI) TMS320C80 Multimedia Video Processor (MVP) is a 32-bit RISC master processor with four advanced digital signal processors (DSPs) on a single chip. Algorithm partitioning, memory management and other issues are considered for optimal performance. This paper presents these considerations with simulated results leading to processor implementation of high-speed QMF analysis and synthesis.
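Haar filters form the simplest quadrature mirror pair and make the analysis/synthesis round trip easy to check. This sketch is a generic one-level QMF, not the NIFS's Gaussian/difference-of-Gaussian filters (which are not a perfect-reconstruction pair):

```python
import numpy as np

def qmf_analysis(x):
    """One-level Haar QMF analysis: decimated low-pass (sum) and
    high-pass (difference) channels. Expects even-length input."""
    x = np.asarray(x, dtype=float)
    lo = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    hi = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return lo, hi

def qmf_synthesis(lo, hi):
    """Invert the analysis step; the Haar pair reconstructs perfectly."""
    y = np.empty(2 * len(lo))
    y[0::2] = (lo + hi) / np.sqrt(2.0)
    y[1::2] = (lo - hi) / np.sqrt(2.0)
    return y
```

The decimated filtering in `qmf_analysis` is the computationally intensive step that the analog front end would absorb, leaving the DSPs to handle partitioning and reassembly.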
This paper discusses two applications of adaptive filters for image processing on parallel architectures. The first, based on the results of previously accomplished work, summarizes the analyses of various adaptive filters implemented for pixel-level image prediction. FIR filters, fixed and adaptive IIR filters, and various variable-step-size algorithms were compared with a focus on algorithm complexity versus the ability to predict future pixel values. A Gaussian smoothing operation with varying spatial and temporal constants was also applied for comparisons of random-noise reduction. The second application is a suggestion to use memory-adaptive IIR filters for detecting and tracking motion within an image. Objects within an image are made of edges, or segments, with varying degrees of motion. A previously published application describes FIR filters connecting pixels and using correlations to determine motion and direction. That implementation seems limited to detecting motion coinciding with the FIR filter operation rate and its associated harmonics. Upgrading the FIR structures to adaptive IIR structures can eliminate these limitations. These and other pixel-level adaptive filtering applications require data memory for filter parameters and some basic computational capability. Tradeoffs have to be made between chip real estate and these desired features. System tradeoffs will also have to be made as to where it makes the most sense to perform which level of processing. Although smart pixels may not be ready to implement adaptive filters, applications such as these should give the smart-pixel designer some long-range goals.
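Pixel-level prediction with an adaptive FIR filter can be sketched with the standard LMS update. The filter order and step size below are illustrative, and in a smart-pixel architecture each pixel would run its own copy of this loop on its intensity time series:

```python
import numpy as np

def lms_predict(series, order=3, mu=0.01):
    """One-step-ahead LMS-adaptive FIR prediction of a pixel's time series."""
    w = np.zeros(order)
    preds = np.zeros(len(series))
    for n in range(order, len(series)):
        x = series[n - order:n][::-1]   # most recent sample first
        preds[n] = w @ x                # predict before seeing series[n]
        e = series[n] - preds[n]        # prediction error
        w += mu * e * x                 # LMS weight update
    return preds, w
```

The per-pixel cost is only the weight vector (data memory) and a few multiply-accumulates per frame, which is exactly the chip-real-estate tradeoff discussed above.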
Digital technology has made dramatic strides in providing higher signal processing throughput in smaller packages. However, meeting processor throughput requirements for future imagers will be difficult using current or near-term available digital technology. As fabrication technology allows for more dense focal planes, typical image processing throughput requirements per frame increase with the number of pixels; for example, the number of pixel gain and offset computations quadruples when the focal plane size increases from a 128 X 128 to a 256 X 256 array. Several analog technologies are answering the driving throughput requirements for image processing applications; one such technology is the movement of charge packets through a piezoelectric GaAs channel, called acoustic charge transport (ACT). Analog solutions such as ACT offer orders-of-magnitude higher imager throughput than A/D conversion and digital processing approaches. This paper includes (1) an overview of the theory of ACT device operation, (2) currently available ACT technology and devices, and (3) computational benefits. Detailed proprietary architectures of current ACT image processor designs are not discussed here.