Biological vision systems can perform target selection, pattern recognition, and dynamic range adaptation at capability levels far beyond those of human-designed methods. This paper applies a two-stage Biologically-Inspired Vision (BIV) model for image pre-processing and infrared tone remapping, derived from the visual pipeline of the hoverfly. The first stage performs spatially invariant, pixel-wise intensity normalization to intelligently compress scene dynamic range and enhance local contrast using an adaptive gain control mechanism. The second stage applies adaptive spatio-temporal filtering to reduce redundancy within image sequences. Our experiments demonstrate the strengths of the model on four practical tasks. First, for large targets, the model acts as a sophisticated edge extractor: the examples show retrieval of the complete structure of a boat from sea clutter, increasing the global contrast factor by 165%. Second and third, segmentation of small and weak-signature targets is demonstrated: a filter is applied to track a 2x2-pixel dragonfly without interruption, and a small maritime vessel is extracted as it passes in front of a larger vessel of similar emissivity. Finally, the power of the BIV model to rapidly compress dynamic range and normalize sudden changes in scene luminance is validated by means of incandescent pyrotechnic pellets launched from an aerial platform.
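The abstract does not give the normalization equations, but the adaptive gain control it describes is commonly formulated as divisive (Naka-Rushton-style) normalization against a locally adapted state. A minimal sketch, in which the function names, window size, and the plain local-mean adaptation state are illustrative assumptions rather than the paper's actual model:

```python
def naka_rushton(intensity, adapt_state, eps=1e-9):
    """Divisive gain control (Naka-Rushton form): compress intensity
    relative to an adapted state. Output lies in [0, 1); eps avoids
    division by zero on fully dark pixels."""
    return intensity / (intensity + adapt_state + eps)

def compress_image(pixels, kernel=3):
    """Pixel-wise dynamic range compression of a 2-D list of luminance
    values, using the mean of a small neighbourhood (clipped at the
    image border) as the adaptation state."""
    h, w = len(pixels), len(pixels[0])
    r = kernel // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = [pixels[yy][xx]
                      for yy in range(max(0, y - r), min(h, y + r + 1))
                      for xx in range(max(0, x - r), min(w, x + r + 1))]
            out[y][x] = naka_rushton(pixels[y][x], sum(window) / len(window))
    return out
```

Each output pixel lies in [0, 1), so very bright regions are strongly compressed while pixels near their local adaptation level retain contrast, which is the qualitative behaviour the first BIV stage exploits.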
We have developed a numerical model of Small Target Motion Detector (STMD) neurons, inspired by electrophysiological experiments in the fly brain. These neurons respond selectively to small moving features within complex moving surrounds. Interestingly, these cells still respond robustly when the targets are embedded in the background, without relative motion cues. The model contains representations of neural elements along a proposed pathway to the target-detecting neuron, and the resultant processing enhances target discrimination in moving scenes. The model encodes high-dynamic-range luminance values from natural images (via adaptive photoreceptor encoding) and then shapes the transient signals required for target discrimination (via adaptive spatiotemporal high-pass filtering). Following this, a model of Rectifying Transient Cells implements nonlinear facilitation between rapidly adapting, independent-polarity contrast channels (an 'on' and an 'off' pathway), each with center-surround antagonism. The recombination of the channels increases discrimination of small targets, approximately the size of a single pixel, without the need for relative motion cues. This method of feature discrimination contrasts with traditional target and background motion-field computations. We improve the target-detecting output with inhibition from correlation-type motion detectors, using a form of antagonism between our feature correlator and the more typical motion correlator. We also observe that the optimal detection threshold varies with, and is highly correlated to, observer ego-motion. We therefore present an elaborated target detection model that allows implementation of a static optimal threshold, by scaling the target discrimination mechanism with a model-derived velocity estimate of ego-motion.
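As a rough illustration of the ON/OFF rectification and nonlinear facilitation described above (the real model's adaptive filters, center-surround antagonism, and time constants are omitted), a single-pixel sketch might look like the following; the function names and the one-frame delay are assumptions, not the published parameters:

```python
def rectify(delta):
    """Half-wave rectify a temporal contrast change into independent
    ON (brightening) and OFF (darkening) channels."""
    return max(delta, 0.0), max(-delta, 0.0)

def stmd_trace(luminance):
    """Per-pixel ESTMD-style sketch: rectify the frame-to-frame
    luminance change into ON/OFF channels, then multiply the current
    ON signal by the OFF signal delayed by one frame. A small dark
    target passing the pixel produces OFF (leading edge) then ON
    (trailing edge), so the product peaks on target passage."""
    responses = []
    prev_off = 0.0
    prev_lum = luminance[0]
    for lum in luminance[1:]:
        on, off = rectify(lum - prev_lum)
        responses.append(on * prev_off)   # nonlinear facilitation
        prev_off = off
        prev_lum = lum
    return responses
```

The multiplication is what makes the response size-selective without relative motion cues: wide features keep the OFF and ON edges apart in time, so the delayed-OFF and current-ON signals never coincide.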
Traditional approaches to calculating self-motion from visual information in artificial devices have generally relied on
object identification and/or correlation of image sections between successive frames. Such calculations are
computationally expensive, and real-time digital implementation requires powerful processors. In contrast, flies arrive at
essentially the same outcome, the estimation of self-motion, in a much smaller package using vastly less power. Despite
the potential advantages and a few notable successes, few neuromorphic analog VLSI devices based on biological vision
have been employed in practical applications to date. This paper describes a hardware implementation in aVLSI of our
recently developed adaptive model for motion detection. The chip integrates motion over a linear array of local motion
processors to give a single voltage output. Although the device lacks on-chip photodetectors, it includes bias circuits to
use currents from external photodiodes, and we have integrated it with a ring-array of 40 photodiodes to form a visual
rotation sensor. The ring configuration reduces pattern noise and combined with the pixel-wise adaptive characteristic of
the underlying circuitry, permits a robust output that is proportional to image rotational velocity over a large range of
speeds, and is largely independent of either mean luminance or the spatial structure of the image viewed. In principle,
such devices could be used as an element of a velocity-based servo to replace or augment inertial guidance systems in
applications such as micro unmanned aerial vehicles (mUAVs).
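The abstract does not reproduce the chip's correlator circuitry, but a ring of local Hassenstein-Reichardt-style motion processors summed to a single output can be sketched in software as follows; the unit (one-frame) delay and the nearest-neighbour wrap-around pairing are illustrative assumptions:

```python
def hassenstein_reichardt(a_delayed, a, b_delayed, b):
    """Opponent Hassenstein-Reichardt correlator: correlate each
    channel's delayed signal with its neighbour's current signal,
    in both directions, and subtract."""
    return a_delayed * b - b_delayed * a

def ring_rotation(frames):
    """Integrate local motion outputs around a closed ring of
    photodiode samples. Each frame is a list of photodiode values;
    the ring wraps, so the last element neighbours the first."""
    prev = frames[0]
    total = 0.0
    n = len(prev)
    for curr in frames[1:]:
        for i in range(n):
            j = (i + 1) % n          # wrap-around neighbour
            total += hassenstein_reichardt(prev[i], curr[i],
                                           prev[j], curr[j])
        prev = curr
    return total
```

Opposite rotation directions produce outputs of opposite sign, which is the property a velocity-based servo would exploit; on the chip the summation is done in analog over the linear array, yielding a single voltage.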
The range of luminance levels in the natural world varies on the order of 10<sup>8</sup>, significantly larger than the 8 bits
employed by most digital imaging systems. To overcome their limited dynamic range, traditional systems rely on the fact
that the dynamic range of a scene is typically much lower, and by adjusting a global gain factor (shutter speed) it is
possible to acquire usable images. However, in many situations 8 bits of dynamic range are insufficient, and
potentially useful information lying outside the dynamic range of the device is lost. Traditional approaches to
solving this have involved using nonlinear gamma tables to compress the range, hence reducing contrast in the digitized
scene, or using 16-bit imaging devices, which use more bandwidth and are incompatible with most recording media and
software post-processing techniques. This paper describes an algorithm, based on biological vision, which overcomes
many of these problems. The algorithm reduces the redundancy of visual information and compresses the data observed
in the real world into a significantly lower bandwidth signal, better suited for traditional 8-bit image processing and
display. However, most importantly, no potentially useful information is lost and the contrast of the scene is enhanced in
areas of high informational content (where there are changes) and reduced in areas containing low information content
(where there are no changes). This makes higher-order tasks, such as object identification and tracking, easier, as
redundant information has already been removed.
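The abstract does not specify the compression function. As a point of comparison, even a plain logarithmic tone map (without the adaptive, change-sensitive contrast enhancement the algorithm provides) can fold a ~10<sup>8</sup> luminance range into 8 bits; a hedged sketch, with the range limits as assumed parameters:

```python
import math

def compress_to_8bit(lum, lum_min=1.0, lum_max=1e8):
    """Logarithmic tone compression: map luminance spanning roughly
    eight decades onto the 0-255 range of an 8-bit pipeline.
    A simple stand-in for the biological gain control, shown only
    to illustrate the bandwidth reduction being described."""
    log_span = math.log10(lum_max) - math.log10(lum_min)
    v = (math.log10(max(lum, lum_min)) - math.log10(lum_min)) / log_span
    return int(255 * min(v, 1.0) + 0.5)   # round to nearest code value
```

A fixed logarithm like this treats every pixel identically; the bio-inspired algorithm instead spends its 8 bits preferentially on regions of high informational content, which is the key difference the abstract emphasizes.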
Insects, with their remarkable visual systems, are able to perform exceptional navigational feats. In order to
understand how they perform motion detection and velocity estimation, much work has been done in the past
40 years and many models of motion detection have been proposed. One of the earliest and most prominent
models is the Reichardt correlator model. We have elaborated the Reichardt correlator model to include
additional non-linearities that mimic known properties of the insect motion pathway, including logarithmic
encoding of luminance and saturation at various stages of processing. In this paper, we compare the response
of our elaborated model with recordings from fly HS neurons to naturalistic image panoramas. Such responses
are dominated by noise which is largely non-random. These deviations in the correlator response are likely due to
the structure of the visual scene, and we term them <i>pattern noise</i>. Pattern noise is investigated by implementing
saturation at different stages in our model, and each of these model variants is compared with the physiological data
from the fly using a cross-covariance technique.
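A minimal sketch of the elaborated correlator's three named ingredients (logarithmic luminance encoding, opponent delay-and-correlate, and saturation) is given below; the tanh saturation, its placement at the output, and the unit delay are illustrative choices, since the paper studies saturation at several different stages:

```python
import math

def saturate(x, k=1.0):
    """Soft saturation nonlinearity (tanh-like); one candidate for the
    saturating stages in the insect motion pathway."""
    return math.tanh(x / k)

def elaborated_emd(lum_a_prev, lum_a, lum_b_prev, lum_b):
    """Elaborated Reichardt correlator sketch for one pair of adjacent
    photoreceptors a, b: log-encode luminance, correlate each delayed
    signal with the neighbour's current signal in both directions,
    subtract, then saturate the opponent output."""
    a_d, a = math.log(lum_a_prev), math.log(lum_a)
    b_d, b = math.log(lum_b_prev), math.log(lum_b)
    return saturate(a_d * b - b_d * a)
```

Because each stage is nonlinear, the response to a naturalistic panorama depends on where saturation is inserted, which is exactly why the model variants produce distinguishable pattern noise signatures against the HS-neuron recordings.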
This paper describes the implementation of a robust adaptive photodetector circuit that mimics the characteristics of insect photoreceptors. The circuit is an elaborated version of the mathematical model initially developed by van Hateren and Snippe, and consists of a linear photodetector, two divisive feedback loops, and a static nonlinearity stage. The photoreceptor circuit was rigorously tested under both steady-state and dynamic (natural scenes) conditions, and the circuit parameters were optimized such that the output was highly correlated with results obtained from fly photoreceptors observing an identical stimulus. The results show that this adaptive nonlinear photoreceptor circuit is well suited to mimicking the biological photoreceptors found in insects.
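The circuit equations are not reproduced in this abstract, but the van Hateren and Snippe structure it names (divisive feedback loops followed by a static nonlinearity) can be sketched in discrete time as follows; the time constants and the Naka-Rushton output constant are illustrative, not the fitted circuit values:

```python
def photoreceptor(luminance, tau1=10.0, tau2=100.0):
    """Discrete-time sketch of a van Hateren & Snippe style
    photoreceptor: the input is divided by two low-pass-filtered
    copies of the output (the divisive feedback loops), then passed
    through a static Naka-Rushton nonlinearity. tau1, tau2 are
    adaptation time constants in samples."""
    a1, a2 = 1.0 / tau1, 1.0 / tau2
    s1 = s2 = 1.0                    # adaptation states (start at unity gain)
    out = []
    for lum in luminance:
        y = lum / (s1 * s2)          # divisive feedback
        s1 += a1 * (y - s1)          # fast low-pass of the output
        s2 += a2 * (y - s2)          # slow low-pass of the output
        out.append(y / (y + 1.0))    # static Naka-Rushton nonlinearity
    return out
```

Under a luminance step, the output spikes and then adapts back down as the feedback states catch up, the same transient-then-adapt signature measured in fly photoreceptors.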