We have developed a numerical model of Small Target Motion Detector neurons, inspired by electrophysiological experiments in the fly brain. These neurons respond selectively to small moving features within complex moving surrounds. Interestingly, these cells still respond robustly when the targets are embedded in the background, without relative motion cues. The model contains representations of neural elements along a proposed pathway to the target-detecting neuron, and the resultant processing enhances target discrimination in moving scenes. The model encodes high-dynamic-range luminance values from natural images (via adaptive photoreceptor encoding) and then shapes the transient signals required for target discrimination (via adaptive spatiotemporal high-pass filtering). Following this, a model for Rectifying Transient Cells implements a nonlinear facilitation between rapidly adapting, independent-polarity contrast channels (an 'on' and an 'off' pathway), each with center-surround antagonism. The recombination of the channels results in increased discrimination of small targets, approximately the size of a single pixel, without the need for relative motion cues. This method of feature discrimination contrasts with traditional target and background motion-field computations. We improve the target-detecting output with inhibition from correlation-type motion detectors, using a form of antagonism between our feature correlator and the more typical motion correlator. We also observe that the optimal detection threshold changes with, and is highly correlated with, the observer's ego-motion. We therefore present an elaborated target detection model that permits a static optimal threshold, by scaling the target discrimination mechanism with a model-derived velocity estimate of ego-motion.
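As a rough single-pixel illustration of this rectifying-transient stage, the sketch below shows how independent 'on' and 'off' transient channels can be recombined so that the output peaks when a small dark target passes. This is a minimal sketch with assumed parameters: the filter constants, the choice of a first-order low-pass 'delay', and the exact ON/OFF correlation scheme are illustrative assumptions, not the published model.

```python
import numpy as np

def estmd_pixel(luminance, tau=5.0, dt=1.0):
    """Single-pixel sketch: temporal high-pass, half-wave rectified
    ON/OFF channels, then ON correlated with a delayed OFF channel."""
    a = dt / (tau + dt)
    lp = np.empty_like(luminance)
    lp[0] = luminance[0]
    for t in range(1, len(luminance)):          # first-order low-pass
        lp[t] = lp[t - 1] + a * (luminance[t] - lp[t - 1])
    hp = luminance - lp                          # temporal high-pass
    on = np.maximum(hp, 0.0)                     # brightening transients
    off = np.maximum(-hp, 0.0)                   # darkening transients
    off_d = np.empty_like(off)                   # low-pass = 'delayed' OFF
    off_d[0] = off[0]
    for t in range(1, len(off)):
        off_d[t] = off_d[t - 1] + a * (off[t] - off_d[t - 1])
    return on * off_d                            # nonlinear facilitation

lum = np.ones(100)
lum[40:45] = 0.2          # a small dark target briefly occludes the pixel
resp = estmd_pixel(lum)   # response peaks at the target's trailing edge
```

The dark target produces an OFF transient at its leading edge and an ON transient at its trailing edge; multiplying the ON channel by the delayed OFF channel yields a response only for this OFF-then-ON signature, which is characteristic of a small moving feature.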
Traditional approaches to calculating self-motion from visual information in artificial devices have generally relied on
object identification and/or correlation of image sections between successive frames. Such calculations are
computationally expensive and real-time digital implementation requires powerful processors. In contrast, flies arrive at
essentially the same outcome, the estimation of self-motion, in a much smaller package using vastly less power. Despite
the potential advantages and a few notable successes, few neuromorphic analog VLSI devices based on biological vision
have been employed in practical applications to date. This paper describes a hardware implementation in aVLSI of our
recently developed adaptive model for motion detection. The chip integrates motion over a linear array of local motion
processors to give a single voltage output. Although the device lacks on-chip photodetectors, it includes bias circuits to
use currents from external photodiodes, and we have integrated it with a ring-array of 40 photodiodes to form a visual
rotation sensor. The ring configuration reduces pattern noise and, combined with the pixel-wise adaptive characteristic of
the underlying circuitry, permits a robust output that is proportional to image rotational velocity over a large range of
speeds, and is largely independent of either mean luminance or the spatial structure of the image viewed. In principle,
such devices could be used as an element of a velocity-based servo to replace or augment inertial guidance systems in
applications such as mUAVs.
The range of luminance levels in the natural world spans on the order of 10^8, significantly larger than the 8 bits
employed by most digital imaging systems. To overcome their limited dynamic range, traditional systems rely on the fact
that the dynamic range of a scene is typically much lower, and by adjusting a global gain factor (shutter speed) it is
possible to acquire usable images. However, in many situations 8 bits of dynamic range are insufficient, meaning
potentially useful information, lying outside of the dynamic range of the device, is lost. Traditional approaches to
solving this have involved using nonlinear gamma tables to compress the range, hence reducing contrast in the digitized
scene, or using 16-bit imaging devices, which use more bandwidth and are incompatible with most recording media and
software post-processing techniques. This paper describes an algorithm, based on biological vision, which overcomes
many of these problems. The algorithm reduces the redundancy of visual information and compresses the data observed
in the real world into a significantly lower bandwidth signal, better suited for traditional 8-bit image processing and
display. However, most importantly, no potentially useful information is lost and the contrast of the scene is enhanced in
areas of high informational content (where there are changes) and reduced in areas containing low information content
(where there are no changes). This makes higher-order tasks, such as object identification and tracking, easier, as
redundant information has already been removed.
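A minimal sketch of this style of compression (illustrative only: a 3x3 local mean and a Naka-Rushton saturation stand in for the full adaptive photoreceptor-based algorithm described above) maps an eight-decade luminance range into an 8-bit signal:

```python
import numpy as np

def compress_hdr(img, eps=1e-6):
    """Divide each pixel by a local mean (adaptation), then apply a
    saturating Naka-Rushton nonlinearity and quantise to 8 bits."""
    h, w = img.shape
    pad = np.pad(img, 1, mode='edge')
    local = sum(pad[i:i + h, j:j + w]
                for i in range(3) for j in range(3)) / 9.0
    adapted = img / (local + eps)        # local relative-contrast signal
    out = adapted / (adapted + 1.0)      # compressive saturation
    return np.round(out * 255).astype(np.uint8)

# Synthetic 'scene' spanning eight orders of magnitude of luminance
hdr = np.array([[1e-2, 1e-2, 1e6],
                [1e-2, 1e2,  1e6],
                [1e-2, 1e-2, 1e6]])
ldr = compress_hdr(hdr)                  # fits a standard 8-bit pipeline
```

Uniform regions map toward mid-grey regardless of their absolute luminance, while local deviations from the neighbourhood mean (the informative changes) consume the available contrast range.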
Insects, with their remarkable visual systems, are able to perform exceptional navigational feats. To understand
how they perform motion detection and velocity estimation, much work has been done over the past
40 years and many models of motion detection have been proposed. One of the earliest and most prominent
models is the Reichardt correlator model. We have elaborated the Reichardt correlator model to include
additional non-linearities that mimic known properties of the insect motion pathway, including logarithmic
encoding of luminance and saturation at various stages of processing. In this paper, we compare the response
of our elaborated model with recordings from fly HS neurons to naturalistic image panoramas. Such responses
are dominated by noise which is largely non-random. Deviations in the correlator response are likely due to
the structure of the visual scene, which we term 'pattern noise'. We investigate pattern noise by implementing
saturation at different stages in our model and compare each of these variants with the physiological data
from the fly using a cross-covariance technique.
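The elaborated correlator can be sketched as follows. This is a simplified software sketch: the first-order low-pass 'delay', the tanh saturation and all parameter values are illustrative assumptions, not the fitted model.

```python
import numpy as np

def reichardt(left, right, tau=5.0, dt=1.0, sat=1.0):
    """Elaborated Reichardt correlator sketch: log luminance encoding,
    low-pass 'delay' arms, opponent multiply-and-subtract, saturation."""
    l, r = np.log(left), np.log(right)    # logarithmic encoding
    a = dt / (tau + dt)
    dl, dr = np.empty_like(l), np.empty_like(r)
    dl[0], dr[0] = l[0], r[0]
    for t in range(1, len(l)):            # first-order low-pass 'delays'
        dl[t] = dl[t - 1] + a * (l[t] - dl[t - 1])
        dr[t] = dr[t - 1] + a * (r[t] - dr[t - 1])
    raw = dl * r - dr * l                 # mirror-symmetric opponency
    return sat * np.tanh(raw / sat)       # output saturation

t = np.arange(200)
grating = 1.5 + np.sin(2 * np.pi * t / 40)        # luminance, always > 0
pref = reichardt(grating, np.roll(grating, 5))    # motion left-to-right
null = reichardt(grating, np.roll(grating, -5))   # motion right-to-left
```

The time-averaged output is positive for motion in the preferred direction and negative in the null direction, which is the opponent signature HS-type neurons inherit from the underlying correlators.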
Insects have very efficient vision algorithms that allow them to perform complex manoeuvres in real time while using very limited processing power. In this paper we study some of the properties of these algorithms with the aim of implementing them in microchip devices. To achieve this we simulate insect vision using our software, which utilises the Horridge Template Model, to detect the angular velocity of a moving object. The motion is simulated using a number of rotating images showing both artificial constructs and real-life scenes, captured with a CMOS camera. We investigate the effects of the texel density, contrast, luminance and chrominance properties of the moving images. Pre- and post-template filtering and different threshold settings are used to improve the accuracy of the estimated angular velocity. We then analyse and compare the results obtained, implement an efficient velocity estimation algorithm that produces reliable results, and finally consider the development of a time-to-impact estimation algorithm.
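A much-simplified sketch of template-based velocity estimation tracks the spatial pattern of quantised temporal-change states between frames. The ternary quantisation threshold and the shift-matching step are illustrative assumptions, not the exact Template Model implementation.

```python
import numpy as np

def pixel_states(prev, curr, thresh=0.05):
    """Quantise each receptor's temporal change to -1 / 0 / +1."""
    d = curr - prev
    return np.where(d > thresh, 1, np.where(d < -thresh, -1, 0))

def template_velocity(frames, max_shift=5):
    """Estimate the image shift per frame by finding the displacement
    that best aligns successive state ('template') patterns."""
    est = []
    for k in range(2, len(frames)):
        s_prev = pixel_states(frames[k - 2], frames[k - 1])
        s_curr = pixel_states(frames[k - 1], frames[k])
        scores = [np.sum(np.roll(s_prev, s) == s_curr)
                  for s in range(-max_shift, max_shift + 1)]
        est.append(int(np.argmax(scores)) - max_shift)
    return est

rng = np.random.default_rng(0)
base = rng.random(64)                              # random 1-D texture
frames = [np.roll(base, 2 * k) for k in range(6)]  # drift: 2 pixels/frame
```

Because only ternary state patterns are matched, the tracking step needs no multiplications, which is part of what makes the template approach attractive for simple hardware.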
The insect visual system, with its simplicity and efficiency, has gained widespread attention, and many biologically inspired models are being used for motion detection and velocity estimation tasks. One of the earliest and most efficient of these is the Reichardt correlator model. In this paper, we elaborate the basic Reichardt correlator to include spatial and temporal pre-filtering and additional non-linearities believed to be present in the fly visual system, and use it to develop a simple yaw sensor. Using just 16 elaborated EMDs, this sensor can detect rotational motion at angular velocities up to several thousand degrees per second. Our modelling suggests that VLSI implementations of such simple detectors could have varied applications for flight control.
This paper describes the implementation of a robust adaptive photodetector circuit that mimics the characteristics of insect photoreceptors. The implementation of the photodetector circuit is an elaborated version of the mathematical model initially developed by van Hateren and Snippe. It consists of a linear photodetector, two divisive feedback loops and a static non-linearity stage. The photoreceptor circuit was rigorously tested under both steady-state and dynamic (natural scenes) conditions and the circuit parameters optimized such that the output was highly correlated to results obtained from fly photoreceptors observing an identical stimulus. The results show that this adaptive non-linear photoreceptor circuit is ideally suited to mimic the biological photoreceptors found in insects.
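The cascade can be sketched in discrete time as follows. The time constants and the simple forward-Euler updates are illustrative assumptions, not the fitted circuit parameters.

```python
import numpy as np

def photoreceptor(stim, tau1=3.0, tau2=30.0, dt=1.0):
    """Sketch of the cascade: linear stage -> two divisive feedback
    loops -> static saturating nonlinearity."""
    a1, a2 = dt / (tau1 + dt), dt / (tau2 + dt)
    g1 = g2 = 0.0                       # adaptation states of the loops
    out = np.empty_like(stim)
    for t in range(len(stim)):
        y1 = stim[t] / (1.0 + g1)       # fast divisive feedback loop
        g1 += a1 * (y1 - g1)            # low-pass of the loop's output
        y2 = y1 / (1.0 + g2)            # slow divisive feedback loop
        g2 += a2 * (y2 - g2)
        out[t] = y2 / (y2 + 1.0)        # static nonlinearity
    return out

# 100-fold luminance step: transient peak, then adaptation
step = np.concatenate([np.ones(150), np.full(150, 100.0)])
resp = photoreceptor(step)
```

The divisive loops give the characteristic photoreceptor signature: a large transient at the step, followed by decay to a steady level that grows far more slowly than the stimulus (here a 100-fold input step changes the adapted output by only a factor of about two to three).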
The visual pathway that leads from the retina to the tangential cells in the third optical ganglion of the fly is a sophisticated system for the detection of visual motion. The tangential cells, whose responses are thought to characterize the state of egomotion of the animal, show a remarkable ability to encode velocity information about optic flow patterns to which they are sensitive, independent of the structure and contrast of viewed scenery. We describe a simulation study based on a model that accounts for key physiological features observed in the biological system, which contains nonlinear features that we expect to contribute to this capability. One of these features is motion adaptation, a phenomenon on which recent research has shed new light. We conclude that our models significantly reduce dependence of response on variable natural scenery, although they still do not perform as well in this respect as the biological neurons. This biological system has inspired an implementation of visual motion processing in analog VLSI technology. The neuromorphic circuits are intended for eventual on- or near-focal plane integration with photosensing. We describe the design approach and present results from preliminary versions of these circuits.
Insects perform highly complicated navigational tasks even though their visual system is relatively simple. Work in this area aims to study the insect visual system and to incorporate its algorithms in electronic circuits, producing low-power, computationally simple, highly efficient, robust devices capable of accurate motion detection and velocity estimation. The Reichardt correlator model, developed by Hassenstein and Reichardt in 1956, is one of the earliest and most prominent biologically inspired models of motion detection. In an attempt to obtain accurate estimates of yaw velocity using an elaborated Reichardt correlator, we have investigated the effect of pattern noise (deviation of the correlator output resulting from the structure of the visual scene) on the correlator response. We have tested different sampling methods and found that a circularly sampled array of elementary motion detectors (EMDs) reduces pattern noise more effectively than an array of rectangularly or randomly selected EMDs for measuring rotational motion.
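The geometric intuition behind the circular array can be sketched directly (geometry and counts are illustrative): under pure yaw about the array's centre, every point on a ring moves at the same image speed, whereas points on a rectangular grid do not, so ring-sampled EMDs all operate at the same local velocity:

```python
import numpy as np

def local_speeds(points, center, omega):
    """Image-plane speed of each sample point under rotation at rate
    omega about `center` (speed = omega * distance from the axis)."""
    return omega * np.linalg.norm(points - center, axis=1)

center = np.array([0.0, 0.0])
ang = np.linspace(0, 2 * np.pi, 16, endpoint=False)
ring = 10.0 * np.stack([np.cos(ang), np.sin(ang)], axis=1)  # 16 on a ring

gx, gy = np.meshgrid(np.linspace(-10, 10, 4), np.linspace(-10, 10, 4))
grid = np.stack([gx.ravel(), gy.ravel()], axis=1)           # 4x4 grid

v_ring = local_speeds(ring, center, omega=1.0)  # identical speeds
v_grid = local_speeds(grid, center, omega=1.0)  # speeds vary with radius
```

Because each ring EMD sees the same angular displacement per frame, differences between EMD outputs are driven only by local scene structure, which averages out around the ring rather than being confounded with sampling geometry.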
An adaptive non-linear photodetector circuit is implemented using discrete electronic components to reproduce the response of blowfly photoreceptor cells. The photodetector circuit consists of a cascade of a linear photodetector, two divisive feedback loops and a static non-linearity stage. The circuit is rigorously evaluated using an ultra-bright light-emitting diode. A detailed comparison is made between the photodetector circuit and neurobiological data from blowfly photoreceptor cells to fine-tune the parameters of the circuit.
Motion detection and velocity estimation systems based on the study of insects try to emulate the extraordinary visual system of insects, with the aim of producing low-power, computationally simple, highly efficient and robust devices. The Reichardt correlator model is one of the earliest and most prominent models of motion detection based on insect vision. In this paper we extend the Reichardt correlator model to include an additional non-linearity observed in the fly visual system, study its effect on the contrast dependence of the response, and examine its influence on pattern noise. Experiments are carried out by adding this compressive non-linearity at different positions in the model, as postulated by previous works, and the physiological data are compared with the modelling results.
Insects have a very efficient visual system that helps them to
perform extraordinarily complicated navigational acts and
precisely controlled aerobatic flight. Physiological evidence
suggests that flight control is guided by a small system of
'tangential' neurons tuned to very specific types of complex
motion by the way that they collate information from local motion
detectors. One class of tangential neurons, the 'horizontal
system' (HS) neurons, respond with opponent graded responses to
yaw stimuli. Using the results of physiological experiments, we
have developed a model, based on an array of Reichardt correlators, for the receptive field of HS neurons that view optical flow along the equator. Our model incorporates additional non-linearities that mimic known properties of the insect motion pathway, including logarithmic encoding of luminance, saturation and motion adaptation (adaptive gain-control). In this paper, we compare the response of our elaborated model with fly HS neuron responses to naturalistic image panoramas. Such responses are dominated by noise which is largely non-random. Deviations in the correlator response are likely due to the structure of the visual scene, which we term "Pattern noise". To investigate the influence of anisotropic features in producing pattern noise, we presented a panoramic image at various initial positions, and versions of the same image modified to disrupt vertical contours. We conclude that the response of the fly neurons shows evidence of local saturation at key stages in the motion pathway. This saturation reduces the effect of pattern noise and improves the coding of velocity. Our model provides an excellent basis for the development of biomimetic yaw sensors for robotic applications.
Flying insects are capable of performing complex and extremely difficult navigational tasks at high speeds with amazing ability. The neural computations underlying these complicated maneuvers and the motor activity of the insects have been extensively investigated in the last few decades [1-5]. One of the most important discoveries was that the motion detectors involved in the control of the optomotor responses are of the correlation type [6]. In order to improve the velocity estimation by Reichardt correlators, many scientists have proposed different kinds of elaborations to the basic Reichardt correlator model.
In this paper, we expand Dror's elaborated Reichardt model [7] to include feedback adaptation and saturation, and conduct a comparative study of the effect of each elaboration on the performance of the model. The relative error in each case is also studied.
Insects are blessed with a very efficient yet simple visual system that enables them to navigate with great ease and accuracy. Although much work has been done in the field of insect vision, there is still no clear understanding of how velocity is determined in biological vision systems. The dominant model for insect motion detection, first proposed by Hassenstein and Reichardt in 1956, has gained widespread acceptance in the invertebrate vision community. The template model, proposed later by Horridge in 1990, permits simple tracking techniques and lends itself easily to both hardware and software implementation. Analysis and simulation by Dror suggest that adding system components to a basic Reichardt correlator to perform pre-filtering, response compression, integration and adaptation can make it less sensitive to contrast and spatial structure, thereby providing a more robust estimate of local image velocity. Data obtained from intracellular recordings of the steady-state responses of wide-field neurons in the hoverfly Volucella show that the shape of the response curves agrees closely with the theoretical predictions made by Dror. To compare this with the template model, an experiment was performed to obtain the velocity response curves of the template model using the same image statistics. The results lead us to believe that the fly motion detector emulates a modified Reichardt correlator.
The study of insect vision is believed to provide a key solution to many different aspects of motion detection and velocity estimation. The main reason for this is that motion detection in the fly is extremely fast, with computations requiring only a few milliseconds. The insect visual system therefore serves as the basis for many models of motion detection, the earliest and most prominent being the Reichardt correlator model. However, in the absence of additional system components, the response of a simple Reichardt correlator is dependent on contrast and spatial frequency. Dror has demonstrated that the addition of spatial and temporal filtering, saturation, integration and adaptation in a correlator-based system can make it act as a reliable velocity estimator.
In this paper, we further investigate and expand his model to improve correlator performance. Our recent neurobiological experiments suggest that adaptive mechanisms decrease the dependence of the EMD (elementary motion detector) on pattern contrast and improve reliability. We therefore model an adaptive feedback mechanism to normalise the contrast of the input signals.
Visual detection and processing of motion in insects is thought to occur based on an elementary delay-and-correlate operation at an early stage in the visual pathway. The correlational elementary motion detector (EMD) indicates the presence of moving stimuli on the retina and is directionally sensitive, but it is a complex spatiotemporal filter and does not inherently encode important motion parameters such as velocity. However, additional processing, in combination with natural visual stimuli, may allow computation of useful motion parameters. One such feature is adaptation in response to motion, until recently thought to occur by modification of the delay time constant, but now shown to arise due mainly to adjustment of contrast gain. This adaptation renders EMD output less dependent on scene contrast and enables it to carry some velocity information. We describe an ongoing effort to characterize this system in engineering terms, and to implement an analog VLSI model of it. Building blocks for a correlational EMD, and a mechanism for computing and implementing adjustment of contrast gain are described. This circuitry is intended as front-end processing for classes of higher-level visual motion computation also performed by insects, including estimation of egomotion by optical flow, and detection of moving targets.
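The contrast-gain adjustment can be sketched as divisive normalisation by a running estimate of the signal's magnitude. This is a simplified software analogue of the circuit behaviour, with illustrative parameters, not the aVLSI implementation itself.

```python
import numpy as np

def adapt_contrast(x, tau=20.0, dt=1.0, eps=0.05):
    """Divide the signal by a slowly tracked estimate of its own
    magnitude, so output amplitude depends weakly on input contrast."""
    a = dt / (tau + dt)
    est = eps                           # running magnitude estimate
    out = np.empty_like(x)
    for t in range(len(x)):
        out[t] = x[t] / (est + eps)     # divisive contrast gain control
        est += a * (abs(x[t]) - est)
    return out

t = np.arange(400)
low = adapt_contrast(0.1 * np.sin(2 * np.pi * t / 40))   # 10% contrast
high = adapt_contrast(0.9 * np.sin(2 * np.pi * t / 40))  # 90% contrast
```

After adaptation, the two outputs differ in amplitude by far less than the nine-fold contrast difference at the input, so downstream EMD responses carry more velocity information and less contrast information.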
With a visual system that accounts for as much as 30% of the lifted mass, flying insects such as dragonflies and hoverflies invest more in vision than any other animal. Impressive visual performance is subserved by a surprisingly simple visual system. In a typical insect eye, between 2,000 and 30,000 pixels in the image are analyzed by fewer than 200,000 neurons in underlying neural circuits. The combination of sophisticated visual processing with an approachable level of complexity has made the insect visual system a leading model for biomimetic approaches to computer vision. Much neurobiological research has focused on neural circuits used for detection of moving patterns (e.g. optical flow during flight) and moving targets (e.g. prey). Research from several labs has led to great advances in our understanding of the neural mechanisms involved, and has spawned neuromorphic hardware based on key processes identified in neurobiological experiments. Despite its attractions, the highly non-linear nature of several key stages in insect visual processing presents a challenge to understanding. I will describe examples of adaptive elements of neural circuits in the fly visual system which analyze the direction and velocity of wide-field optical flow patterns and the result of experiments that suggest that these non-linearities may contribute to robust responses to natural image motion.