We investigate the application of learned convolutional filters in multi-object visual tracking. The filters were learned in both a supervised and an unsupervised manner from image data using artificial neural networks. This work follows recent results in the field of machine learning that demonstrate the use of learned filters for enhanced object detection and classification. Here we employ a track-before-detect approach to multi-object tracking, in which tracking guides the detection process. The object detector provides a probabilistic input image, calculated by selecting from features obtained using banks of generative or discriminative learned filters. We present a systematic evaluation of these convolutional filters on a real-world data set, examining their performance as generic object detectors.
The intensity of analog stimuli, such as the loudness of sounds, is converted by our biological sensory systems
into short duration electrical pulses in nerve fibres. These pulses are known as action potentials. In many cases,
the transduction process that converts stimulus intensity into an action-potential encoding introduces significant
randomness that appears to reduce the quality of the encoding. Due to this inherent random noise, it is the
average rate at which action potentials are produced, rather than the instantaneous rate, that encodes stimulus
amplitude. In this paper, the limits of performance of this transduction process are analyzed using an
information-theoretic perspective on neural rate coding.
Pooling networks are composed of independent neurons that all noisily process the same information in
parallel. The output of each neuron is summed into a single output by a fusion center. In this paper we study
such a network in a detection or discrimination task. It is shown that if the network is not properly matched to
the symmetries of the detection problem, internal noise may at least partially restore optimality.
This is shown for both (i) noisy threshold neuron models and (ii) Poisson neuron models. We also study an
optimized version of the network, mimicking the notion of excitation/inhibition. We show that, when properly
tuned, the network may reach optimality in a very robust way. Furthermore, we find in this optimization that
some neurons remain inactive. The pattern of inactivity is organized in a strange branching structure, the
meaning of which remains to be elucidated.
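A minimal numerical sketch of such a pooling network (threshold devices with independent additive noise and a summing fusion centre; the network size, noise levels, and estimator settings here are arbitrary choices, not those of the paper) shows the noise-induced benefit: a plug-in mutual information estimate is larger at a moderate noise level than at near-zero noise, since noiseless identical thresholds can transmit only one bit:

```python
import numpy as np

rng = np.random.default_rng(0)

def pooled_output(x, n_devices, noise_sigma, rng):
    # Each device thresholds x plus its own independent noise at zero; the
    # fusion centre sums the binary outputs into a value in 0..n_devices.
    noise = rng.normal(0.0, noise_sigma, size=(x.size, n_devices))
    return ((x[:, None] + noise) > 0.0).sum(axis=1)

def mutual_information(x, y, n_xbins=32):
    # Plug-in estimate of I(X;Y) in bits from a joint histogram.
    edges = np.quantile(x, np.linspace(0.0, 1.0, n_xbins + 1)[1:-1])
    xb = np.digitize(x, edges)
    joint = np.zeros((n_xbins, int(y.max()) + 1))
    np.add.at(joint, (xb, y), 1.0)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

x = rng.normal(0.0, 1.0, 20000)   # common Gaussian input signal
mi_low = mutual_information(x, pooled_output(x, 15, 0.01, rng))
mi_mid = mutual_information(x, pooled_output(x, 15, 0.7, rng))
print(mi_low, mi_mid)             # the noisier array transmits more information
```

With near-zero noise all 15 devices switch together, giving roughly one bit; independent noise decorrelates them and the summed output carries more information.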
We examine the question of how a population of independently noisy sensory neurons should be configured to
optimize the encoding of a random stimulus into sequences of neural action potentials. For the case where firing
rates are the same in all neurons, we consider the problem of optimizing the noise distribution for a known
stimulus distribution, and the converse problem of optimizing the stimulus for a given noise distribution. This
work is related to suprathreshold stochastic resonance (SSR). It is shown that, for a large number of neurons,
the SSR model is equivalent to a single rate-coding neuron with multiplicative output noise.
Proc. SPIE. 6600, Noise and Fluctuations in Circuits, Devices, and Materials
KEYWORDS: Signal to noise ratio, Interference (communication), Distortion, Nonlinear optics, Signal processing, Amplifiers, Data modeling, Chemical elements, Electronic circuits, Optimization (mathematics)
We investigate the possibility of building linear amplifiers capable of enhancing the Signal-to-Noise and Distortion
Ratio (SNDR) of sinusoidal input signals using simple non-linear elements. Other works have proven that it is
possible to enhance the Signal-to-Noise Ratio (SNR) by using limiters. In this work we study a soft limiter
non-linear element with and without hysteresis. We show that the SNDR of sinusoidal signals can be enhanced
by 0.94 dB using a wideband soft limiter and up to 9.68 dB using a wideband soft limiter with hysteresis. These
results indicate that linear amplifiers could be constructed using non-linear circuits with hysteresis. This paper
presents mathematical descriptions for the non-linear elements using statistical parameters. Using these models,
the input-output SNDR enhancement is obtained by optimizing the non-linear transfer function parameters to
maximize the output SNDR.
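The SNDR measurement itself is straightforward to reproduce numerically. The sketch below (a memoryless soft limiter with an arbitrary clipping level and an arbitrary 10 dB input SNR, not the optimized parameters of the paper) extracts the fundamental by coherent correlation and treats everything left over as noise plus distortion:

```python
import numpy as np

rng = np.random.default_rng(1)

def sndr_db(y, t, f0):
    # Extract the fundamental by coherent correlation; everything left over
    # (after removing any DC offset) counts as noise plus distortion.
    ref_s = np.sin(2 * np.pi * f0 * t)
    ref_c = np.cos(2 * np.pi * f0 * t)
    a = 2 * np.mean(y * ref_s)
    b = 2 * np.mean(y * ref_c)
    fund = a * ref_s + b * ref_c
    resid = y - fund - np.mean(y - fund)
    return 10 * np.log10(np.sum(fund ** 2) / np.sum(resid ** 2))

def soft_limiter(v, limit=0.5):
    # Memoryless soft limiter: linear inside +/-limit, clipped outside.
    return np.clip(v, -limit, limit)

fs, f0, n = 1000.0, 13.0, 200000
t = np.arange(n) / fs             # spans an integer number of signal periods
x = np.sin(2 * np.pi * f0 * t) + rng.normal(0.0, np.sqrt(0.05), n)  # 10 dB in

sndr_in = sndr_db(x, t, f0)
sndr_out = sndr_db(soft_limiter(x), t, f0)
print(sndr_in, sndr_out)
```

The measured input SNDR recovers the 10 dB we set; the output figure mixes residual noise with clipping harmonics, which is exactly the trade-off the optimization in the paper addresses.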
Pooling networks of noisy threshold devices are good models for natural networks (e.g. neural networks in some
parts of sensory pathways in vertebrates, networks of mossy fibers in the hippocampus, ...) as well as for
artificial networks (e.g. digital beamformers for sonar arrays, flash analog-to-digital converters, rate-constrained
distributed sensor networks, ...). Such pooling networks exhibit the curious effect of suprathreshold stochastic
resonance, which means that an optimal stochastic control of the network exists.
Recently, some progress has been made in understanding pooling networks of identical, but independently
noisy, threshold devices. One aspect concerns the behavior of information processing in the asymptotic limit of
large networks, which is a limit of high relevance for neuroscience applications. The mutual information between
the input and the output of the network has been evaluated, and its extremization has been performed. The
aim of the present work is to extend these asymptotic results to study the more general case when the threshold
values are no longer identical. In this situation, the values of thresholds can be described by a density, rather
than by exact locations. We present a derivation of Shannon's mutual information between the input and output
of these networks. The result is an approximation that relies on a weak version of the law of large numbers, and a
version of the central limit theorem. Optimization of the mutual information is then discussed.
A model of a biological sensory neuron stimulated by a noisy analog information source is considered. It is
demonstrated that action-potential generation by the neuron model can be described in terms of lossy compression
theory. Lossy compression is generally characterized by (i) how much distortion is introduced, on average, due
to a loss of information, and (ii) the 'rate,' or the amount of compression. Conventional compression theory is
used to measure the performance of the model in terms of both distortion and rate, and the tradeoff between
each. The model's applicability to a number of situations relevant to biomedical engineering, including cochlear
implants, and bio-sensors is discussed.
Consider a quantization scheme which has the aim of quantizing a signal into N+1 discrete output states. The specification of such a scheme has two parts. Firstly, in the encoding stage, the specification of N unique threshold values is required. Secondly, the decoding stage requires specification of N+1 unique reproduction values. Thus, in general, 2N+1 unique values are required for a complete specification. We show in this paper how noise can be used to reduce the number of unique values required in the encoding stage. This is achieved by allowing the noise to effectively make all thresholds independent random variables, the end result being a stochastic quantization. This idea originates from a form of stochastic resonance known as suprathreshold stochastic resonance. Stochastic resonance occurs when noise in a system is essential for that system to provide its optimal output, and can only occur in nonlinear systems, one prime example being neurons. The use of noise requires a tradeoff in performance; however, we show that even very low signal-to-noise ratios can provide a reasonable average performance for a substantial reduction in complexity, and that high signal-to-noise ratios can also provide a reduction in complexity for only a negligible degradation in performance.
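A sketch of such a stochastic quantizer in the spirit of the abstract (a single shared threshold made effectively random by independent noise, with conditional-mean decoding; the population size, noise level, and Gaussian source are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

N = 31                                   # comparators, hence N + 1 output states
x = rng.normal(0.0, 1.0, 100000)         # Gaussian source to be quantized

# Encoder: one shared threshold at zero, made effectively distinct per
# comparator by independent additive noise (the stochastic quantization).
noise = rng.normal(0.0, 0.8, (x.size, N))
states = ((x[:, None] + noise) > 0.0).sum(axis=1)

# Decoder: reproduce each state by its conditional mean E[x | state],
# estimated here directly from the data.
repro = np.array([x[states == k].mean() if np.any(states == k) else 0.0
                  for k in range(N + 1)])
mse = float(np.mean((x - repro[states]) ** 2))
print(mse)   # average distortion; compare with the source variance of 1
```

Only one threshold value is specified in the encoder, yet the noise spreads the switching points out so the distortion falls well below the source variance, at the cost of a noise-induced performance penalty relative to N distinct deterministic thresholds.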
It is shown that Suprathreshold Stochastic Resonance (SSR) is
effectively a way of using noise to perform quantization or lossy
signal compression with a population of identical threshold-based
devices. Quantization of an analog signal is a fundamental
requirement for its efficient storage or compression in a digital
system. This process will always result in a loss of quality,
known as distortion, in a reproduction of the original signal. The
distortion can be decreased by increasing the number of states
available for encoding the signal (measured by the rate, or mutual
information). Hence, designing a quantizer requires a tradeoff
between distortion and rate. Quantization theory has recently been
applied to the analysis of neural coding and here we examine the
possibility that SSR is a possible mechanism used by populations
of sensory neurons to quantize signals. In particular, we analyze
the rate-distortion performance of SSR for a range of input SNRs
and show that both the optimal distortion and the optimal rate occur
for an input SNR of about 0 dB, which is a biologically plausible
situation. Furthermore, we relax the constraint that all
thresholds are identical, and find the optimal threshold values
for a range of input SNRs. We find that for sufficiently small
input SNRs, the optimal quantizer is one in which all thresholds
are identical, that is, the SSR situation is optimal in this case.
Technology advances tend to reduce minimum dimensions and source voltages to maintain scaling rules. Both scaling trends make noise more critical, reduce yield and increase device parameter fluctuations. This paper presents a statistical model that permits the study of noise and parameter deviations in gates. Using this model, stochastic resonance (SR) is studied both in single devices and arrays for subthreshold and suprathreshold input signals. The SR is measured by the signal-to-noise ratio (SNR) in the time domain, and a modified SNR is proposed to take into account all the effects induced by noise in gates. With this measure, subthreshold and suprathreshold SR are reviewed. Finally, a discussion of the possibility of considering noise a part of electronic circuits is presented, suggesting that it could be a solution to some of the emerging problems in future nanotechnologies.
We present an analysis of the use of suprathreshold stochastic resonance for analog to digital conversion. Suprathreshold stochastic resonance is a phenomenon where the presence of internal or input noise provides the optimal response from a system of identical parallel threshold devices such as comparators or neurons. Under the conditions where this occurs, such a system is effectively a non-deterministic analog to digital converter. In this paper we compare the suprathreshold stochastic resonance effect to conventional analog to digital conversion by analysing the rate-distortion trade-off of each.
The phenomenon of noise enhanced signal transfer, or stochastic
resonance, has been observed in many nonlinear systems such as
neurons and ion channels. Initial studies of stochastic resonance
focused on systems driven by a periodic signal, and hence used a
signal to noise ratio based measure for comparison between the
input and output of the system. It has been pointed out that for
the more general case of aperiodic signals other measures are
required, such as cross-correlation or information theoretical
tools. In this paper we present simulation results obtained in a
model neural system driven by a broadband aperiodic signal, and
producing a signal imitating neural spikes. The system is analyzed
by using cross-spectral measures.
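Cross-spectral measures of this kind can be sketched with a Welch-style magnitude-squared coherence estimate. The white-noise input and crude threshold "spike" generator below are illustrative stand-ins, not the neuron model of the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

def coherence(x, y, nseg=256):
    # Magnitude-squared coherence from averaged per-segment cross-spectra
    # (a Welch-style estimate without windowing, adequate for white inputs).
    segs = len(x) // nseg
    X = np.fft.rfft(x[:segs * nseg].reshape(segs, nseg), axis=1)
    Y = np.fft.rfft(y[:segs * nseg].reshape(segs, nseg), axis=1)
    Sxy = (np.conj(X) * Y).mean(axis=0)
    Sxx = (np.abs(X) ** 2).mean(axis=0)
    Syy = (np.abs(Y) ** 2).mean(axis=0)
    return np.abs(Sxy) ** 2 / (Sxx * Syy)

n = 256 * 400
s = rng.standard_normal(n)                                 # broadband aperiodic input
spikes = (s + rng.standard_normal(n) > 1.5).astype(float)  # crude 0/1 'spike' output
c = coherence(s, spikes)
c_mean = float(c[1:].mean())   # skip the DC bin, which reflects the mean firing rate
print(c_mean)
```

The nonzero broadband coherence shows that the binary spike-like output still carries the aperiodic input, which is the kind of relationship a signal-to-noise ratio at a single frequency cannot capture.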
Proc. SPIE. 5467, Fluctuations and Noise in Biological, Biophysical, and Biomedical Systems II
KEYWORDS: Interference (communication), Signal to noise ratio, Stochastic processes, Signal detection, Resonance enhancement, Electronic circuits, Signal processing, Neurons, Switching, Systems modeling
Noise is a key factor in information processing systems. This fact will be even more critical in new technologies, as dimensions continue to scale down. New design methodologies tolerant to, or even taking advantage of, noise need to be considered. In this work the possibility of using stochastic resonance (SR) in electronic circuits is studied. We demonstrate that nearly any kind of perturbing signal can produce a noise resonance, thus extending the stochastic resonance concept. In this paper we have explored stochastic, chaotic, deterministic and coupled noise perturbations. The relationship between input signal and input noise amplitudes in the noise resonance regime is analyzed, providing a rule for operation in this situation. Finally, we present a simulation study demonstrating that noise resonance is robust to non-ideal behaviors of non-linear devices. Together, these three results allow direct use of generalized noise resonance (GNR) in electronic circuits.
Suprathreshold Stochastic Resonance (SSR) is a recently discovered
form of stochastic resonance that occurs in populations of neuron-like devices. A key feature of SSR is that all devices in the population possess identical threshold nonlinearities. It has
previously been shown that information transmission through such a
system is optimized by nonzero internal noise. It is also clear
that it is desirable for the brain to transfer information in an
energy efficient manner. In this paper we discuss the energy efficient maximization of information transmission for the case of
variable thresholds and constraints imposed on the energy available to the system, as well as minimization of energy for the case of a fixed information rate. We aim to demonstrate that under certain conditions, the SSR configuration of all devices having identical thresholds is optimal. The novel feature of this work is that optimization is performed by finding the optimal threshold settings for the population of devices, which is equivalent to solving a noisy optimal quantization problem.
Consider an array of threshold devices, such as neurons or
comparators, where each device receives the same input signal, but
is subject to independent additive noise. When the output from
each device is summed to give an overall output, the system acts
as a noisy Analog to Digital Converter (ADC). Recently, such a
system was analyzed in terms of information theory, and it was
shown that under certain conditions the transmitted information
through the array is maximized for non-zero noise. Such a
phenomenon where noise can be of benefit in a nonlinear system is
termed Stochastic Resonance (SR). The effect in the array of
threshold devices was termed Suprathreshold Stochastic Resonance
(SSR) to distinguish it from conventional forms of SR, in which
a signal usually needs to be subthreshold for the effect to occur.
In this paper we investigate the efficiency of the analog to
digital conversion when the system acts like an array of simple neurons, by analyzing the average distortion incurred and comparing this distortion to that of a conventional flash ADC.
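The distortion comparison can be sketched numerically: a conventional flash-ADC-like quantizer with N distinct noiseless thresholds versus the SSR array with identical thresholds and independent noise, both decoded by conditional means (the thresholds, noise level, and Gaussian source are illustrative choices, not the paper's exact setup):

```python
import numpy as np

rng = np.random.default_rng(5)

N = 15
x = rng.standard_normal(200000)          # common Gaussian input

def mse(x, y, n_states):
    # Decode each output state by its conditional mean and measure distortion.
    repro = np.array([x[y == k].mean() if np.any(y == k) else 0.0
                      for k in range(n_states)])
    return float(np.mean((x - repro[y]) ** 2))

# Conventional flash-ADC-like quantizer: N distinct, noiseless thresholds.
thresholds = np.quantile(x, np.arange(1, N + 1) / (N + 1))
y_flash = np.digitize(x, thresholds)

# SSR array: identical zero thresholds, independent noise in each comparator.
y_ssr = ((x[:, None] + rng.normal(0.0, 0.6, (x.size, N))) > 0.0).sum(axis=1)

m_flash = mse(x, y_flash, N + 1)
m_ssr = mse(x, y_ssr, N + 1)
print(m_flash, m_ssr)   # the deterministic converter incurs less distortion
```

As expected, the deterministic converter distorts less; the interest of the SSR array is that it achieves a usable distortion with only one threshold value specified.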
Proc. SPIE. 5114, Noise in Complex Systems and Stochastic Dynamics
KEYWORDS: Signal to noise ratio, Binary data, Interference (communication), Stochastic processes, Data processing, Complex systems, Systems modeling, Signal processing, Information theory, Signal detection
The data processing inequality of information theory states that given random variables X, Y and Z which form a Markov chain in the order X-->Y-->Z, then the mutual information between X and Y is greater than or equal to the mutual information between X and Z. That is, I(X;Y) >= I(X;Z). In practice, this means that no more information can be obtained out of a set of data than was there to begin with; in other words, there is a bound on how much can be accomplished with signal processing. However, in the field of stochastic resonance, it has been reported that a signal-to-noise ratio gain can occur in some nonlinear systems due to the addition of noise. Such an observation appears to contradict the data processing inequality. In this paper, we investigate this question by using an example model system.
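The inequality is easy to check numerically for a discrete Markov chain; the binary source and channel matrices below are arbitrary examples, not from the paper:

```python
import numpy as np

def mi(pxy):
    # Exact mutual information (in bits) of a discrete joint distribution.
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

px = np.array([0.5, 0.5])                     # binary source X
Pyx = np.array([[0.9, 0.1], [0.2, 0.8]])      # channel X --> Y
Pzy = np.array([[0.7, 0.3], [0.3, 0.7]])      # processing stage Y --> Z

pxy = px[:, None] * Pyx                       # joint distribution of (X, Y)
pxz = pxy @ Pzy                               # joint of (X, Z) via the Markov chain
print(mi(pxy), mi(pxz))                       # I(X;Y) >= I(X;Z)
```

Whatever the second-stage channel, processing Y can only lose information about X, which is why an apparent SNR gain from added noise needs careful interpretation rather than being a true information gain.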
Consider an array of parallel comparators (threshold devices) receiving the same input signal, but subject to independent noise, where the output from each device is summed to give an overall output. Such an array is a good model of a number of nonlinear systems including flash analogue to digital converters, sonar arrays and parallel neurons. Recently, this system was analysed by Stocks in terms of information theory, who showed that under certain conditions the transmitted information through the array is maximised for non-zero noise. This phenomenon was termed Suprathreshold Stochastic Resonance (SSR). In this paper we give further results related to the maximisation of the transmitted information in this system.
For an array of N summing comparators, each with the same internal noise, how should the set of thresholds, θ_i, be arranged to maximize the information at the output, given the input signal, x, has an arbitrary probability density, P(x)? This problem is easy to solve when there is no internal noise. In this case, the transmitted information is equal to the entropy of the output signal, y. For N comparators there are N+1 possible output states and hence y can take on N+1 values. The transmitted information is maximized when all output states have the same probability of occupation, that is, 1/(N+1). In this paper we address some preliminary considerations relating to the maximization of the transmitted information I = H(y) - H(y|x) when there is finite internal noise.
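In the noiseless case the abstract describes, the solution is to place the N thresholds at the k/(N+1) quantiles of P(x). A quick empirical check with a Gaussian source (N = 7 chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(3)

N = 7                                        # comparators: N + 1 = 8 output states
x = rng.standard_normal(500000)              # samples from the input density P(x)

# Noiseless optimum: thresholds at the k/(N+1) quantiles of P(x), so every
# output state is occupied with probability 1/(N+1).
thetas = np.quantile(x, np.arange(1, N + 1) / (N + 1))
y = np.digitize(x, thetas)                   # output state, 0..N

p = np.bincount(y, minlength=N + 1) / x.size
entropy = float(-(p * np.log2(p)).sum())
print(entropy)        # approaches log2(N + 1) = 3 bits
```

Equal occupation probabilities maximize the output entropy, and with no internal noise H(y|x) = 0, so the transmitted information attains its ceiling of log2(N+1) bits.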