Cochlear implants are prosthetic devices used to provide hearing to people who would otherwise be profoundly deaf.
The deliberate addition of noise to the electrode signals could increase the amount of information transmitted, yet
standard cochlear implants do not replicate the noise characteristic of normal hearing: with a limited number of
electrodes, noise added in an uncontrolled manner will almost certainly degrade performance. Mechanisms such as
suprathreshold stochastic resonance can be effective only if partially independent stochastic activity can be achieved
in each nerve fibre.
We are investigating the use of stochastic beamforming to achieve greater independence. The strategy involves
presenting each electrode with a linear combination of independent Gaussian noise sources. Because the cochlea is filled
with conductive salt solutions, the noise currents from the electrodes interact and the effective stimulus for each nerve
fibre will therefore be a different weighted sum of the noise sources. To some extent, therefore, the effective
stimulus for a nerve fibre will be independent of the effective stimuli of neighbouring fibres.
For a particular patient, the electrode position and the amount of current spread are fixed. The objective is therefore to
find the linear combination of noise sources that leads to the greatest independence between nerve discharges. In this
theoretical study we show that it is possible to get one independent point of excitation (one null) for each electrode and
that stochastic beamforming can greatly decrease the correlation between the noise exciting different regions of the
cochlea.
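The scheme can be illustrated numerically. In the sketch below, every parameter is hypothetical (an exponential current-spread profile, four electrodes, forty fibre positions): driving the electrodes with a mixed version of the noise sources, where the mixing matrix inverts the spread matrix restricted to one target fibre per electrode, leaves those target fibres with essentially uncorrelated effective noise, whereas raw noise on the electrodes leaves them strongly correlated.

```python
import numpy as np

rng = np.random.default_rng(0)

n_elec = 4      # electrodes
n_fibres = 40   # nerve-fibre positions along the cochlea
T = 20000       # noise samples

# Hypothetical exponential current-spread model: electrode e contributes
# exp(-|pos_f - pos_e| / lam) to the effective stimulus at fibre f.
fibre_pos = np.linspace(0.0, 1.0, n_fibres)
elec_pos = np.linspace(0.1, 0.9, n_elec)
lam = 0.3
S = np.exp(-np.abs(fibre_pos[:, None] - elec_pos[None, :]) / lam)

noise = rng.standard_normal((n_elec, T))  # independent Gaussian sources

# Without beamforming: each electrode carries one raw noise source.
eff_plain = S @ noise

# Stochastic beamforming: drive the electrodes with W @ noise, where W is
# chosen so that the fibre nearest each electrode receives exactly one
# independent source (one independent point of excitation per electrode).
target = np.argmin(np.abs(fibre_pos[:, None] - elec_pos[None, :]), axis=0)
A = S[target, :]          # spread matrix restricted to the target fibres
W = np.linalg.inv(A)
eff_beam = S @ (W @ noise)

def mean_offdiag_corr(x):
    c = np.corrcoef(x)
    return np.abs(c[~np.eye(len(c), dtype=bool)]).mean()

print("mean |corr| at target fibres, plain     :",
      round(mean_offdiag_corr(eff_plain[target]), 3))
print("mean |corr| at target fibres, beamformed:",
      round(mean_offdiag_corr(eff_beam[target]), 3))
```

With this spread model the plain drive leaves neighbouring target fibres with correlations above 0.4, while the beamformed drive reduces them to sampling noise; away from the nulls the decorrelation is only partial, in line with one null per electrode.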
We have investigated how optimal coding for neural systems changes with the time available for decoding.
Optimization was in terms of maximizing information transmission. We have estimated the parameters for
Poisson neurons that optimize Shannon transinformation under the assumption of rate coding. We observed a
hierarchy of phase transitions from binary coding, for small decoding times, toward discrete (M-ary) coding
with two, three and more quantization levels for larger decoding times. We postulate that the presence of
subpopulations with specific neural characteristics could be a signature of an optimal population coding scheme
and we use the mammalian auditory system as an example.
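The effect can be reproduced in a toy calculation. The sketch below is illustrative only: equally spaced rate levels with equal priors stand in for the fully optimized parameters. It computes the transinformation between M equiprobable rate levels and a Poisson spike count collected over a decoding window T, then picks the best M; short windows favour binary coding and longer windows favour more quantization levels.

```python
import math

def poisson_pmf(lam, nmax):
    # exact Poisson probabilities P(N = 0..nmax); lam = 0 gives a point mass at 0
    return [math.exp(-lam) * lam ** n / math.factorial(n) for n in range(nmax + 1)]

def transinformation(rates, T, nmax=100):
    """Shannon transinformation I(X;N) in bits between equiprobable rate
    levels X and a Poisson count N collected over a decoding window T."""
    cond = [poisson_pmf(r * T, nmax) for r in rates]
    p = 1.0 / len(rates)
    marg = [p * sum(c[n] for c in cond) for n in range(nmax + 1)]
    return sum(p * c[n] * math.log2(c[n] / marg[n])
               for c in cond for n in range(nmax + 1) if c[n] > 0)

def best_M(T, r_max=1.0, M_range=range(2, 6)):
    # equally spaced rate levels between 0 and r_max (a crude stand-in for
    # optimized levels); pick the level count M with the highest transinformation
    return max(M_range, key=lambda M: transinformation(
        [r_max * i / (M - 1) for i in range(M)], T))

for T in [0.5, 2, 8, 30]:
    print(f"decoding window T={T:>4}: best M = {best_M(T)}")
```

Even with these crude uniform levels, the best M climbs from 2 toward larger values as T grows, mirroring the hierarchy of transitions described above.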
We have investigated information transmission in an array of threshold units with multiplicative noise that have a common input signal. We demonstrate a phenomenon similar to stochastic resonance with additive noise, and show that information transmission can be enhanced by a non-zero multiplicative noise level. Given that sensory neurons in the nervous system have multiplicative as well as additive noise sources, and that they act approximately like threshold units, our results suggest that multiplicative noise might be an essential part of neural coding.
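A minimal numerical version of such a setup, with all parameters chosen purely for illustration: N threshold units share a Gaussian input s, unit i fires when s(1 + sigma*xi_i) exceeds a threshold (xi_i being independent multiplicative Gaussian noise), and the information between the input and the population count is estimated from a two-dimensional histogram.

```python
import numpy as np

rng = np.random.default_rng(1)

def info_bits(s_bin, k, n_s, n_k):
    """Plug-in mutual information (bits) from paired discrete samples."""
    joint = np.zeros((n_s, n_k))
    np.add.at(joint, (s_bin, k), 1)
    joint /= joint.sum()
    ps = joint.sum(1, keepdims=True)
    pk = joint.sum(0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (ps @ pk)[nz])).sum())

def transmitted_info(sigma, n_units=16, theta=1.0, n_samp=200_000, n_sbins=32):
    s = rng.standard_normal(n_samp)                 # common input signal
    xi = rng.standard_normal((n_samp, n_units))     # per-unit noise
    y = (s[:, None] * (1.0 + sigma * xi)) > theta   # multiplicative noise
    k = y.sum(1)                                    # population spike count
    # quantile-bin the input so every s-bin is equally occupied
    edges = np.quantile(s, np.linspace(0, 1, n_sbins + 1)[1:-1])
    s_bin = np.searchsorted(edges, s)
    return info_bits(s_bin, k, n_sbins, n_units + 1)

for sigma in [0.0, 0.3, 0.6, 1.0]:
    print(f"sigma = {sigma:.1f}:  I ~ {transmitted_info(sigma):.2f} bits")
```

At sigma = 0 all units respond identically, so the count is binary and the information is capped at one bit; non-zero multiplicative noise spreads the thresholds apart and raises the transmitted information, in the spirit of the result above.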
The problem of estimating periodic properties of periodically non-stationary stochastic processes is studied. A recently introduced
measure of stochastic oscillations, the measure of periodicity (MP), is discussed. The MP estimates the "periodicity
level" of the oscillations, i.e. the ratio of the periodic to the non-periodic components of the stochastic process.
The MP differs fundamentally from the traditional measure, the signal-to-noise ratio (SNR), in that it also allows the
value of the oscillation period to be estimated. The MP is particularly useful in systems that display the stochastic
synchronisation phenomenon, in which the ratio of the periods of the external force and the response of the studied
system is m:n, where m and n are positive integers. The MP is used to study synchronisation in two different systems,
a bistable system and a neuronal model, driven by noise and a sinusoidal signal. The dependence of the MP on
parameters is compared with the behaviour of the cross-correlation coefficient and the effective diffusion
coefficient. The influence of asymmetry in the bistable system is also studied. In the autonomous neuronal model it is shown that the coherence resonance phenomenon is well described by the MP.
We consider the application of Gaussian channel theory (GCT) to the problem of estimating the rate of information transmission through a nonlinear channel such as a neural element. We suggest that, contrary to popular belief, GCT can be applied to neural systems even when the dynamics are highly nonlinear. We show that, under suitable conditions, the Gaussianity of the response is not compromised and hence GCT can be usefully applied. Using the GCT approach we develop a new method for estimating information rates in the time domain. Finally, using this new method, we show that a recently introduced form of stochastic resonance, termed suprathreshold stochastic resonance, is also displayed by the information rate.
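For reference, the classical frequency-domain form of the GCT estimate (not the time-domain method introduced here) is R = integral of log2(1 + S(f)/N(f)) df, with S and N the signal and noise power spectral densities. The sketch below applies it to a toy channel of our own devising, with a white Gaussian signal and independent white Gaussian noise at half the signal's amplitude:

```python
import numpy as np

rng = np.random.default_rng(2)

fs = 1000.0               # sampling rate (Hz)
n_seg, seg_len = 200, 1024

def welch_psd(x, seg_len, fs):
    """Average the periodograms of non-overlapping Hann-windowed segments."""
    w = np.hanning(seg_len)
    segs = x[: len(x) // seg_len * seg_len].reshape(-1, seg_len) * w
    return (np.abs(np.fft.rfft(segs, axis=1)) ** 2).mean(0) / (fs * (w ** 2).sum())

# Toy channel: the response is the signal plus independent Gaussian noise at
# half the signal's amplitude, i.e. SNR = 4 in every frequency bin.
n = n_seg * seg_len
signal = rng.standard_normal(n)
noise = 0.5 * rng.standard_normal(n)

S = welch_psd(signal, seg_len, fs)   # signal power spectral density
N = welch_psd(noise, seg_len, fs)    # noise power spectral density

# Gaussian-channel information rate: R = integral of log2(1 + S(f)/N(f)) df
df = fs / seg_len
R = np.log2(1.0 + S / N).sum() * df
print(f"estimated information rate: {R:.0f} bits/s")  # ~log2(5) bits/Hz over ~500 Hz
```

For a genuinely nonlinear channel the signal and noise spectra would instead be estimated from repeated presentations of the stimulus, and the formula then gives a bound rather than the exact rate, which is the regime the abstract addresses.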