Pooling networks of noisy threshold devices are good models for natural networks (e.g. neural networks in some
parts of sensory pathways in vertebrates, networks of mossy fibers in the hippocampus, . . . ) as well as for
artificial networks (e.g. digital beamformers for sonar arrays, flash analog-to-digital converters, rate-constrained
distributed sensor networks, . . . ). Such pooling networks exhibit the curious effect of suprathreshold stochastic
resonance: a nonzero noise level exists that optimizes the network's information transmission, so that an optimal stochastic control of the network is possible.
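The basic model can be illustrated with a minimal numerical sketch: each threshold device receives the same input corrupted by its own independent noise, and the network output is the count of devices that fire. The function name, Gaussian noise assumption, and parameter values below are illustrative choices, not the paper's notation.

```python
import numpy as np

rng = np.random.default_rng(0)

def pooling_network(x, thresholds, noise_std, rng):
    """One forward pass of a pooling network: each of the N devices compares
    the common input x plus its own independent Gaussian noise sample to its
    threshold; the network output is the number of devices that fire,
    an integer in 0..N."""
    noise = rng.normal(0.0, noise_std, size=thresholds.shape)
    return int(np.sum(x + noise > thresholds))

# A network of N = 15 identical threshold devices, thresholds at 0,
# driven by a fixed suprathreshold input.
thetas = np.zeros(15)
y = pooling_network(0.8, thetas, noise_std=0.5, rng=rng)
```

Because the noise samples are independent across devices, repeated passes with the same input produce different counts; it is this noise-induced diversity of firing patterns that carries the extra information underlying suprathreshold stochastic resonance.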
Recently, some progress has been made in understanding pooling networks of identical, but independently
noisy, threshold devices. One aspect concerns the behavior of information processing in the asymptotic limit of
large networks, which is a limit of high relevance for neuroscience applications. The mutual information between
the input and the output of the network has been evaluated, and its extremization has been performed. The
aim of the present work is to extend these asymptotic results to study the more general case when the threshold
values are no longer identical. In this situation, the thresholds are described by a density rather than by their
exact locations. We present a derivation of Shannon's mutual information between the input and the output
of these networks. The result is an approximation that relies on a weak version of the law of large numbers and a
version of the central limit theorem. Optimization of the mutual information is then discussed.
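The quantity being approximated can be checked numerically for small networks. The sketch below is a direct Monte Carlo estimate of I(X;Y) = H(Y) - H(Y|X), not the paper's asymptotic derivation: it assumes a Gaussian input, Gaussian device noise, and thresholds drawn from a Gaussian density, and the helper names are illustrative. With non-identical thresholds the conditional output distribution is Poisson-binomial rather than binomial, which is computed exactly by iterated convolution.

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def poisson_binomial_pmf(p):
    """PMF of a sum of independent, non-identical Bernoulli(p_i) variables,
    built by iterated convolution (exact, and fast for modest N)."""
    pmf = np.array([1.0])
    for pi in p:
        pmf = np.convolve(pmf, [1.0 - pi, pi])
    return pmf

def entropy_bits(pmf):
    """Shannon entropy in bits of a discrete distribution."""
    pmf = pmf[pmf > 0]
    return -np.sum(pmf * np.log2(pmf))

def mutual_information(thresholds, noise_std, x_samples):
    """Monte Carlo estimate of I(X;Y) = H(Y) - H(Y|X) in bits, where
    Y counts the devices with x + noise_i > theta_i and the noise is
    i.i.d. Gaussian with standard deviation noise_std."""
    cond = []
    for x in x_samples:
        # Firing probability of device i: P(noise_i > theta_i - x).
        p = np.array([0.5 * math.erfc((t - x) / (noise_std * math.sqrt(2.0)))
                      for t in thresholds])
        cond.append(poisson_binomial_pmf(p))
    cond = np.array(cond)
    h_y = entropy_bits(cond.mean(axis=0))                      # H(Y)
    h_y_given_x = np.mean([entropy_bits(row) for row in cond])  # H(Y|X)
    return h_y - h_y_given_x

# N = 15 devices with thresholds drawn from a Gaussian density,
# Gaussian input X, moderate device noise.
n = 15
x_samples = rng.normal(0.0, 1.0, size=2000)
thetas = rng.normal(0.0, 0.7, size=n)
mi = mutual_information(thetas, noise_std=0.4, x_samples=x_samples)
```

Since Y takes at most n + 1 values, the estimate is bounded above by log2(n + 1) bits, which gives a simple sanity check; sweeping `noise_std` over a grid reproduces the non-monotone information-versus-noise curve characteristic of suprathreshold stochastic resonance.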