A novel approach to photonic A/D conversion using a distributed neural network, oversampling techniques, and a smart-pixel hardware implementation is described. In this approach, the input signal is first sampled at a rate higher than that required by the Nyquist criterion and then presented spatially as the input to a 2D error diffusion neural network consisting of M × N neurons, each representing a pixel in the image space. The neural network processes the oversampled analog input image and produces an M × N-pixel binary, or halftoned, output image. Decimation and low-pass filtering techniques, common to classical 1D oversampling A/D converters, digitally sum and average the M × N-pixel binary output image using high-speed digital electronic circuitry. Because each pixel in this 2D smart-pixel neural approach constitutes a simple oversampling modulator, the result is a distributed A/D architecture. Spectral noise shaping across the array diffuses the quantization error, improving overall signal-to-noise ratio performance. Each quantizer within the network is embedded in a fully connected, distributed mesh feedback loop that spectrally shapes the overall quantization noise, significantly reducing the effects of component mismatch typically associated with parallel or channelized A/D approaches. This 2D neural approach provides higher aggregate bit rates, which can extend the useful bandwidth of photonic-based oversampling A/D converters.
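The processing chain described above (oversample an analog value into a 2D array, halftone it with error diffusion, then sum and average the binary pixels) can be sketched numerically as follows. This is a minimal illustration, not the paper's hardware implementation: the Floyd-Steinberg weights stand in for the network's error diffusion kernel, and the array size `M × N` and input level `x` are arbitrary assumptions.

```python
import numpy as np

def error_diffusion_halftone(image):
    """Binarize a 2D analog image, diffusing each pixel's quantization
    error to its unprocessed neighbors (Floyd-Steinberg weights used
    here as an illustrative diffusion kernel)."""
    img = image.astype(float).copy()
    rows, cols = img.shape
    out = np.zeros_like(img)
    for r in range(rows):
        for c in range(cols):
            out[r, c] = 1.0 if img[r, c] >= 0.5 else 0.0
            err = img[r, c] - out[r, c]
            if c + 1 < cols:
                img[r, c + 1] += err * 7 / 16
            if r + 1 < rows:
                if c > 0:
                    img[r + 1, c - 1] += err * 3 / 16
                img[r + 1, c] += err * 5 / 16
                if c + 1 < cols:
                    img[r + 1, c + 1] += err * 1 / 16
    return out

# Oversampling: replicate one analog sample x across an M x N array,
# halftone it, then decimate by summing and averaging the binary output.
M, N = 16, 16           # assumed array size for illustration
x = 0.3                 # assumed analog input level in [0, 1]
analog_image = np.full((M, N), x)
binary_image = error_diffusion_halftone(analog_image)
digital_estimate = binary_image.mean()  # decimation / low-pass step
```

Averaging the binary image recovers an estimate of the input level whose accuracy grows with the array size, which is the sense in which each pixel acts as a simple oversampling modulator in a distributed converter.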