Computation in artificial perceptual systems assumes that appropriate and reliable sensory information about the environment is available. However, today's sensors cannot guarantee optimal information at all times. For example, when an image from a CCD camera saturates, the entire vision system fails, regardless of how algorithmically sophisticated it is. The principal goal of sensory computing is to extract useful information about the environment from such 'imperfect' sensors. This paper generalizes our experience with smart vision sensors and illustrates how the complex spatio-temporal interactions among image formation, signal detectors, and on-chip processing can be exploited to extract a surprising amount of useful information from on-chip systems. The examples presented include VLSI sensory computing systems for adaptive imaging, ultra-fast feature tracking with attention, and ultra-fast range imaging. Using these examples, we illustrate how sensory computing can extract unique, rich, and otherwise unobtainable sensory information when an appropriate balance is maintained among sensing modality, algorithms, and available technology.