Current digital imaging devices typically let the user capture either still frames at a high spatial resolution or a short video clip at a lower spatial resolution. Given the bandwidth limitations inherent to any sensor, there is a clear tradeoff between spatial and temporal sampling rates, one that present-day sensors do not exploit. Because the fixed sampling rate normally used is not matched to the temporal and spatial content of the scene, artifacts such as aliasing and motion blur appear; moreover, the available transmission or memory bandwidth of the camera is not optimally utilized. In this paper we outline a framework for an adaptive sensor in which the spatial and temporal sampling rates are adapted to the scene: the sensor is adjusted to capture the scene according to its content. In the adaptation process, the spatial and temporal content of the video sequence is measured to determine the required sampling rate. We propose a robust, computationally inexpensive content measure that operates in the spatio-temporal domain, as opposed to traditional frequency-domain methods, and we show that it is accurate and robust in the presence of noise and aliasing. The resulting varying-sampling-rate stream captures the scene more efficiently and with fewer artifacts, so that in a post-processing step an enhanced-resolution sequence can be composed, or the scene can be captured at an overall lower bandwidth with small distortion.
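The abstract does not specify the content measure itself; as an illustrative sketch only, one simple spatio-temporal measure (not necessarily the authors' method) scores spatial activity by mean absolute intensity gradients within a frame and temporal activity by mean absolute differences between consecutive frames, then splits a fixed bandwidth budget between the two sampling rates in proportion to those scores. The function names and the normalization constants `s_ref`/`t_ref` are hypothetical.

```python
import numpy as np

def spatiotemporal_content(frames):
    """Illustrative spatio-temporal content measure for a video clip.

    frames: array of shape (T, H, W), grayscale intensities.
    Returns (spatial, temporal): mean absolute spatial gradient and
    mean absolute frame-to-frame difference.
    """
    frames = np.asarray(frames, dtype=float)
    # Spatial content: mean magnitude of vertical and horizontal gradients.
    gy = np.abs(np.diff(frames, axis=1)).mean()
    gx = np.abs(np.diff(frames, axis=2)).mean()
    spatial = gx + gy
    # Temporal content: mean absolute difference between consecutive frames.
    temporal = np.abs(np.diff(frames, axis=0)).mean()
    return spatial, temporal

def choose_rates(spatial, temporal, budget=1.0, s_ref=0.1, t_ref=0.1):
    """Split a fixed bandwidth budget between spatial and temporal
    sampling rates in proportion to the measured content.
    s_ref and t_ref are hypothetical normalization constants."""
    s = spatial / s_ref
    t = temporal / t_ref
    total = (s + t) or 1.0  # avoid division by zero for a flat, static clip
    return budget * s / total, budget * t / total
```

A static, highly textured scene would then be assigned most of the budget to spatial resolution, while a fast-changing flat scene would be assigned a higher frame rate, mirroring the content-adaptive capture the abstract describes.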