In this paper we present a new method of compressing and decompressing video streams directly on video
sensors and display panels, without the use of on-board or independent DSP processors. This eliminates
compression/decompression processing delays, minimizes power use, reduces the physical size of the
system, and significantly reduces the bandwidth required for the live transmission of images. The method
enhances the capabilities of compact surveillance systems and UAVs, as well as battery-powered mobile
entertainment equipment (digital cameras, iPods, camcorders, camera cell phones, etc.). To achieve these
goals we make structural changes to the camera and display sub-system architectures so they can directly
process the compressed streams. In particular, we geometrically distribute the light detection and
generation elements to correspond to the DCT and ICT coefficients of the JPEG/MPEG algorithm. This
eliminates many system redundancies, which in turn delivers the aforementioned performance improvements.
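To make the coefficient-domain representation concrete, the following is a minimal sketch of the 8x8 2-D DCT and its inverse, the transform pair whose coefficients the sensor and display elements would correspond to. This is a textbook software reference for the JPEG block transform, not the paper's hardware realization; all function names are illustrative.

```python
import math

N = 8  # JPEG/MPEG block size


def dct2(block):
    """2-D DCT-II of an N x N block: the coefficients a sensor-side
    element layout would produce directly, per the paper's scheme."""
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            cu = math.sqrt(1.0 / N) if u == 0 else math.sqrt(2.0 / N)
            cv = math.sqrt(1.0 / N) if v == 0 else math.sqrt(2.0 / N)
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = cu * cv * s
    return out


def idct2(coeffs):
    """Inverse 2-D DCT: the reconstruction a display-side element
    layout would realize from the transmitted coefficients."""
    out = [[0.0] * N for _ in range(N)]
    for x in range(N):
        for y in range(N):
            s = 0.0
            for u in range(N):
                for v in range(N):
                    cu = math.sqrt(1.0 / N) if u == 0 else math.sqrt(2.0 / N)
                    cv = math.sqrt(1.0 / N) if v == 0 else math.sqrt(2.0 / N)
                    s += (cu * cv * coeffs[u][v]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[x][y] = s
    return out


# A flat block concentrates all energy in the single DC coefficient,
# illustrating the redundancy a coefficient-domain readout exploits.
block = [[128.0] * N for _ in range(N)]
coeffs = dct2(block)
recon = idct2(coeffs)
```

Note how the flat 8x8 block maps to one non-zero coefficient: bandwidth savings come from reading out (or driving) only the significant coefficients rather than all 64 pixels.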
The need for high (wide) dynamic range cameras in the Security and Defense sectors is self-evident. Still, the development of a cost-effective and viable system has proven elusive. To this end we take a new approach that meets a number of requirements, most notably a high "fill" factor for the associated APS (active pixel sensor) array and a minimal technology development curve. The approach can be used with any sensor array technology that supports, at a granular level, random pixel access. To achieve high dynamic range, one of the presented camera systems classifies image pixels according to their probable brightness levels. It then scans the pixels in that order, with the pixels most likely to be the brightest scanned first and those most likely to be the darkest scanned last. Periodically, the system readjusts the scanning strategy based on collected data or operator inputs. The overall exposure time is dictated by the sensitivity of the selected array and by the content and frame rate of the image; the local exposure time is determined by the predicted pixel brightness levels. The prediction method we use in this paper is simple duplication; i.e., the brightness of the vast majority of pixels is assumed to change little from frame to frame. This allows us to dedicate resources only to the few pixels undergoing large output excursions. This approach was found to require only minimal modifications to standard APS array architectures and fewer "off-sensor" resources than CAMs (content-addressable memories) or other DSP-intensive methods.
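The brightness-ordered scanning strategy can be sketched as follows. The class boundaries, frame data, and function names here are illustrative assumptions, not the paper's implementation; the prediction is the simple duplication described above (each pixel's class is taken from its previous-frame value).

```python
# Illustrative sketch of brightness-ordered pixel scanning.
# Thresholds and the tiny example frame are hypothetical.

def classify(prev_frame, thresholds=(170, 85)):
    """Predict each pixel's class by duplicating its previous-frame value:
    class 0 = probably brightest (scanned first),
    class 2 = probably darkest (scanned last)."""
    classes = {0: [], 1: [], 2: []}
    for r, row in enumerate(prev_frame):
        for c, val in enumerate(row):
            if val >= thresholds[0]:
                classes[0].append((r, c))
            elif val >= thresholds[1]:
                classes[1].append((r, c))
            else:
                classes[2].append((r, c))
    return classes


def scan_order(prev_frame):
    """Read out brightest-predicted pixels first and darkest-predicted last,
    so the darkest pixels accumulate light the longest before readout."""
    classes = classify(prev_frame)
    return classes[0] + classes[1] + classes[2]


prev = [[200, 120],
        [ 30, 250]]
order = scan_order(prev)
# order begins with the two pixels predicted brightest: (0, 0) and (1, 1)
```

Because most pixels keep their class from frame to frame, only the few pixels with large output excursions ever need reclassification, which is what keeps the off-sensor resource cost below that of CAM- or DSP-based schemes.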