A multiframe selective information fusion technique derived from robust error-estimation theory has versatile image-processing applications, including multispectral information fusion and the construction of a wide-angle focused synthetic frame that integrates small-angle focused regions from distinct video frames. An example combining both applications is presented: the processing of a color video of a scene imaged through turbulent air. The three multispectral red-green-blue (RGB) components of each color image are first transformed into gray-scale images that are fused as independent video streams to achieve three focused wide-angle fields of view. The multispectral fusion of the resulting synthetic RGB frames, using the same multiframe information fusion technique, then combines all significant details from the three multispectral synthetic frames into a single synthetic output frame.
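The selective-fusion idea can be sketched in a few lines. The snippet below is an illustrative stand-in only: it selects, per pixel, the frame whose local neighborhood is sharpest (highest local variance), a simple focus measure assumed here in place of the paper's robust error-estimation criterion.

```python
import numpy as np

def fuse_frames(frames, win=3):
    """Selective multiframe fusion: for each pixel, keep the value from the
    frame whose local window is sharpest (largest local variance). The focus
    measure is an assumption for illustration, not the paper's criterion."""
    frames = np.asarray(frames, dtype=float)          # shape (K, H, W)
    K, H, W = frames.shape
    pad = win // 2
    padded = np.pad(frames, ((0, 0), (pad, pad), (pad, pad)), mode="edge")
    sharpness = np.empty_like(frames)
    for i in range(H):
        for j in range(W):
            patch = padded[:, i:i + win, j:j + win]
            sharpness[:, i, j] = patch.var(axis=(1, 2))
    best = sharpness.argmax(axis=0)                   # winning frame per pixel
    return np.take_along_axis(frames, best[None], 0)[0]

# Two frames, each in focus over a different half of the field of view.
checker = np.indices((6, 6)).sum(axis=0) % 2 * 2.0 + 1.0
frame_a = np.zeros((6, 6)); frame_a[:, :3] = checker[:, :3]  # sharp on the left
frame_b = np.zeros((6, 6)); frame_b[:, 3:] = checker[:, 3:]  # sharp on the right
fused = fuse_frames([frame_a, frame_b])
```

Applied to the three RGB streams independently, this yields the three synthetic wide-angle frames described above.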
Proc. SPIE 1385, Optics, Illumination, and Image Sensing for Machine Vision V
KEYWORDS: Signal to noise ratio, Optical transfer functions, Imaging systems, Image acquisition, Image resolution, Interference (communication), Imaging devices, Digital imaging, Signal processing, Machine vision
In a digital imaging device, the maximum resolution that can be achieved is limited by the Nyquist frequency of the sampling grid. In this paper we develop a method by which this constraint may be overcome: the aliased information introduced by undersampling is recovered, thereby restoring frequencies beyond the sampling passband. The method relies on acquiring several images of the same scene while varying the optical transfer function of the imaging system, and then solving a set of linear equations that incorporates the degradations due to blurring, aliasing, and noise. The effectiveness of the technique is demonstrated with restorations of one-dimensional and two-dimensional degraded signals. We also discuss the usefulness of the technique for multiresolution coding.
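A toy one-dimensional version of this idea can be written down directly. The sketch below is an assumption-laden illustration, not the paper's exact formulation: a high-resolution signal is blurred (an assumed invertible kernel stands in for the varying optical transfer function) and sampled at half its rate twice, with a different sampling phase per acquisition; stacking the two aliased observations gives a linear system A x = y whose solution restores frequencies beyond the single-image passband.

```python
import numpy as np

N = 8                                    # high-resolution grid size
rng = np.random.default_rng(0)
x = rng.standard_normal(N)               # unknown high-resolution signal

h = np.array([0.6, 0.3, 0.1])            # assumed blur kernel (invertible)
B = np.zeros((N, N))                      # circulant blur matrix
for i in range(N):
    for k, hk in enumerate(h):
        B[i, (i + k) % N] = hk

rows, y = [], []
for phase in (0, 1):                      # two acquisitions, shifted sampling grid
    S = np.eye(N)[phase::2]               # downsample-by-2 operator
    A_k = S @ B                           # blur then undersample
    rows.append(A_k)
    y.append(A_k @ x)                     # simulated aliased measurement
A = np.vstack(rows)
y = np.concatenate(y)

# Each observation alone is aliased (4 samples for 8 unknowns); together the
# stacked system is full rank and the aliased content can be unfolded.
x_hat = np.linalg.lstsq(A, y, rcond=None)[0]
print(np.allclose(x_hat, x, atol=1e-8))   # True: frequencies beyond the
                                          # single-image passband are restored
```

In practice the paper's system also models noise, so a regularized least-squares solve would replace the exact inversion used here.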
The end-to-end performance of image gathering, coding, and restoration as a whole is considered. This approach is based on the pivotal relationship that exists between the spectral information density of the transmitted signal and the restorability of images from this signal. The information-theoretical assessment accounts for (1) the information density and efficiency of the acquired signal as a function of the image-gathering system design and the radiance-field statistics, and (2) the improvement in information efficiency and data compression that can be gained by combining image gathering with coding to reduce the signal redundancy and irrelevancy. It is concluded that images can be restored with better quality and from fewer data as the information efficiency of the data is increased, provided that the restoration correctly accounts for the image-gathering and coding processes and effectively suppresses the image-display degradations.
In this paper we are concerned with the end-to-end performance of image gathering, coding, and restoration as a whole rather than as a chain of independent tasks. Our approach evolves from the pivotal relationship that exists between the spectral information density of the transmitted signal and the restorability of images from this signal. The information-theoretical assessment accounts for the information density and efficiency of the acquired signal as a function of the image-gathering system design and the radiance-field statistics, and for the information efficiency and data compression that can be gained by combining image gathering with coding to reduce the signal redundancy and irrelevancy. The redundancy reduction is concerned mostly with the statistical properties of the acquired signal, and the irrelevancy reduction is concerned mostly with the visual properties of the scene and the restored image. The results of this assessment lead to intuitively appealing insights about image gathering and coding for digital restoration. Foremost is the realization that images can be restored with better quality and from fewer data as the information efficiency of the transmitted data is increased, provided that the restoration correctly accounts for the image gathering and coding processes and effectively suppresses the image-display degradations. High information efficiency, in turn, can be attained only by minimizing image-gathering degradations as well as signal redundancy. Another important realization is that the critical constraints imposed on both image gathering and natural vision limit the maximum acquired information density to ~4 binary information units (bifs). This information density requires ~5-bit encoding for transmission and recording when lateral inhibition is used to compress the dynamic range of the signal (irrelevancy reduction).
This number of encoding levels is close (perhaps fortuitously) to the upper limit of the ~40 intensity levels that each nerve fiber can transmit, via pulses, from the retina to the visual cortex within ~1/20 sec to avoid prolonging reaction times. If the data are digitally restored as an image on film for 'best' visual quality, then the information density may often be reduced to ~3 bifs or even less, depending on the scene, without incurring perceptual degradations because of the practical limitations that are imposed on the restoration. These limitations are not likely to be found in the nervous system of human beings, so the higher information density of ~4 bifs that the eye can acquire probably contributes effectively to the improvement in visual quality that we always experience when we view a scene directly rather than through the media of image gathering and restoration.
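The numbers quoted above can be checked with a back-of-envelope calculation. The sketch below assumes a scalar Gaussian channel (a simplification; the paper's information-density model is more detailed) and relates ~4 bifs per sample to the required signal-to-noise ratio and to the ~5-bit (32-level) encoding compared against the ~40 levels per nerve fiber.

```python
import math

# Shannon capacity per sample of a scalar Gaussian channel (an assumed,
# simplified model of the acquired signal): C = 0.5 * log2(1 + SNR).
target_bifs = 4.0                          # ~4 binary information units/sample
snr_needed = 2 ** (2 * target_bifs) - 1    # SNR that yields 4 bifs
snr_db = 10 * math.log10(snr_needed)

levels_5bit = 2 ** 5                       # ~5-bit encoding after lateral
                                           # inhibition compresses dynamic range
nerve_levels = 40                          # ~40 intensity levels per nerve fiber

print(f"SNR for 4 bifs: {snr_needed:.0f} (~{snr_db:.1f} dB)")
print(f"5-bit encoding: {levels_5bit} levels, vs ~{nerve_levels} nerve-fiber levels")
```

The 32 levels of a 5-bit code fall just under the ~40-level nerve-fiber limit, which is the (perhaps fortuitous) closeness the abstract remarks on.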