The many applications of image processing developed during the last 20 years depend upon a series of fundamental concepts and operations. We discuss a number of them here: the formation of an image through convolution, transform coding and its basis functions, image degradation through spatial frequency alterations and the related resolution measurements, sampling, interpolation, and some related processing. To lend a historical flavor, the images used for illustration have been chosen from the 1960s.
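As an illustrative aside (not part of the original abstract), the formation of an image through convolution mentioned above can be sketched in a few lines of Python; the scene, the 3x3 uniform point-spread function, and all names are hypothetical:

```python
# Minimal sketch of image formation as a 2-D convolution: the observed image
# g(x, y) is the scene f convolved with the system's point-spread function h.
# Pure Python, zero-padded borders; all values here are illustrative.

def convolve2d(f, h):
    """Convolve image f (list of rows) with odd-sized kernel h, zero padding."""
    fh, fw = len(f), len(f[0])
    kh, kw = len(h), len(h[0])
    cy, cx = kh // 2, kw // 2
    g = [[0.0] * fw for _ in range(fh)]
    for y in range(fh):
        for x in range(fw):
            acc = 0.0
            for j in range(kh):
                for i in range(kw):
                    yy, xx = y + j - cy, x + i - cx
                    if 0 <= yy < fh and 0 <= xx < fw:
                        acc += f[yy][xx] * h[j][i]
            g[y][x] = acc
    return g

# A point source imaged through a 3x3 uniform blur spreads into the PSF itself.
scene = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
psf = [[1 / 9] * 3 for _ in range(3)]
blurred = convolve2d(scene, psf)   # every pixel becomes 1.0
```

The point-source example makes the degradation concrete: the output is simply the point-spread function, scaled by the source intensity.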
During the past decade, three major categories of image matching algorithms have emerged: signal-processing-based, artificial-intelligence-based, and hybrid techniques that combine the two. This paper summarizes some of these techniques and their potential in remote sensing applications.
The types of input data, the types and purposes of processing, and the classes of 3-D display are briefly reviewed, together with some applied examples. A critical overview of some of the more interesting 3-D display candidates for a high-throughput digital workstation is given, and the attributes of various stereoscopic display systems are highlighted. The impact of certain display/extraction requirements on processing speed and power is shown, and some new developments in 3-D displays are indicated.
A survey of several adaptive techniques for image enhancement and filtering is presented under a common framework. With the proliferation of high-speed processors, such as array and image display processors, the increased computational power has shifted emphasis in image processing away from "global" to "local" techniques. These local techniques frequently use sliding windows which compute local properties of the image. These properties, in turn, are used to locally control the enhancement or filter applied to the image. The types of operations discussed encompass contrast, edge, or information enhancement, noise or artifact reduction, and feature extraction or removal. Algorithmic implementations include real-time contrast enhancement filters, zonal filters, short-space FFT filters, and a multi-dimensional adaptive least-squares technique. In this review paper, an adaptive framework, discussions of various enhancement approaches, and an extensive bibliography are presented.
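As a hedged illustration (not from the surveyed papers), a sliding-window "local" operator of the kind described above can be sketched as follows: each output pixel is stretched about the mean of its neighborhood, a simple local contrast enhancement. The window size and gain are illustrative choices:

```python
# Sliding-window local contrast enhancement: compute the neighborhood mean at
# each pixel, then amplify the pixel's deviation from that mean. Borders are
# handled by clamping coordinates; win and gain are illustrative parameters.

def local_contrast(img, win=3, gain=2.0):
    h, w = len(img), len(img[0])
    r = win // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # gather the neighborhood, clamping at the image borders
            vals = [img[min(max(y + j, 0), h - 1)][min(max(x + i, 0), w - 1)]
                    for j in range(-r, r + 1) for i in range(-r, r + 1)]
            mean = sum(vals) / len(vals)
            out[y][x] = mean + gain * (img[y][x] - mean)
    return out

flat = [[5.0] * 4 for _ in range(4)]
enhanced = local_contrast(flat)   # a flat region is left unchanged
```

The flat-region example shows the adaptive character of the operator: where there is no local detail, nothing is amplified.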
The presence of signal-dependent image noise sources commonly leads to the use of nonlinear restoration techniques. This paper reviews the use of signal-dependent noise models such as film grain noise and photoelectronic shot noise and describes some of the nonlinear estimators which have been developed for these situations. The usefulness of signal-dependent noise in recovering signal information is also discussed. The results of computer simulations on various test images are presented.
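To make the signal dependence concrete (an illustrative aside, not taken from the paper), photoelectronic shot noise can be simulated by drawing a Poisson photon count whose mean tracks the local signal, so that the noise variance grows with intensity. The scale factor and sampling scheme below are illustrative:

```python
# Photoelectronic shot noise is signal-dependent: the photon count at each
# pixel is Poisson-distributed with mean proportional to the local signal,
# so bright regions are noisier in absolute terms than dark ones.
import random

def shot_noise(signal, photons_per_unit=100, rng=random.Random(0)):
    noisy = []
    for s in signal:
        lam = s * photons_per_unit
        # Poisson sampling via summed exponential inter-arrival gaps:
        # count unit-rate arrivals falling before time lam.
        count, t = 0, rng.expovariate(1.0)
        while t < lam:
            count += 1
            t += rng.expovariate(1.0)
        noisy.append(count / photons_per_unit)
    return noisy

clean = [0.2, 0.5, 0.9]
observed = shot_noise(clean)
```

Because the variance of a Poisson count equals its mean, a restoration filter can exploit the local signal estimate to predict the local noise level, which is the basis of the nonlinear estimators the paper reviews.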
Optical processors have made great strides in recent years, offering parallel processing, high speed, the possibility of compact system fabrication, low power dissipation, and size and weight advantages. The architectures, algorithms, and system fabrication of hybrid pattern recognition processors are reviewed, with emphasis on recent results and on techniques appropriate for distortion-invariant, multi-class pattern recognition applications.
Three relationships between digitally controlled holograms and computer holograms can be distinguished. The holograms can display digitally generated data in 3D or even 4D (3D projections). The holograms can be used directly in optical systems to improve or recognize images. Finally, the digital analysis of optical holograms is discussed.
We examine and attempt to classify presently available image processing software products. Next, we suggest that the advent of extremely inexpensive image processing hardware puts new demands upon image processing software, including ease of use, ease of adaptation to new tasks, and transportability. Finally, we illustrate the discussion by presenting in some detail a system whose novelty is an unusually low cost combined with surprising performance, ease of use, and versatility.
Today's high-speed computers, large and inexpensive memory devices, and high-definition displays have opened up the area of electronic image processing. Computers are being used to compress, enhance, and geometrically correct a wide range of image-related data. It is necessary to develop Image Quality Merit Factors (IQMF) that can be used to evaluate, compare, and specify imaging systems. A meaningful IQMF will have to include both the effects of the transfer function of the system and the noise introduced by the system. Most of the methods used to date have utilized linear system techniques to describe performance. In our work on the IQMF, we have found that it may be necessary to imitate the eye-brain combination in order to best describe the performance of an imaging system. This paper presents the idea that understanding the organization of, and the rivalry between, visual mechanisms may lead to new ways of considering photographic and electronic system image quality and the loss in image quality due to grain, halftones, and pixel noise.
An image processing architecture is described that provides isolation of external interfaces from image processing applications software modules. The architecture allows development of a variety of specialized user interfaces, each of which utilizes the same basic set of image processing software routines. The architecture also allows development of interfaces to external non-imaging data bases, and other software (such as commercially available Data Base Management Systems) without requiring modification of the image processing software routines. Examples of a user interface developed on a personal workstation and of the correlation of weather satellite imagery with georeferenced data bases are presented.
This paper presents an overview of efforts at Hughes sponsored by DARPA and ONR to demonstrate the application of research results from the DARPA Image Understanding Project. This effort has led to the development of a model based, predictive paradigm for image understanding, based on the ACRONYM system concept developed at Stanford University by Rod Brooks and Tom Binford. The completed initial system and a second system in development provide a capability for automated photo-interpretation by identifying instances of modeled objects in imagery. These implementations exercise elements of 3-D modeling, object feature prediction (including illumination and shadows), model directed feature extraction, and situation assessment via temporal analysis. The current system in development reflects the knowledge and experience gained in developing the initial system, where roadblocks to the development of the predictive vision paradigm were identified. A substantially new approach was taken in several areas and a new kernel of support tools was developed. The new system provides more robust performance and is built to support further extensions and testing.
Current digital signal processing methods used for formation of images from Synthetic Aperture Radar (SAR) are briefly reviewed. Approaches for dealing with the complication of range migration for conventional stripmap mode SAR systems operating at long range are included. In addition, algorithms applicable to Spotlight Mode SAR are presented as a special case of a general formulation of three-dimensional imaging of rotating objects. This latter approach would be particularly useful for applications designed to exploit look angle diversity or to operate in a squinted mode. The Spotlight algorithm inherently compensates for range migration effects and can be efficiently applied to stripmap configurations given certain system characteristics.
Image processing algorithms are being applied to high resolution digitally formatted radiographic images. The implementation of digital radiography systems within radiology departments will require high resolution interactive display systems. These interactive diagnostic display stations will be tied into high speed local area networks.
This review paper discusses various image bandwidth compression techniques developed in recent years. The article emphasizes adaptive techniques, owing to their superior performance, and techniques that have found a multitude of applications, owing to readers' interest in them. The discussion is heuristic in nature, bringing out the salient features of the various techniques while giving readers a working knowledge of each technique and ideas on how to modify and apply it to a particular application.
An overview of the recent progress in the area of digital processing of binary images in the context of document processing is presented here. The topics covered include input scan, adaptive thresholding, halftoning, scaling and resolution conversion, data compression, character recognition, electronic mail, digital typography, and output scan. Emphasis has been placed on illustrating the basic principles rather than descriptions of a particular system. Recent technology advances and research in this field are also mentioned.
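One of the binary-image operations listed above, halftoning, can be sketched (as an illustrative aside, not from the paper itself) as Floyd-Steinberg error diffusion: each grey pixel is thresholded to 0 or 1, and the quantization error is pushed onto not-yet-visited neighbors so that local average tone is preserved:

```python
# Floyd-Steinberg error diffusion: threshold each pixel in raster order and
# distribute the quantization error to the right and lower neighbors with the
# classic 7/16, 3/16, 5/16, 1/16 weights. Input grey levels are in [0, 1].

def halftone(img):
    h, w = len(img), len(img[0])
    g = [row[:] for row in img]          # working copy of grey levels
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = g[y][x]
            new = 1 if old >= 0.5 else 0
            out[y][x] = new
            err = old - new
            for dy, dx, wgt in ((0, 1, 7/16), (1, -1, 3/16),
                                (1, 0, 5/16), (1, 1, 1/16)):
                yy, xx = y + dy, x + dx
                if 0 <= yy < h and 0 <= xx < w:
                    g[yy][xx] += err * wgt
    return out

grey = [[0.5] * 4 for _ in range(4)]
binary = halftone(grey)   # roughly half the pixels turn on
```

A uniform mid-grey field illustrates the idea: the binary output scatters on-pixels so that the local density approximates the input tone.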
The concept of videoconferencing is not new or even recent. As early as 1930, a two-way videotelephone system between Bell Labs and AT&T headquarters in New York City was demonstrated. After a lull came the Picturephone idea. Its introduction was a major accomplishment for the researchers [3,4] and a source of excitement for the public. For one reason or another, however, it never caught on in the market.
Over the last twenty years a variety of pattern recognition techniques for classifying terrain and cultural features using multi-spectral imagery have been developed. The purpose of this paper is to review and assess representative methods from major technique classes categorized according to the kinds of pattern models used (statistical or heuristic), the types of information used (spectral, textural, spatial, and contextual), the manner in which they are applied to the image (i.e., to pixels or regions), and the manner in which they partition the image into classes (e.g., single step or hierarchical). An assessment of the accuracy, computational efficiency, and reliability is performed and trends in the technology are identified.
Image processing technology concentrates on the development of data extraction techniques applied toward the statistical classification of visual imagery. In classical image processing systems, an image is preprocessed to remove noise, segmented to produce closed object boundaries, analyzed to extract a representative feature vector, and compared to ideal object feature vectors by a classifier to determine the nearest object classification and its associated confidence level. This type of processing attempts to formulate a two-dimensional interpretation of three-dimensional scenes using local statistical analysis, an entirely numerical process. Symbolic information dealing with contextual relationships, object attributes, and physical constraints is ignored in such an approach. This paper describes a number of artificial intelligence techniques which allow symbolic information to be exploited in conjunction with numerical data to improve object classification performance.
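The final stage of the classical pipeline described above, comparing an extracted feature vector against ideal class prototypes, can be sketched as a nearest-prototype classifier. This is an illustrative aside; the class names and feature values below are hypothetical:

```python
# Nearest-prototype classification: assign a feature vector to the class whose
# ideal prototype is closest in Euclidean distance, and report that distance
# as a crude (inverse) confidence measure.
import math

def classify(features, prototypes):
    """Return (class_name, distance) for the nearest prototype."""
    best, best_d = None, math.inf
    for name, proto in prototypes.items():
        d = math.dist(features, proto)
        if d < best_d:
            best, best_d = name, d
    return best, best_d

# Hypothetical two-feature prototypes for two terrain classes.
prototypes = {"road": [0.9, 0.1], "water": [0.1, 0.8]}
label, dist = classify([0.85, 0.15], prototypes)   # -> "road"
```

The abstract's point is precisely that this purely numerical comparison ignores symbolic context; the sketch shows how little information the classical classifier actually consumes.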
Phase retrieval implies extraction of the unknown phase of a complex signal from modulus data. It has been proposed for use in a variety of applications, including wavefront sensing, electron microscopy, signal design, and reconstruction of atmospherically degraded images. Solutions to the problem involve measuring the modulus in one or two conjugate domains; using an apodized aperture; using several, slightly defocused images; and forming an estimate of the signal's power spectrum based on multiple, degraded measurements. In this review paper we define the problem, show how phase retrieval can be used in some of the areas mentioned above, outline various solutions that have been proposed, give several examples, and present a fairly extensive bibliography on the subject.
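As a hedged illustration of one solution family mentioned above (iterating between the measurement and signal domains), the error-reduction algorithm alternately enforces the measured Fourier modulus and a known signal-domain support. The one-dimensional test signal, support, and iteration count below are illustrative, not from the paper:

```python
# Error-reduction phase retrieval in 1-D: alternate between imposing the
# measured Fourier modulus (keeping the current phase estimate) and imposing
# a known support with nonnegativity in the signal domain. Pure-Python DFT.
import cmath

def dft(x, inverse=False):
    n = len(x)
    s = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(s * 2j * cmath.pi * j * k / n)
               for k in range(n)) for j in range(n)]
    return [v / n for v in out] if inverse else out

true = [1.0, 2.0, 0.0, 0.0]            # unknown signal, support on first half
modulus = [abs(v) for v in dft(true)]  # the only measurement we keep

est = [0.5, 0.5, 0.0, 0.0]             # initial guess obeying the support
for _ in range(200):
    F = dft(est)
    # impose the measured modulus, keeping the current phase
    F = [m * (f / abs(f)) if abs(f) > 1e-12 else m
         for m, f in zip(modulus, F)]
    g = dft(F, inverse=True)
    # impose support and nonnegativity in the signal domain
    est = [max(v.real, 0.0) if k < 2 else 0.0 for k, v in enumerate(g)]

# residual Fourier-modulus error of the final estimate
residual = sum((abs(f) - m) ** 2 for f, m in zip(dft(est), modulus))
```

Note that even when the residual is driven down, a one-dimensional solution is generally non-unique (zero-flip ambiguities), which is one reason the literature reviewed here emphasizes additional constraints and multiple measurements.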
The field of robot vision is growing rapidly and promises to provide a great improvement in the versatility of industrial robots. The purpose of this paper is to review several techniques commonly used in robot vision applications, describe some of the solved problems, and present examples of the various applications of these robot vision techniques. A basic premise of this paper is that the gap between CAD and CAM can be decreased using automatic inspection techniques. Robot vision applications are expanding at an unprecedented rate, and this paper attempts to tie together the diverse technologies available. The application of robot vision in the integrated manufacturing process may be vital to improvements in productivity and product quality.
Based on the literature as well as on theoretical knowledge and practical experience, the state of the art of visual inspection is reviewed. First, the algorithmic and system aspects of this field are discussed. Second, a recent literature search for applications of visual inspection is briefly reviewed. Third, some future trends, expected advances, and improvements needed to develop advanced vision systems are listed. Finally, conclusions are drawn from the previous discussion and the authors give their personal view of the actual situation and the future of visual inspection.