A quantitative analysis is provided of how specific architectural features of image processors enhance the overall algorithmic capacity of the corresponding vision system. Such features are discussed at a more detailed level than the common SIMD and MIMD architectural categorizations. Metrics are presented to allow comparisons between fundamentally different architectures.
An architecture for image processing based on a "pixel-kernel-processor" approach is described in detail. The pixels are processed in raster-scan order, and it is shown that many of the complexity and data-communication problems of cellular logic arrays are avoided. Through extensive use of pipelining and bit-sequential arithmetic, it is shown that a processor device is readily feasible in current NMOS or CMOS technology. The structure is flexible in that devices may be paralleled to increase both kernel size and performance, and video throughput rates are readily attainable.
An architecture and algorithms for a VLSI computer for back-projection image reconstruction are described. The computer consists of multiple identical back-projection processors connected in a linear array. Image pixels are pumped through the processor array, collecting at each processor a contribution to the image from one of its projections. Given one back-projection processor for each image projection, the entire reconstruction can be performed in a time comparable to that needed for sequential access of all image pixels. Implementation of a MOS VLSI back-projection processor is well advanced, with working designs obtained for most processor subsystems. The processor incorporates a linear interpolator to estimate values between projection samples and accommodates non-linearity in the geometrical relationship between an image and its projection.
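The per-pixel accumulation step that each processor in the array performs can be sketched in software. This is an illustrative, pixel-serial version only, assuming parallel-beam geometry and evenly spaced detector bins; function and parameter names are our own, not the paper's:

```python
import numpy as np

def back_project(projections, angles, n):
    """Unfiltered back-projection with linear interpolation.

    Each pixel accumulates, from every projection, the sample at its
    projected position t = x*cos(a) + y*sin(a), linearly interpolated
    between the two nearest detector bins -- the per-processor step the
    linear array performs as pixels stream through.
    (Illustrative sketch; the sampling geometry is an assumption.)
    """
    img = np.zeros((n, n))
    c = (n - 1) / 2.0                       # image centre
    nbins = projections.shape[1]
    for proj, a in zip(projections, angles):
        ca, sa = np.cos(a), np.sin(a)
        for y in range(n):
            for x in range(n):
                # detector coordinate of this pixel, centred on the bins
                t = (x - c) * ca + (y - c) * sa + (nbins - 1) / 2.0
                i = int(np.floor(t))
                if 0 <= i < nbins - 1:
                    f = t - i               # linear interpolation weight
                    img[y, x] += (1 - f) * proj[i] + f * proj[i + 1]
    return img
```

With one processor per projection, each pass of the image through a processor adds exactly one term of the outer loop.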
An architecture is proposed that is tailored to support radix-2^k in-place processing of pictorial data. The algorithms make use of signal-flow graphs to devise two-dimensional operations suitable for image processing. Major advantages of the in-place processing scheme are reduced complexity and the possibility of parallel operations for which processors need access only to disjoint sections of memory. Due to an appropriate decomposition of the picture, no queuing on common busses occurs. The underlying algorithms are discussed first. They include global as well as local 2-D operations. Storing of pictorial data in memory can be either in linewise or in hierarchical order. The supporting hardware may employ up to 2^p processors (p = 1, 2, ..., 2n-1) for SIMD processing of 2^n x 2^n = N^2 pictures.
A hardware design of a percentile filter for images is discussed. Filtering is based on a binary histogram search which is a fast algorithm, suitable for a flexible hardware implementation. The data acquisition is such that window size, window shape and image boundary conditions can easily be adapted. A prototype has been built on two boards in a VME-bus environment.
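The histogram-search idea behind such a filter can be sketched in software. This is a pixel-serial, illustrative version only; the boundary handling and names are assumptions, and the hardware performs the bin search in parallel rather than in a loop:

```python
import numpy as np

def percentile_filter(img, p=0.5, radius=1):
    """Percentile filter via binary search over the window histogram.

    For each pixel, build the grey-level histogram of its window and
    bisect the cumulative counts to find the smallest grey value below
    which more than a fraction p of the window pixels lie.
    (Sketch; window shape and boundary conditions are fixed here,
    whereas the hardware makes them adaptable.)
    """
    h, w = img.shape
    out = np.empty_like(img)
    pad = np.pad(img, radius, mode='edge')    # simple boundary condition
    win = 2 * radius + 1
    rank = int(p * win * win)                 # target rank in the window
    for y in range(h):
        for x in range(w):
            window = pad[y:y + win, x:x + win]
            cum = np.cumsum(np.bincount(window.ravel(), minlength=256))
            lo, hi = 0, 255                   # binary histogram search
            while lo < hi:
                mid = (lo + hi) // 2
                if cum[mid] <= rank:
                    lo = mid + 1
                else:
                    hi = mid
            out[y, x] = lo
    return out
```

With p = 0.5 this is a median filter; other p values give the general percentile (rank-order) filter.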
Cellular-logic (or morphological) operations such as erosion, dilation, contour extraction, propagation, skeletonization, local majority voting, and pepper-and-salt noise removal are essential tools in processing and measuring binary images. This paper describes the design and implementation of a Cellular Logic Processor module for use in VME-bus oriented image-processing systems. The Cellular-Logic Operators are implemented in a general way by table look-up, while using specialized hardware for address-generation, neighbourhood updating and bit-plane combination. The on-board memory accommodates four binary images (bitplanes) of 256 x 256 pixels each. The processor works at a 7.2 MHz pixel rate, performing a logical combination of two bitplanes, followed by a cellular-logic operation in a total of 9.2 ms.
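The table look-up principle is that any 3x3 cellular-logic operation is fully described by a 512-entry truth table indexed by the packed neighbourhood bits. A minimal software sketch, with names of our own choosing (the board additionally combines bit-planes and updates neighbourhoods in dedicated hardware):

```python
import numpy as np

def make_lut(op):
    """Build the 512-entry table for a 3x3 cellular-logic operation.

    op(neigh) receives a 3x3 binary array and returns 0 or 1; the table
    stores its value for every possible neighbourhood configuration.
    """
    lut = np.zeros(512, dtype=np.uint8)
    for code in range(512):
        bits = [(code >> k) & 1 for k in range(9)]
        lut[code] = op(np.array(bits).reshape(3, 3))
    return lut

def apply_lut(img, lut):
    """Apply a 3x3 LUT operation to a binary image (borders left 0)."""
    offsets = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            code = 0                      # pack neighbourhood into 9 bits
            for k, (dy, dx) in enumerate(offsets):
                code |= int(img[y + dy, x + dx]) << k
            out[y, x] = lut[code]
    return out

# Erosion: output is 1 only if the whole 3x3 neighbourhood is 1.
erode_lut = make_lut(lambda n: int(n.all()))
```

Swapping the lambda gives dilation, majority voting, noise removal, and so on, without changing the data path, which is exactly what makes the look-up approach general.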
Among the numerous parallel structures that have been studied for and applied to Image Processing, this article offers some elements of an answer to the problem of choosing architectures dedicated to Image Processing. It is based upon a theoretical study and simulation results.
Standard Image Processing and Computer Graphics hardware and software seem inadequate to analyze and display the 3-d information of discrete volume data efficiently. Instant display is indispensable as a diagnostic tool and for the interactive development of automated analysis methods. A fast direct display algorithm has been defined and successfully implemented on available 2-d Image Processing equipment; a dedicated hardware realization should provide real-time display capabilities.
Because it is nearly impossible with today's technology to make a processor that is optimised for all possible image processing tasks, one tries to realize different processors, each of them being a compromise between technology, processing speed and the class of operations that must be performed optimally. This paper describes a hardware module to be part of a general image computer that is being developed in our laboratory. The design of this single-board module was started to provide us with a flexible, high-speed processor reprogrammable for several classes of image operations. To some extent this processor will be useful in hierarchical structures, which seem to be very promising concepts nowadays.
A one-dimensional systolic geometry processor (SGP) which is useful in image processing and pattern recognition is described. The geometry processor can be used to enhance processing speed and throughput of the host computer by working as an attached processor. SGP algorithms for solving several primitive geometric problems such as convex hull, inclusion and intersection are shown.
It is noticeable that in very recent years a paradigm shift has been taking place in Artificial Intelligence, particularly in areas related to artificial vision. This interest, although finding its roots in computing archives under the heading of "neural networks", is undergoing a revival due to the possibility of making such machines run in real time through special-purpose architectures. In the U.S.A. this work takes the guise of "Boltzmann Machines" or Connectionist Systems which, as yet, are being studied in theory and by simulation. This paper describes work done in the UK which, so far, has reached an implementation both as a prototype and as a fully engineered product. The neural-net-based architecture of the WISARD system will be discussed in the context of adaptive window applications.
The idea of a co-processor to process distributed or marked local image data is described. The architecture, instruction format and timing results for some algorithms are presented. Image processing by the co-processor can be 8 to 60 times faster than sequential computing by a single microprocessor. The application of the microprocessor and image co-processor unit paves the way towards industrial picture processing and analysis of binary images, grey-level images, combinations of the two, and moving pictures as well.
A pyramidally structured multiprocessor architecture for image processing is presented together with its different operating modes (SIMD and multi-SIMD). The main problems addressed in this paper are: the image input-output system via an active memory, the global and pyramid control unit, the programming environment and, naturally, the present state of the project.
We present an algorithm for texture characterization based upon curvilinear integration of the grey-tone signal along some predefined directions. In the context of image segmentation, we compare the performance of this very simple technique with two other ones: texture features by second-order co-occurrence probabilities, and texture features by local one-dimensional histograms. Good classification performance is obtained on quite different pictures.
In the domain of digital processing of textures, orthogonal transforms are applicable to the characterization of structural properties and to texture discrimination and synthesis. Two groups of approaches based on orthogonal transforms, namely power spectral methods and methods using correlation masks, are considered and compared. The latter propose to match orthogonal basis vectors with the image structure. The degree of match expresses itself in the variance found at the output of the correlation masks for a limited image region. It is one of the aims of this paper to show the close correspondence between the two groups of methods. Common aspects comprise the use of orthogonal transforms, of windows of certain sizes, and of quadrature and averaging in the process of feature extraction. The main difference with respect to attainable resolution and representation of structural detail is due to the recommended dimensions of the windows, which have consequences for the interpretation of the features as well as for the domain of applicability. The power spectral methods produce features which are appropriate to characterize textures which humans tend to qualify as periodic, striped, grainy etc. The correlation methods are not tuned to extract all of these features but, due to the smaller primary window size, are able to find texture boundaries. The good classification results obtained with the use of the correlation masks suggest, because of the close relationship between the two concepts, a reappraisal of the suitability of spectral methods for texture analysis problems. On the basis of this comparison, the paper seeks to further a deeper understanding of spectral analysis and to show how to apply it better for texture classification.
Textural analysis is now a commonly used technique in digital image processing. In this paper, we present an application of textural analysis to high-resolution SPOT satellite images. The purpose of the methodology is to improve classification results, i.e. image segmentation in remote sensing. Remote sensing techniques based on high-resolution satellite data offer good perspectives for the cartography of the littoral environment. Textural information contained in the panchromatic channel of ten-metre resolution is introduced in order to separate different types of structures. The technique we used is based on statistical pattern recognition models and operates in two steps. A first step, feature extraction, is carried out using a stepwise algorithm. Segmentation is then performed by cluster analysis using these extracted features. The texture features are computed over the immediate neighborhood of the pixel using two methods: the co-occurrence matrices method and the grey-level difference statistics method. Image segmentation based only on texture features is then performed by pixel classification and finally discussed. In a future paper, we intend to compare the results with aerial data in view of the management of the littoral resources.
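The co-occurrence approach mentioned above can be sketched in a few lines: count pairs of quantized grey levels at a fixed displacement, normalize, and derive scalar features. This is a generic illustration of the method, not the paper's exact parameterization (quantization, displacements and feature set are assumptions):

```python
import numpy as np

def glcm(img, dy, dx, levels=8):
    """Grey-level co-occurrence matrix for displacement (dy, dx)."""
    q = (img.astype(int) * levels) // (int(img.max()) + 1)  # quantize
    h, w = q.shape
    m = np.zeros((levels, levels))
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()

def texture_features(m):
    """Two classic features from a normalized co-occurrence matrix."""
    i, j = np.indices(m.shape)
    return {'contrast': ((i - j) ** 2 * m).sum(),  # local variation
            'energy': (m ** 2).sum()}              # uniformity
```

Such features, computed over the neighbourhood of each pixel and possibly for several displacements, form the vectors fed to the cluster analysis.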
We describe the extension of the concepts of size-distribution measurement from linear to nonlinear filtering and from binary to grey-value image processing. A size-height transform was developed in order to produce a spectrum with the size of objects and their contrast relative to the background as ordinate parameters. A size-height filter was also made in order to filter images with parameter values extracted from the spectrum.
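The size axis of such a spectrum can be illustrated with a grey-value granulometry: openings of increasing size remove progressively larger bright structures, and the grey "volume" removed between successive sizes populates the spectrum. A minimal sketch under those assumptions (the paper's transform additionally keeps height, i.e. contrast, as a separate axis):

```python
import numpy as np

def grey_open(img, size):
    """Grey-value opening: erosion (local min) then dilation (local max)."""
    def filt(a, f):
        pad = size // 2
        p = np.pad(a, pad, mode='edge')
        h, w = a.shape
        out = np.empty_like(a)
        for y in range(h):
            for x in range(w):
                out[y, x] = f(p[y:y + size, x:x + size])
        return out
    return filt(filt(img, np.min), np.max)

def size_spectrum(img, sizes=(3, 5, 7)):
    """Size distribution by differences of successive openings.

    Entry k holds the total grey volume removed when going from one
    opening size to the next; bright objects of that size contribute
    in proportion to their contrast above the background.
    """
    opened = [img.astype(int)] + [grey_open(img, s).astype(int)
                                  for s in sizes]
    return [int((opened[k] - opened[k + 1]).sum())
            for k in range(len(sizes))]
```

A single bright pixel of height 100 thus contributes 100 to the smallest size bin and nothing elsewhere.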
The RATS algorithm is a gray level threshold selection method which uses simple image statistics consisting of sums of products of gray level and gradient information accumulated over appropriate image areas. The method calculates a threshold in a very direct manner and avoids the often unreliable analysis of a gray level histogram. The contributions of individual pixels are largely local and independent, and the method is therefore particularly well suited for implementation on a parallel hierarchic processor array such as a quadtree pyramid. Simple control strategies can be devised to select the most appropriate thresholds for individual pixels from thresholds at a series of different spatial scales. In this paper we consider how a SIMD processor array can be used to implement the RATS algorithm efficiently.
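The core statistic can be sketched directly: a gradient-weighted mean of the grey levels, so that edge pixels, which lie between the two populations, dominate the sum. This is an illustrative single-scale version (the gradient operator is our choice; the paper accumulates the same sums hierarchically over a pyramid of image areas):

```python
import numpy as np

def rats_threshold(img):
    """Gradient-weighted threshold, the core RATS statistic.

    T = sum(|grad| * grey) / sum(|grad|): pixels with strong gradients
    (likely on object-background edges) dominate, so T settles between
    the two grey-level populations without building a histogram.
    """
    g = img.astype(float)
    gy, gx = np.gradient(g)              # central-difference gradients
    w = np.abs(gx) + np.abs(gy)          # cheap gradient magnitude
    return (w * g).sum() / w.sum()
```

Because the two sums are plain accumulations over pixels, they map naturally onto parallel reduction in a SIMD array, and partial sums per quadtree node give the thresholds at coarser scales.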
In this paper we describe an improved adaptive filtering algorithm based on local statistics, using sequential motion detection by iconic matching. Motivated by real-time image processing, we have also investigated a search strategy based on motion-orientation prediction in order to reduce the computational cost of the iconic matching.
Over many years studies have been carried out at British Aerospace (B.Ae.) into what can be learned from knowledge of the early processing of the human visual system concerning the optimum detection of 'wanted' information in noisy and structured scenes. Several previous papers and reports have shown how use of such knowledge can provide a powerful and robust image processor but in general the reports have been concerned with single 'snapshot' images. The purpose of this paper is to show how spatially interactive processes may very simply be transformed into the time domain, yielding direct and convenient methods of sensing and analysing local optical flow.
This paper deals with matching corners extracted from each frame of an image sequence. The correspondence problem is solved by detecting gray level corners in frames, since they are considered good candidates for establishing correspondence over man-made objects for motion estimation and their dynamic recognition.
During the past decade, three major categories of image matching algorithms have emerged: Signal-processing-based, artificial-intelligence-based, and a combination of these methods called hybrid techniques. This paper summarizes some of these techniques and their potential in remote sensing applications.
A real-time parallel processor named PETAL is described which has been developed to extract cartoon primitives from grey-level television images. It is based on a cascaded look-up table architecture and is controlled by a 68000 microcomputer. It can process 256x256 images at 50 frames/s.
A detector for low-contrast blemishes on objects with high foreground-background contrast is described. An edge detector is followed by logical shifting and expansion operations in order to locate occurrences of spatial proximity of the edge orientations and polarities expected. Maximum and minimum sizes of defect detected are adjustable.
A generalized syntactic pattern recognition method is presented and applied to the interpretation of binary images containing arbitrary line structures, e.g. documents with graphics like technical drawings or road maps. The grammar is based on a set of robust image primitives as terminal elements. These primitive elements include their spatial relationship and are represented as a relational database. The primitive extraction uses (Euclidean) distance transforms, thinning and label propagation methods. The model description uses an attributed grammar with generalized production rules which are formulated by predicate calculus. This formulation allows straightforward implementation in PROLOG. Symbol recognition is thus reduced to an instantiation process using the built-in inference mechanism of PROLOG. The efficiency of primitive extraction, relational description and symbol recognition is demonstrated on technical drawings containing several line styles (e.g. dashed lines).
The diagnostic evaluation of biomedical imagery by computer presents a massive data processing problem that may be effectively handled by multiprocessor computer systems such as the Heidelberg Polyp. A hardware and software configuration of the Polyp is described that can run as a data-driven system directed by a knowledge data base for efficient image analysis.
The scoring of structural chromosome aberrations in peripheral human blood lymphocytes can be used in biological dosimetry to estimate the radiation dose which an individual has received. Especially the dicentric chromosome is a rather specific indicator of an exposure to ionizing radiation. For statistical reasons, in the low dose range a great number of cells must be analysed, which is a very tedious task. The resulting high cost of a biological dose estimation limits the application of this method to cases of suspected irradiation for which physical dosimetry is not possible or not sufficient. Therefore an automated system has been designed to do the major part of the routine work. It uses a standard light microscope with motorized scanning stage, a Plumbicon TV-camera, a real-time hardware preprocessor, and a binary and a grey level image buffer system. All computations are performed by a very powerful multi-microprocessor system (POLYP) based on a MIMD architecture. The task of the automated system can be split into finding the metaphases (see Figure 1) at low microscope magnification and scoring dicentrics at high magnification. The metaphase finding part has been completed and is now in routine use, giving good results. The dicentric scoring part is still under development.
Attempts to automate zooplankton measurement date back to . More recently, image processing and feature-extraction techniques have been used as part of a pattern recognition process implemented with multicomputer systems. This paper examines the potential of a circular sampling technique that may replace the pattern recognition process with one processing step and one comparison step. Circular sampling is approximated by rearranging pixels to form a one-dimensional sequence and is used for comparison with base images. The properties of these sequences are examined, using images from six major zooplankton categories, and their classification accuracy is investigated.
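The rearrangement step can be sketched as follows: sample the image along concentric circles about its centre, radius by radius, concatenating the samples into one sequence. This is an illustrative approximation; the sampling density and nearest-neighbour interpolation are our assumptions, not the paper's exact scheme:

```python
import numpy as np

def circular_signature(img, n_radii=8, n_angles=32):
    """Rearrange an image into a 1-D sequence by circular sampling.

    Pixels are read along concentric circles about the image centre
    (nearest-neighbour sampling), radius by radius, giving a sequence
    that can be compared directly against those of base images.
    """
    h, w = img.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    rmax = min(cy, cx)
    seq = []
    for r in np.linspace(0, rmax, n_radii):
        for a in np.linspace(0, 2 * np.pi, n_angles, endpoint=False):
            y = int(round(cy + r * np.sin(a)))
            x = int(round(cx + r * np.cos(a)))
            seq.append(int(img[y, x]))
    return np.array(seq)
```

Classification then reduces to one comparison step, e.g. the minimum distance between a specimen's sequence and the stored sequences of the six category base images; a circular shift of the per-radius segments corresponds to a rotation of the specimen.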
This paper describes an algorithm which computes synthetic radiographs from the characteristics of the observed object, of the X-ray machine used, and of the recording technique. Some examples of such image synthesis are then presented.
A synthetic image generator finds a particularly interesting application in simulation. It can be used to assess new optronic systems, which can be entirely modelled or, even better, included in the simulation loop. This paper describes such a hybrid simulator. It also shows that an infrared version of a visible-light image generator can be obtained without major changes. Finally, the description of some possible test-bench implementations for optronic devices opens a large field of investigation.