In this paper, several algorithms for skeletonizing three-dimensional (3-D) pictorial data are proposed, with experimental results, to provide insight into what kinds of operations are required and how much computation time is needed for processing 3-D images. The algorithms discussed here include shrinking, thinning, distance transformation, and border following.
Many methods have been proposed to enhance and detect edges in gray-level images. Most of these are based on spatial differentiation to enhance gray-level changes, and have the common problem of being very sensitive to noise. In order to suppress the noise, some spatial averaging has been combined with the differentiation. This, however, tends to complicate the definition of the operator and also results in less sharp edges. In this paper, we consider the characteristics of edges and the statistical properties of noise in gray-level histograms taken from local regions of an image. The noise can then be regarded as a blurring process on the gray-level histograms. Therefore, the problem of edge detection in noisy images can be reduced to the problem of invariant feature extraction under one-dimensional blurring. By applying a theory of blurring-invariant feature extraction, a new family of nonlinear edge detectors is derived. The resultant operators are simple, based on central moments of the gray levels within a local window, and definable independently of edge orientation in the window. The operators are quite insensitive to noise, and their effectiveness has been confirmed by experiments.
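As an illustrative sketch of such moment-based operators, the following computes a central-moment response over a local window. The window size and the use of the second central moment alone are assumptions for illustration, not the detector derived in the paper:

```python
# Sketch of a moment-based edge measure computed from the gray levels of a
# local window (window size and the specific moment used are illustrative
# assumptions, not the paper's blurring-invariant operator).

def central_moment(values, k):
    """k-th central moment of the gray levels in a window."""
    n = len(values)
    mean = sum(values) / n
    return sum((v - mean) ** k for v in values) / n

def edge_response(image, r, c, half=1):
    """Moment-based response at (r, c) over a (2*half+1)^2 window.

    The second central moment (variance) is large where the window
    straddles a gray-level edge and zero in flat regions; note it is
    orientation-independent, as the abstract requires.
    """
    window = [image[i][j]
              for i in range(r - half, r + half + 1)
              for j in range(c - half, c + half + 1)]
    return central_moment(window, 2)

flat = [[10] * 5 for _ in range(5)]            # uniform region
step = [[0, 0, 0, 100, 100] for _ in range(5)]  # vertical step edge
print(edge_response(flat, 2, 2), edge_response(step, 2, 2))
```

A step edge gives a strong positive response while a flat region gives exactly zero; distinguishing noise-induced variance from true edges is the subject of the paper's invariance theory.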
In this paper we briefly review the techniques used to solve the automatic target recognition (ATR) problem. Emphasis is placed on the algorithmic and implementation approaches. The evaluation of ATR algorithms, such as target detection, segmentation, feature evaluation and classification, is discussed in detail, and several new quantitative criteria are suggested. The evaluation approach is discussed and various problems encountered in the evaluation of algorithms are addressed. Strategies to be used in database design are outlined. New techniques, such as the use of semantic and structural information, hierarchical reasoning in classification, and the incorporation of multisensor data in ATR systems, are also presented.
Computer systems capable of processing digital data from high-resolution microscopic imagery at very high data rates in an economically feasible manner are needed for prescreening clinical samples from a well population for the early onset of disease and for examining biologic monitoring systems for adverse environmental effects. In this paper a description is given of the application of a microprocessor-based multiprocessor system (the Heidelberg POLYP polyprocessor) to chromosome aberration scoring in biologic dosimetry and to the screening of monolayer cell preparations such as those prepared for cancer prescreening. The POLYP system is characterized by the grouping of processors under software control, by multiple data buses, and by a syncbus that dynamically adjusts priority scheduling among processors, thus eliminating, for all practical purposes, the usual bus bandwidth limitations.
A new transform of trinary vectors, called the pseudo-Hadamard transform, is introduced. This transform is a one-to-one mapping in a trinary vector space and is very similar to the Hadamard transform. To define the pseudo-Hadamard transform, Good's formula for the fast Hadamard transform of dimension N (= 2^n), H_N = T_1 T_2 ... T_n, is used, where each transform T_i (i = 1, 2, ..., n) consists of N additions and subtractions. The pseudo-Hadamard transform is defined by replacing the additions and subtractions with certain trinary operations. This transform preserves some properties of the Hadamard transform and is also very easy to perform. As applications of the pseudo-Hadamard transform, examples of binary image processing are presented.
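For reference, a minimal sketch of the binary fast Hadamard transform built from Good's factorization H_N = T_1 T_2 ... T_n, where each stage applies N additions and subtractions. The paper's trinary replacement operations are not reproduced here; this shows only the base transform being modified:

```python
# Good's factorization of the fast Hadamard transform: n = log2(N) stages,
# each a layer of add/subtract butterflies. The pseudo-Hadamard transform
# of the paper replaces these +/- operations with trinary operations.

def fast_hadamard(x):
    """Fast Walsh-Hadamard transform; len(x) must be a power of two."""
    x = list(x)
    h = 1
    while h < len(x):                      # one pass per stage T_i
        for start in range(0, len(x), 2 * h):
            for i in range(start, start + h):
                a, b = x[i], x[i + h]
                x[i], x[i + h] = a + b, a - b   # one butterfly
        h *= 2
    return x

print(fast_hadamard([1, 0, 1, 0]))  # → [2, 2, 0, 0]
```

Applying the transform twice returns N times the input, reflecting H_N H_N = N I, the kind of property the trinary analogue is designed to preserve.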
Lines in images, after digitization, binary conversion, and thinning, can be represented as piecewise linear curves on a two-dimensional plane. The piecewise linear plane curves may be converted to regularized curvature functions by the use of kernel functions. The method of kernel functions gives position-, orientation-, and, if desired, length-invariant representations of the piecewise linear curves, and produces controllably smooth descriptions while preserving the overall shape of the original curve. The method is applicable to shape description for matching, identification and smooth reconstruction.
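A hedged sketch of the idea: represent the polyline by its discrete turning angles as a function of arc length, then regularize with a kernel. The Gaussian kernel and the bandwidth sigma are illustrative assumptions, not the paper's specific kernel family:

```python
import math

# Sketch: piecewise-linear curve -> smoothed curvature (turning-angle)
# function via a Gaussian kernel over arc length. Kernel choice and
# bandwidth are assumptions for illustration.

def turning_angles(points):
    """(arc length, turning angle) at each interior vertex of a polyline."""
    out = []
    s = 0.0
    for k in range(1, len(points) - 1):
        (x0, y0), (x1, y1), (x2, y2) = points[k - 1], points[k], points[k + 1]
        s += math.hypot(x1 - x0, y1 - y0)
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y1, x2 - x1)
        # wrap the angle difference into (-pi, pi]
        out.append((s, (a2 - a1 + math.pi) % (2 * math.pi) - math.pi))
    return out

def smoothed_curvature(points, s_query, sigma=1.0):
    """Gaussian-kernel regularization of the discrete turning angles."""
    angles = turning_angles(points)
    w = [math.exp(-((s - s_query) ** 2) / (2 * sigma ** 2)) for s, _ in angles]
    total = sum(w)
    return sum(wi * a for wi, (_, a) in zip(w, angles)) / total if total else 0.0
```

Because the representation depends only on turning angles and arc length, it is invariant to position and orientation, and normalizing arc length would add the optional length invariance.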
A software system to analyze neuron images seen under a light microscope has been developed. The system is designed to process a large number of specimens semi-automatically and to store the processed images in a small memory space. We focused on the simplest parameters of neuronal morphology: the soma and the dendrites. Several methods of automatic soma classification are described.
Several useful algorithms in image processing involve the spreading of a certain pixel state over large areas and distances. Very often this is a kind of growth from a "seed", conditioned by a binary mask image, so that the final states of the labels form connected regions. This labeling operation may well be considered the archetype of the propagating operations we have in mind, but there are several others, e.g. distance mapping in a plane with obstacles [1, 2].
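The archetypal labeling operation can be sketched as a seeded, mask-conditioned flood. The breadth-first traversal and 4-connectivity below are assumptions for illustration; hardware implementations propagate the same states in parallel sweeps:

```python
from collections import deque

# Grow labels outward from seed pixels, conditioned by a binary mask,
# so that each final labeled region is connected (4-connectivity assumed).

def grow_from_seeds(mask, seeds):
    """Return a label image: mask pixels reachable from a seed get its label."""
    rows, cols = len(mask), len(mask[0])
    labels = [[0] * cols for _ in range(rows)]
    q = deque()
    for label, (r, c) in enumerate(seeds, start=1):
        if mask[r][c]:
            labels[r][c] = label
            q.append((r, c))
    while q:                                  # breadth-first propagation
        r, c = q.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and mask[nr][nc] and labels[nr][nc] == 0:
                labels[nr][nc] = labels[r][c]
                q.append((nr, nc))
    return labels
```

Distance mapping with obstacles is the same propagation with the label replaced by an incrementing distance value.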
Multiple regression analysis for modeling the correspondence between a set of input variates and an output variate (or a set of variates) seems to be one of the most promising and direct approaches to automatically designing adaptive (or learning) systems for image processing and computer vision. Some approaches are shown with experimental results, such as the automatic design of adaptive filters for image enhancement and restoration, where the input image and the desired output image are given as a pair. The advantage of such an approach is the ability to simulate, in an automatic and general way, the functional "black boxes" (solutions) that are imposed by real problems regardless of their inner detail, whereas the usual approaches are based on so-called trial-and-error methods, in which any proposed method is repeatedly tried and its results checked.
The article describes a system able to process gray-level image data of a high spatial resolution at a high speed. A large number of data reduction algorithms are implemented by means of a table driven architecture. This allows the use of the system in a broad range of automatic visual inspection problems and other areas such as robot vision.
Despite the large degree of parallelism present in the structure of image data and operations, it is not clear which parallel architecture is best suited for a given image processing application. This paper proposes a model for the formulation of parallel image processing tasks that enables the determination of a high-level specification of an optimal parallel architecture for a given application. In addition, the problem of determining the requirements for real-time processing is addressed.
In this paper we present a set of algorithms used to automatically detect, segment and classify tactical targets in FLIR (Forward Looking InfraRed) images. These algorithms are implemented in an Intelligent Automatic Target Cueing (IATC) system. Target localization and segmentation is carried out using an intelligent preprocessing step followed by relaxation, or a modified double-gate filter followed by difference operators. The techniques make use of range, intensity and edge density information. A set of robust features of the segmented targets is computed. These features are normalized and decorrelated. Feature selection is done using the Bhattacharyya measure. Classification techniques include a set of linear and quadratic classifiers, clustering algorithms, and an efficient K-nearest neighbor algorithm. Facilities exist to use structural information, to use feedback to obtain more refined boundaries of the targets, and to adapt the cuer to the required mission. The IATC incorporating the above algorithms runs in an automatic mode. The results are shown on a FLIR database consisting of 480 512x512 8-bit air-to-ground images.
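For illustration, the Bhattacharyya measure used in feature selection can be sketched for a single feature under univariate Gaussian class models; the IATC system's exact (presumably multivariate) formulation is not given in the abstract:

```python
import math

# Bhattacharyya distance between two univariate Gaussians N(m1, v1) and
# N(m2, v2): a separability score used to rank candidate features.
# (Univariate form shown for illustration only.)

def bhattacharyya_gaussian(m1, v1, m2, v2):
    """Bhattacharyya distance between N(m1, v1) and N(m2, v2)."""
    return (0.25 * (m1 - m2) ** 2 / (v1 + v2)
            + 0.5 * math.log((v1 + v2) / (2.0 * math.sqrt(v1 * v2))))
```

Features are then ranked by this distance: identical class distributions score zero, and better-separated class means (or more dissimilar variances) score higher.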
This paper reviews the genealogy of cellular logic computers, starting with the Cellscan system and extending through an interesting diversity of architectures to more recent machines such as the pipelined Cytocomputer, PICAP (Picture Array Processor), DIP-1 (Delft Image Processor), and the Coulter Electronics diff3. These machines have advanced in speed from an equivalent of 100,000 instructions per second in 1961 to one billion instructions per second at the present time.
A cellular logic operation traditionally consists of the parallel application of a local image transformation at all cells of a two dimensional array. In order to compute certain global image properties efficiently, it sometimes behooves one to extend the notion of an operation so that it operates on a hierarchical domain rather than just an image. In this paper, the definitions for hierarchical cellular logic operations are given and applied to the problem of selecting key pixels of a binary image. Such key pixels may be used as features themselves or used as seed points for object extraction algorithms.
Mathematical morphology is an algebraic language of image processing based on set-theoretic constructs. The operations of mathematical morphology are easily implemented in cellular logic image processors such as the CLIP or the Cytocomputer. Drawbacks of arrays and pipelines serve as impetus for the design of several other cellular logic image processor architectures.
A design is presented for a Content Addressable Array Parallel Processor (CAAPP) which is both practical and feasible. Its practicality stems from an extensive program of research into real applications of content addressability and parallelism. The feasibility of the design stems from development under a set of conservative engineering constraints tied to limitations of VLSI technology. We then describe various procedures for image processing on the CAAPP. The first performs image convolutions very quickly. It is shown that this algorithm can be generalized to perform convolutions with increased mask size with only a moderate reduction in speed. The second uses the CAAPP to quickly and robustly decompose an optic flow field into its rotational and translational components to recover sensor motion parameters. We also briefly describe techniques for associating symbolic descriptions with extracted image structures in the CAAPP.
An interactive image processing system was set up to provide easy use of standard methods and their rapid execution. Point operations and linear and nonlinear neighborhood operations were implemented on the display system for integer-valued images and on the host for their floating-point representation. Fourier domain processing was accelerated by using the refresh memories as auxiliary direct-access storage and by computing the FFT with assembly-coded routines. Topological operations for segmentation of binary images are done in the display system. Image classification with instant display of the results likewise relies on the display hardware. A command processor parses input to verify and validate commands and to separate them from (optional) parameters specified by the user when the given default values are not suitable. Usage and integration of image processing procedures is facilitated by maintaining all command language features of the host operating system. Command strings can be set up to repeat sequences of processing steps. A help facility serves to inform the less experienced user.
FAZYTAN has been designed and realized for systematic adaptation to image evaluation problems that can be characterized as classification tasks for which a labeled training set of statistically relevant examples is available. The basic ideas of FAZYTAN are characterized by the following cue words:
- Processor-oriented algorithms for digital image transformations (local neighborhood operations) and feature extraction (Minkowski measures)
- Problem-oriented systematic optimization procedures for image transformation and feature extraction steps
- No restrictions on processing large feature sets for difficult classification problems
- High data throughput, through the application of TV-frame-oriented subprocessors for image transformation and feature extraction, to attack voluminous classification tasks
Examples of applications of FAZYTAN in the fields of biologic cell analysis, object segmentation, texture analysis and satellite image analysis will be presented.
Image processing has to rely greatly on the design of effective algorithms. Unfortunately, this effectiveness is often paid for with considerable computation time due to the innate complexity of the algorithms. Dedicated hardware in the form of a Digital Video Processor (DVP) addresses this need, since it combines both versatility and speed. This will be illustrated through some typical applications drawn from various areas of image processing. KEYWORDS: DVP, nonlinear spatial filtering, topological operations, nonparametric classification
This paper acknowledges that digital image processing systems are no longer limited to pure research or prototype status, but rather are cost-effective, accurate and efficient enough to have become a category of quasi-standard instrumentation in the fields of diagnostic medical imaging, visual inspection, remote sensing, geophysics, data encoding and decoding, and other fields. A new architectural concept incorporates the most recent advances in microprocessor technology: high-speed, specialized processors, often with local intelligence; large-scale direct-mapped memory; and multiple-bus and multitasking capabilities within a stand-alone device. This architecture is described as an alternative to the traditional configurations in which digital image generators, CPU, memory, special processors and peripherals exist as "cable-length" devices. Furthermore, specialized image processing algorithms which have been developed as a direct response to specific requirements in various fields of application not only are highly compatible with this architecture, but have contributed significantly to its development.
Earley's algorithm has been commonly used for the parsing of general context-free languages and for error-correcting parsing in syntactic pattern recognition. Its time complexity for parsing is O(n^3). In this paper we present a parallel Earley's recognition algorithm in terms of the "x*" operation. By restricting the input context-free grammar to be X-free, we are able to implement this parallel algorithm on a triangular VLSI array. This system has an efficient way of moving data to the right place at the right time. Simulation results show that this system can recognize a string of length n in 2n+1 system time. We also present an error-correcting recognition algorithm. The parallel error-correcting recognition algorithm has also been implemented on a triangular VLSI array. This array recognizes an erroneous string of length n in time 2n+1 and gives the correct error count. Applications of the proposed VLSI architectures to image analysis are illustrated by examples.
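For orientation, a minimal serial Earley recognizer is sketched below. This is the standard O(n^3) formulation for grammars without empty productions; the paper's parallel "x*" operation, error correction, and VLSI mapping are not reproduced:

```python
# Minimal Earley recognizer (serial, O(n^3)). Grammar: dict mapping each
# nonterminal to a list of productions (tuples of symbols); a symbol is a
# nonterminal key or a terminal. Assumes no empty productions, matching
# the abstract's restriction on the input grammar.

def earley_recognize(grammar, start, tokens):
    # A chart item is (lhs, rhs, dot, origin).
    chart = [set() for _ in range(len(tokens) + 1)]
    for rhs in grammar[start]:
        chart[0].add((start, rhs, 0, 0))
    for i in range(len(tokens) + 1):
        changed = True
        while changed:                     # iterate until chart[i] is stable
            changed = False
            for lhs, rhs, dot, origin in list(chart[i]):
                if dot < len(rhs):
                    sym = rhs[dot]
                    if sym in grammar:                          # predict
                        for prod in grammar[sym]:
                            if (sym, prod, 0, i) not in chart[i]:
                                chart[i].add((sym, prod, 0, i))
                                changed = True
                    elif i < len(tokens) and tokens[i] == sym:  # scan
                        chart[i + 1].add((lhs, rhs, dot + 1, origin))
                else:                                           # complete
                    for l2, r2, d2, o2 in list(chart[origin]):
                        if d2 < len(r2) and r2[d2] == lhs:
                            nxt = (l2, r2, d2 + 1, o2)
                            if nxt not in chart[i]:
                                chart[i].add(nxt)
                                changed = True
    return any(lhs == start and dot == len(rhs) and origin == 0
               for lhs, rhs, dot, origin in chart[len(tokens)])
```

For example, with the grammar S -> a S b | a b (the language a^n b^n), the recognizer accepts "aabb" and rejects "aab".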
This paper describes a bus-oriented hardware architecture for the acquisition, processing and display of high-resolution two-dimensional image data patterns. The system contains dedicated bipolar processors for image acquisition and display and a moderately coupled microprocessor system for image processing. Two separate asynchronous common buses are used to support high-speed data transfer and task synchronisation in the system. The design and implementation aspects of major system components, such as the pixel bus, interleaved memory modules, the image processor unit and the display processor, are discussed in detail. Advanced design tools for modelling parallel processes, microprogramming and pipelining were used throughout the design.
A hardware architecture for real-time image resampling has been developed which will support a wide range of image resampling tasks arising in remote sensing applications. In particular, local spatial sampling errors caused by misalignment of sensor arrays and optical defects, plus global spatial sampling errors caused by platform pointing errors, can be simultaneously rectified. This is achieved by referring all sample errors to the same fixed ideal sampling coordinate frame. This has the added advantage of providing automatic co-registration of images obtained in several spectral bands with separate sensor arrays. The resulting architecture is modular and flexible due to a decomposition into independent parallel structures. The use of very large scale integrated circuits for memories and multiplier/accumulators results in a design with a processing speed/power ratio in excess of 10° pixels per second per Watt, providing 1/16-pixel resampling accuracy.
Diagnostic radiology is adopting more and more digital technology. We have conceived a plan for structured growth in the use of this technology in our department, and outline in this paper the large scale architecture of a seed digital image processing facility, and its anticipated evolution over the next few years. In addition, we outline the requirements and discuss our problems, solutions, and future directions.
Many applications in visual inspection and in robot vision require very fast interpretation and evaluation of images, so that industrial processes are not slowed down. In this paper an image computer architecture is proposed that is optimized for two classes of typical image processing tasks required in visual inspection and robot vision. Time-filtering, motion analysis, stereo vision, image compression and matching are examples of the first class of image processing tasks. They have in common that they require parallel processing of different images, so they need parallel access to those images as source, reference or parameter matrices. To the second class of image processing tasks belongs feature extraction; in contrast with the preceding class, this processing requires only one image. These algorithms often don't scan through the image in a predetermined way. They are often driven by the image data itself, with pixel data fed back to compute the next addresses (e.g. connectivity analysis). Fast random access to the image memories and an efficient data-to-address feedback are then needed. The basic element of the system is a fast Image Bus (IBUS), which reflects the needs of parallel-access and random-access processing: four separately programmable data channels, two address buses and data-to-address feedback capabilities. The system itself is modular, to allow minimum configurations for a wide variety of industrial visual inspection tasks.
A general-purpose, high-speed image processing system with a time-shared multiframe data bus architecture and multiple processors, MFIP, has been developed. Massive image data can be transferred between multiple memory modules and the processors through the high-speed time-shared multiframe data bus (40 MW/sec). The system is built around a large 2 MW image memory consisting of eight 256 KW memory modules (1 W = 16 bits); the image memory can be expanded up to 8 MW, i.e., 32 memory modules. This paper describes the overall architecture of MFIP and the functional and operational features of the high-speed time-shared multiframe data bus. The design concept and implementation of the two image processing units in the system, A and B, are then presented.
A pipelined image processor has been developed to improve and analyse grey-level images from electron microscopes. It uses high-speed video processing techniques in an image flow system, where image data can be circulated at variable rates (DC to 10 MHz) through a series of firmware processors in a recursive fashion around a digital framestore. The commercial system is proving invaluable in the quality assurance of photoresist material, where such processing reveals features not previously visible with traditional techniques. This approach is of general value where standard EM methods have an adverse effect on the specimen under investigation, resulting in poor signal quality.
An integrated document editing and organizing system (IDEOS) has been developed. This system offers various functions required for document processing. It can manipulate the various data forms (text, image and graphics) appearing in a document, and has real-time image processing functions for flexible formatting. When formatting a document, all the expression media in the document must be treated as two-dimensional image data. To speed up the image processing, and to support interactive processing as well, two specially designed hardware units, the image processing unit and the image display unit, are used. In this system, the document is handled as a hierarchical structure and is described by a hierarchical descriptor corresponding to the document structure. The effectiveness of this system has been evaluated in two different kinds of applications.
Up to now, robot vision systems have been designed to serve both application development and operational needs in inspection, assembly and material handling. This universal approach to robot vision is too costly for many practical applications. A new industrial vision system separates the function of application program development from on-line operation. A Vision Development System (VDS) is equipped with facilities designed to simplify and accelerate the application program development process. A complementary but lower-cost Target Application System (TASK) runs the application program developed with the VDS. This concept is presented in the context of an actual robot vision application that improves inspection and assembly for a manufacturer of electronic terminal keyboards. Applications developed with a VDS have lower development costs than those built with conventional vision systems. Since the TASK processor is not burdened with development tools, it can be installed at a lower cost than comparable "universal" vision systems intended for both development and on-line operation. The VDS/TASK approach opens to robot vision more industrial applications that previously were not practical because of the high cost of vision systems. Although robot vision is a new technology, it has been applied successfully to a variety of industrial needs in inspection, manufacturing, and material handling. New developments in robot vision technology are creating practical, cost-effective solutions for a variety of industrial needs. A year or two ago, researchers and robot manufacturers interested in implementing a robot vision application could take one of two approaches. The first approach was to purchase all the necessary vision components from various sources. That meant buying an image processor from one company, a camera from another, and lens and light sources from yet others.
The user then had to assemble the pieces, and in most instances he had to write all of his own software to test, analyze and run the vision application. The second and most common approach was to contract with a vision equipment vendor for the development and installation of a turnkey inspection or manufacturing system. The robot user and his company paid a premium for their vision system in an effort to assure its success. Since 1981, emphasis on robotics has skyrocketed. New groups have been formed in many manufacturing companies with the charter to learn about, test and initially apply new robot and automation technologies. Machine vision is one of the new technologies being tested and applied. This focused interest has created a need for a robot vision system that makes it easy for manufacturing engineers to learn about, test, and implement a robot vision application. A newly developed vision system addresses those needs. The Vision Development System (VDS) is a complete hardware and software product for the development and testing of robot vision applications. A complementary, low-cost Target Application System (TASK) runs the application program developed with the VDS. An actual robot vision application that demonstrates inspection and pre-assembly for keyboard manufacturing is used to illustrate the VDS/TASK approach.
The purpose of this paper is to compare flexible, real-time commercial vision systems. First, the importance and growth of machine vision over the next decade is briefly introduced. A description of today's vision market follows. The third and last part gives an overview of and compares many general-purpose and industrial vision systems under several criteria.
Image tube intensified linear and area self-scanned array (SSA) readout detector assemblies are becoming increasingly important for automatic inspection systems and machine vision. Proximity focused diode and microchannel plate (MCP) image intensifier tubes are being used in conjunction with SSAs because they can be electronically gated, they are physically small, they do not introduce image distortion, and for several other reasons [11, 12, 30-33]. Even single photon events can be detected by using high-gain MCP image tubes. Image intensified linear SSA (IL/SSA) detector assemblies can now provide successive 100% duty cycle optical samples in time periods as short as 1 ms for up to 1024 linear array pixels with 8- or 12-bit parallel output. Image intensified area SSA (IA/SSA) detector assemblies with, for example, 488 x 380 pixels in the active image area, can be read out in 33 ms. Both IL/SSA and IA/SSA detector assemblies can be interfaced to computers directly, or through conventional data acquisition systems (DASs) [14, 15, 35, 45, 48, 51]. Depending upon the maximum input data rate to the computer, the DAS operates either in the continuous mode or in the burst mode. Virtually any type of linear or area SSA can be image tube intensified and computer interfaced using the methods described [38, 42]. A new 512- or 1024-pixel IL/SSA instrument detector assembly, the F4560, coupled to an HP-85 microcomputer through an HP-6942A DAS, has been developed.
A two-dimensional FFT algorithm has been implemented on a pipelined image processor with parallel feedback image data paths. The transform of a 512x512-pixel image, or equivalently of 256 images of 32x32 pixels, can be carried out in one minute (excluding the transpose of the image). The image-processor architecture lends itself very well to some portions of the FFT computation; other portions suggest directions for future hardware development.
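The row-column decomposition that such an implementation exploits, including the intermediate transpose noted above, can be sketched as follows. This is a plain radix-2 FFT; the pipelined processor's feedback-path scheduling is not modeled:

```python
import cmath

# Row-column 2-D FFT: 1-D FFTs over every row, transpose, 1-D FFTs over
# every (former) column, transpose back. The transpose step is exactly
# the part excluded from the timing quoted in the abstract.

def fft1d(x):
    """Radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft1d(x[0::2])
    odd = fft1d(x[1::2])
    tw = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return [even[k] + tw[k] for k in range(n // 2)] + \
           [even[k] - tw[k] for k in range(n // 2)]

def fft2d(image):
    rows = [fft1d(row) for row in image]
    cols = [fft1d(col) for col in zip(*rows)]  # transpose, transform columns
    return [list(col) for col in zip(*cols)]   # transpose back
```

For an N x N image this performs 2N one-dimensional N-point FFTs, which is the workload a pipelined processor can stream through its feedback data paths.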