Optical methods have long been used in metrology, but the advent of laser sources has drastically increased the impact of optics in the field. The growing availability of industrial optoelectronic components on the market is responsible for the current introduction of optical techniques into industrial processes such as interferometric control, wide-ranging optical sensors, and visual inspection. Furthermore, the partial coherence and guiding properties of the light field, as well as the nonlinear optical behaviour of the medium, also offer interesting metrological applications. The aim of this paper is not to give a full description of all the optical methods used in metrology, but to draw out some general properties and ideas, illustrated by representative applications.
Electronic imaging in the form of television has served the public for over 40 years. Television cameras have up to now used mostly photoconductive tubes to capture the live image. However, the last ten years have seen the advances in microelectronics, including VLSI technology, introduce newer, more flexible ways to record images via the solid state sensor. One version of the solid state sensor, the charge coupled device (CCD), has found its way into consumer electronic imaging, replacing the Super 8 movie system, and today, portable, commercial television cameras are starting to use CCD arrays instead of the more conventional photoconductive tubes. There are several electronic still cameras (ESC) on the market using solid state sensors, but due to the limited number of imaging sites, or pixels, on these sensors, the quality is far below that of present photographic-based systems. This paper will consider the imaging characteristics of photoconductive tubes and solid state imaging sensors and how they compare to photographic film. Also, based on a system analysis, the requirements for a solid state sensor that will provide image quality equal to current photographic systems will be defined. The importance of digital image processing and digital image compression will also be discussed.
Many solid state image sensors make use of CCD shift registers for charge transport. In a 4-phase CCD register with 300 lines, 1200 transport electrodes are necessary. The new concept needs only 600 electrodes, which reduces the light absorption by the transport electrodes. During the illumination (integration) period the sensor operates as a two-phase CCD with two electrodes for each line. For the vertical transport the structure is changed from a two-phase system to a four-phase system. The complete drive circuit for the transport electrodes is integrated on the sensor chip, which makes the sensor very easy to use. The drive circuit includes the possibility of controlling the integration time. This can be used to match the sensor to different illumination levels or to obtain a sharp image of fast-moving objects.
An image sensor with storage capability is presented. It is sensitive in the visible wavelength region, where exposure times of 5 ms at intensities of 90 μW/cm² and photon energies of about 2.23 eV give detectable signals of stored information. The device is designed as a matrix with optical readout and is fabricated in silicon technology.
PVDF, mostly investigated for its piezoelectric properties, has also been demonstrated to be a promising candidate for pyroelectric detection; the challenge is to assess how far its performance can be brought, in order to evaluate the prospects of PVDF infrared detectors for civilian applications compared with other conventional pyroelectrics. We have therefore fabricated discrete elements, linear arrays and test structures on 9 to 40 μm thick materials to determine the best conditions to be fulfilled in order to design a bidimensional array, with 8 x 8 elements to begin with. The poling process, carried out on a biaxially stretched material with a slowly increasing a.c. electric field at room temperature (Bauer's method), ensures the largest remanent polarization (around 10 μC/cm²) and its stability with time. The sensitivity of freely mounted detectors decreases as a hyperbolic function of the thickness, while the thermal cut-off time constant increases up to a limit reached at 25 μm. The influence of the pixel area and of the spacing between cells on cross talk is also investigated on a multi-unit array. Finally, the design of a matrix is proposed.
A reasonably simple and accurate method for measuring the spatial resolution of discrete photoelement image sensors was developed. It uses a scanning knife edge as an input pattern. This approach, which is based on rigorous theoretical considerations, gives quicker and more reliable results than the usual bar pattern method. In addition, it can be fully automated.
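As a minimal sketch of the knife-edge principle (not the authors' exact procedure), the sampled edge spread function (ESF) is differentiated to obtain the line spread function (LSF), whose normalised Fourier magnitude gives the MTF; the Gaussian blur and all names here are illustrative assumptions:

```python
import math
import numpy as np

def mtf_from_edge(esf, dx=1.0):
    """Estimate the MTF from a sampled edge spread function (ESF):
    differentiate to get the line spread function (LSF), then take the
    normalised magnitude of its Fourier transform."""
    lsf = np.gradient(esf, dx)              # ESF -> LSF
    otf = np.fft.rfft(lsf)                  # one-sided spectrum
    mtf = np.abs(otf) / np.abs(otf[0])      # normalise so MTF(0) = 1
    freqs = np.fft.rfftfreq(len(esf), dx)
    return freqs, mtf

# Synthetic edge blurred by a Gaussian PSF of sigma = 2 pixels; for this
# blur the MTF should follow exp(-2 * pi**2 * sigma**2 * f**2).
sigma = 2.0
x = np.arange(-64, 64)
esf = np.array([0.5 * (1 + math.erf(v / (sigma * math.sqrt(2)))) for v in x])
freqs, mtf = mtf_from_edge(esf)
```

Scanning the knife edge in sub-pixel steps (as the paper's automated setup allows) would give a finer-sampled ESF and hence a more accurate MTF estimate.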
We have developed a solid-state camera that very quickly records preselected fields of the total frame of an image. The signals that control the operation of the camera are obtained from a programmed RAM and are rapidly transferred via DMAC to the image sensor. Thus, programmed row clock operation, pick-up of fields in rapid succession with selected repetition rate and start inhibit time, as well as asynchronous frame reset and presetting of the integration time, can be performed.
The airborne imaging spectrometer ROSIS has been designed for environmental investigations. The spectral resolution is 5 nm within a spectral range from 400 to 900 nm. The ground pixel size is 2.2 m x 2.2 m at a flight altitude of 4 km. To meet the design and radiometric requirements, the Thomson TH 7884 CCD matrix sensor was selected. Operating the TH 7884 in the frame transfer mode, the image size is 500 x 256 picture elements. To gain the required spectral information we need only 85 lines of this image. To achieve the 2.2 m ground pixel size resolution in flight direction (pushbroom system), about 85 frames/second have to be read out. For this we applied new readout techniques, as described in this paper.
A simple but efficient method for the parallel processing of line features is developed. Only a 3×3 window, scanning the image multiple times, is used. The usual content- and position-dependent line tracking is replaced by independent local operations. Recursive functions are written and explained for the addition of line features, the evaluation of chord vectors, the deriving of difference chain codes and the estimation of the curvature. Direction elements and operations as well as essential digital features are defined.
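One of the quantities mentioned, the difference chain code, can be illustrated with the classical 8-direction Freeman code; this is a generic sketch, not the paper's recursive window formulation, and the helper names are assumptions:

```python
# 8-connected direction codes: 0 = east, counter-clockwise to 7 = south-east.
DIRS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
        (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(points):
    """Freeman chain code of a pixel path given as a list of (x, y)."""
    return [DIRS[(x2 - x1, y2 - y1)]
            for (x1, y1), (x2, y2) in zip(points, points[1:])]

def difference_code(code):
    """First difference modulo 8: a local, rotation-invariant measure of
    turning (i.e. discrete curvature) along the line."""
    return [(b - a) % 8 for a, b in zip(code, code[1:])]

path = [(0, 0), (1, 0), (2, 1), (2, 2), (1, 3)]
cc = chain_code(path)        # [0, 1, 2, 3]
dc = difference_code(cc)     # [1, 1, 1] -- a constant left turn
```

A constant difference code signals constant curvature, which is why such codes are convenient inputs for curvature estimation.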
We have developed a component labeling processor amenable to pipelining and video-rate processing. Component labeling is widely used in various fields. However, the conventional algorithms suffer from the number of operations increasing rapidly as the shapes of the components become complex. This problem prevents high-speed processing. In this paper, we discuss the architecture of this processor and the algorithm for solving the problem. Our processor consists of four kinds of processing units. The pre-processing unit simplifies the shapes of components to reduce the number of provisional labels. In the provisional labeling unit, each pixel of a component is labeled provisionally using the local connectivity of pixels. The label classification unit classifies the provisional labels by searching the label connectivities. In the label update unit, the provisional label of each pixel is replaced by the new label, so that each component is uniquely labeled. With this configuration, the provisional labels are searched directly according to the connectivities of the labels, in time proportional to only the first power of the number of labels. The experimental results verify that we can process 512 x 512 x 8 bit TV images in pipeline at video rate using our processor.
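The provisional-labeling / label-classification / label-update stages have a classical software analogue in the two-pass algorithm with union-find; the sketch below is that textbook analogue (4-connectivity), not the authors' pipelined hardware algorithm:

```python
import numpy as np

def label_components(img):
    """Two-pass connected-component labeling (4-connectivity).
    Pass 1 assigns provisional labels and records equivalences in a
    union-find structure; pass 2 replaces them with final labels."""
    labels = np.zeros(img.shape, dtype=int)
    parent = {}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)

    nxt = 1
    h, w = img.shape
    for y in range(h):                      # pass 1: provisional labels
        for x in range(w):
            if not img[y, x]:
                continue
            left = labels[y, x - 1] if x > 0 else 0
            up = labels[y - 1, x] if y > 0 else 0
            if left == 0 and up == 0:
                parent[nxt] = nxt
                labels[y, x] = nxt
                nxt += 1
            else:
                labels[y, x] = min(l for l in (left, up) if l)
                if left and up:
                    union(left, up)
    roots = {}
    for y in range(h):                      # pass 2: final labels
        for x in range(w):
            if labels[y, x]:
                r = find(labels[y, x])
                roots.setdefault(r, len(roots) + 1)
                labels[y, x] = roots[r]
    return labels, len(roots)

img = np.array([[1, 1, 0, 0],
                [0, 1, 0, 1],
                [0, 0, 0, 1],
                [1, 0, 0, 1]], dtype=bool)
lab, n = label_components(img)              # three 4-connected components
```

The hardware's pre-processing unit exists precisely to keep the number of equivalences (the union-find work above) small for complex shapes.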
Traditionally, image warping calculations have been performed on general-purpose CPUs or array processors. Instead, the Warper MKII hardware performs these polynomial warps in a two board set. Because of the many identical instructions, image warping is an ideal candidate for the Warper MKII's dedicated pipelined architecture.
A low cost image acquisition and processing system has been developed for the Acorn Archimedes microcomputer. Using a Reduced Instruction Set Computer (RISC) architecture, the ARM (Acorn Risc Machine) processor provides instruction speeds suitable for image processing applications. The associated improvement in data transfer rate has allowed real-time video image acquisition without the need for frame-store memory external to the microcomputer. The system comprises real-time video digitising hardware which interfaces directly to the Archimedes memory, and software to provide an integrated image acquisition and processing environment. The hardware can digitise a video signal at up to 640 samples per video line with programmable parameters such as sampling rate and gain. Software support includes a work environment for image capture and processing with pixel, neighbourhood and global operators. A friendly user interface is provided with the help of the Archimedes Operating System WIMP (Windows, Icons, Mouse and Pointer) Manager. Windows provide a convenient way of handling images on the screen and program control is directed mostly by pop-up menus.
Some parallel image processing algorithms are described which were designed and implemented on the SIMD-type parallel processor (PPS SIMD) developed at the Institute of Technical Cybernetics in Bratislava. The algorithms for histogramming, local filtering and fast orthogonal transforms are formulated for this parallel computer. Time measurements for some of the implemented procedures are also given.
This paper presents the concept of a fast-frame-processor for on-line analysis of fringe patterns, which allows the recording of phase-shifted fringe patterns and the successive analysis of the image data by the phase shift algorithm in video real-time. Examples will demonstrate the use of fast fringe analysis for holography, speckle techniques, moiré and 3-D measuring techniques. Additionally, the paper points out the fast-frame-processor's ability to carry out nearly all operations needed for the preprocessing of images in video real-time, e.g. filter operations, contrast enhancement, background correction, averaging and so on. In particular, this system enables true color image processing and color classification in video real-time.
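The paper does not state which phase shift algorithm the processor implements; a common choice is the four-step algorithm with shifts of 0, π/2, π and 3π/2, where each frame is I_k = A + B·cos(φ + δ_k) and the phase follows from φ = atan2(I4 − I2, I1 − I3). A minimal sketch under that assumption:

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Four-step phase-shift algorithm (shifts 0, pi/2, pi, 3*pi/2).
    With I_k = A + B*cos(phi + delta_k):
        I4 - I2 = 2B*sin(phi),  I1 - I3 = 2B*cos(phi),
    so phi = atan2(I4 - I2, I1 - I3), independent of A and B."""
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic fringes with a known linear phase ramp.
phi = np.linspace(-np.pi + 0.1, np.pi - 0.1, 256)
A, B = 100.0, 50.0
frames = [A + B * np.cos(phi + d) for d in (0, np.pi / 2, np.pi, 3 * np.pi / 2)]
phi_est = four_step_phase(*frames)          # recovers phi modulo 2*pi
```

Because the formula is a single arctangent per pixel, it maps naturally onto a pipelined lookup-table processor, which is what makes video-real-time fringe analysis feasible.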
A complex for astrophysical applications was designed for the flight of the second Bulgarian cosmonaut. The complex, named ROZHEN, contains an astronomical CCD camera with a cooled sensor and a 16-bit onboard computer for data handling and storage of observed images. The possibility of flexible dialog with the cosmonaut-operator controlling the experiments allows for higher reliability of the obtained results and for their onboard estimation. Thus we were able to perform many astrophysical experiments and to resolve some astronavigational problems. The distributed processing system and the specialized architecture of the complex enabled efficient onboard two-stage compression of the registered images. The telemetric channel provided system performance and express telemetric control of the archived observations. The architecture, the possibilities and the data from ground-based tests of the complex are discussed in this paper.
Halftone processes convert a graytone image represented by its spatial intensity distribution I(x,y) into a binary (quantized to two intensity levels) image B(x,y). Advantages of binary images are, for example, the use of binary-working hardware to display the image, or the reduced capacity needed to store or to transmit it. Different coding techniques, such as pulse density modulation (PDM) and pulse width modulation (PWM), adapt the resulting binary images to special hardware conditions.
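As a concrete illustration of graytone-to-binary conversion, here is Floyd-Steinberg error diffusion, one common pulse-density-style halftone technique; it is offered as a generic example and is not necessarily among the coding techniques the paper compares:

```python
import numpy as np

def floyd_steinberg(img):
    """Floyd-Steinberg error diffusion. Input values in [0, 1]; each pixel
    is thresholded at 0.5 and the quantization error is distributed to
    not-yet-visited neighbours with weights 7/16, 3/16, 5/16, 1/16."""
    f = img.astype(float).copy()
    h, w = f.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            new = 1.0 if f[y, x] >= 0.5 else 0.0
            err = f[y, x] - new
            out[y, x] = int(new)
            if x + 1 < w:
                f[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    f[y + 1, x - 1] += err * 3 / 16
                f[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    f[y + 1, x + 1] += err * 1 / 16
    return out

gray = np.full((32, 32), 0.25)      # uniform 25 % gray field
bits = floyd_steinberg(gray)        # binary, but mean density stays ~0.25
```

The diffusion of the quantization error is what preserves the local mean intensity, the defining property of a pulse-density representation.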
Colour displays in image processing admit grey scales as well as colour scales for the visual representation of image information. Pseudocolours can be used to enhance the discriminability of different parts of the image. Luminance scales, on the other hand, are preferred for representing fine detail as human colour vision is limited to relatively low spatial frequencies. In the present paper, the generation of quantitative luminance and colour scales with CRT displays is discussed. This includes the colour characteristics of CRTs as well as a recapitulation of the CIE 1931 (X, Y, Z) tristimulus colorimetry system. The CIE 1976 UCS diagram (u', v') is used as an approximation to a perceptually uniform representation of chromaticity. On this basis, it is shown how, for each pixel, luminance and colour (hue) can be chosen independently. In order to add colour as a further visual dimension to greyscale images, a set of equiluminant hues is proposed for each luminance level. The hues are chosen from equidistant steps around a circle in the (u', v') chromaticity diagram. This choice of colours is particularly suitable for the visual representation of a cyclic quantity like phase angle.
Many biological vision systems are efficiently equipped for detection and analysis of the essential perceptual primitive of motion. As technology advances, the ambitions of including this ability in computer vision systems appear more and more realistic. However, to become of practical use, real time performance (in some sense) is required, and the current possibilities for this are still limited. Many different approaches to motion analysis, and in particular to its prerequisite motion detection, have been proposed in the literature. Motion information may be derived from image analysis systems at different levels of the general scheme of image processing and interpretation. However, to achieve a result in terms of motion descriptions, most of these methods depend extensively on image preprocessing (and interpretation) or on integration into an image postprocessing (and interpretation) system. A number of methods are reviewed and evaluated with regard to dependency on supplementary processing and with regard to current potential for real time application. Also we discuss their weaknesses due to problems of ambiguity and noise. However, one can take into account that real time operation also means continuous operation and thereby that a temporal context is provided. This allows concentration on changes most of which are predictable, and savings in computing as well as improved robustness to noise and ambiguities can be achieved. In conclusion we find that high level token matching currently is one of the most promising approaches, and an experimental implementation is used to demonstrate a possible approach to motion analysis in real time.
An iterative scheme for computing the three-dimensional position and the surface orientation of an opaque object from a single shaded image is proposed. This method demonstrates that calculating the depth (distance) between the camera and the object from one shaded video image is possible. Most previous research work on this "Shape from Shading" problem involved the determination of surface orientation only. To measure the depth of an object, a point light source is used. We have established an expression for the image intensity which depends on the light source position, the surface orientation of the object, the depth of the object, and the reflectance properties of the surface. Assuming that the object surface is uniformly Lambertian, the measured intensity level at a given pixel becomes a function of the surface orientation and the depth. To solve this non-linear equation, the surface also has to satisfy smoothness conditions. The equation is solved iteratively using standard methods of the calculus of variations. The theoretical result is tested by experiments. Three objects (plane, cylinder, and sphere) are used. These initial results are very encouraging, since they match the theoretical calculations to within 10% error.
Presentation of images on some kinds of electronic displays requires a binary output. Several so-called halftone techniques are used to binarize graytone images. Among these we have studied implicit passive methods. In electronic halftoning the images are presented on a discrete raster.
The main problems involved in the reconstruction of 3-D profiles are related to the definition and resolution of a suitable equation set for the calibration of the whole system, and to the selection of proper criteria to decide whether matched points are reliable or not. This paper presents a complete system capable of extracting data and measurements from digital stereo images obtained from aircraft or space sensors. Both area-based and feature-based techniques have been investigated. In order to perform efficient point matching, the system makes use of criteria based on a modified test, taking into account the shape of the correlation function in a neighbourhood of its maximum. Some experimental results of automatic point matching are reported for different values of the characteristic parameters.
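The area-based side of such a matcher typically rests on normalised cross-correlation; the sketch below shows that core (exhaustive search for the correlation peak) under generic assumptions, without the paper's peak-shape reliability test, which would be layered on top:

```python
import numpy as np

def ncc(patch, template):
    """Zero-mean normalised cross-correlation of two equal-size patches."""
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom else 0.0

def match(image, template):
    """Exhaustive area-based matching: return the top-left (y, x) of the
    best-correlating window and its score. Examining the shape of the
    correlation surface around this maximum (as the abstract describes)
    is the natural next step for validating the match."""
    th, tw = template.shape
    h, w = image.shape
    best, pos = -2.0, (0, 0)
    for y in range(h - th + 1):
        for x in range(w - tw + 1):
            c = ncc(image[y:y + th, x:x + tw], template)
            if c > best:
                best, pos = c, (y, x)
    return pos, best

rng = np.random.default_rng(1)
img = rng.random((20, 20))
tmpl = img[7:12, 9:14].copy()       # template cut from the image itself
pos, score = match(img, tmpl)       # recovers the true offset (7, 9)
```

A sharp, isolated peak suggests a reliable match; a flat or multi-modal correlation surface is exactly the case the paper's modified test is designed to reject.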
We propose a new type of transformation, closely related to the chord and Hough transforms, which can be very useful in the recognition of binary images. In this method we use lines of various positions and directions which intersect the area of interest. Each line divides the image into two parts, or sectors. The areas of the sectors are assigned to the line, and statistics of the sectors are calculated (for the set of lines). Calculations have shown that this new transformation is insensitive to noise (to a certain extent). Therefore, it can be used for noise-insensitive pattern recognition.
In this paper, morphological filters, which are commonly used to process either 2D or multidimensional static images, are generalized to the analysis of time-varying image sequences. The introduction of the time dimension then induces interesting properties when designing such spatio-temporal morphological filters. In particular, the specification of spatio-temporal structuring elements (equivalent to time-varying spatial structuring elements) can be adjusted according to the temporal variations of the image sequences to be processed: this allows deriving specific morphological transforms to perform noise filtering or moving-object discrimination on dynamic images viewed by a non-stationary sensor. First, a brief introduction to the basic principles underlying morphological filters will be given. Then, a straightforward generalization of these principles to time-varying images will be proposed. This will lead us to define spatio-temporal opening and closing and to introduce some of their possible applications to process dynamic images. Finally, preliminary results obtained using a natural forward-looking infrared (FLIR) image sequence are presented.
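A spatio-temporal opening can be sketched by treating the sequence as a 3-D (t, y, x) volume and applying grayscale erosion then dilation with a flat box structuring element; this toy version (flat box only, wrap-around borders via `np.roll`) is an assumption-laden illustration, not the paper's adjustable structuring elements:

```python
import numpy as np

def _box_filter(stack, size, reduce_fn):
    """Separable sliding min/max over a (t, y, x) volume with a flat box
    structuring element. Borders wrap around for simplicity."""
    out = stack.copy()
    for axis, r in zip((0, 1, 2), size):
        shifted = [np.roll(out, s, axis=axis) for s in range(-(r // 2), r // 2 + 1)]
        out = reduce_fn(shifted, axis=0)
    return out

def st_opening(stack, size=(3, 3, 3)):
    """Spatio-temporal opening = erosion followed by dilation.
    Removes bright structures smaller than the structuring element in
    space *and* time, e.g. single-frame noise spikes."""
    eroded = _box_filter(stack, size, np.min)
    return _box_filter(eroded, size, np.max)

# A one-frame bright spike (noise) vs. a structure present in all frames.
seq = np.zeros((5, 8, 8))
seq[2, 4, 4] = 1.0            # transient spike -> removed by the opening
seq[:, 1:4, 1:4] = 1.0        # persistent 3x3 patch -> preserved
opened = st_opening(seq)
```

The temporal extent of the box is the knob the paper discusses: lengthening it suppresses faster-moving or shorter-lived structures, which is what enables moving-object discrimination.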
A new approach to curve enhancement in gradient images is presented. Its basic idea is a rule-based variant of the relaxation process. In order to create rules we define a set of attributes, each of which has an associated set of attribute values. These are used to create the condition parts of the rules. Attributes and their values describe typical constellations of a curve in the vicinity of the central pixel of a window. These constellations are initially generated by a gradient operator followed by a nonmaxima absorption operation. If the actual constellation of a curve in a window matches one of the rules, its action part is activated. This part of the rule concerns the updating of the central pixel. The updating process follows the well-known relaxation scheme: if the constellation indicates a "good" curve, the gradient magnitude of the central pixel is "reinforced"; however, if the constellation is "fuzzy", the magnitude is "weakened". Using an induction algorithm, the rule base is transformed into a compact decision tree that controls the relaxation process according to the current constellation. The method was found to perform very fast compared to other algorithms and to produce thin curves representing the local contrast. Moreover, the method is highly flexible: changes of parameters and/or rules make modifications easy to implement.
The Karhunen-Loeve (K-L) expansion is largely used in digital picture compression. We present a new algorithm to compute the K-L eigenfunctions and eigenvalues for a Gaussian stochastic process whose time elapses according to an arbitrary law rather than uniformly. These eigenfunctions are proved to be time-rescaled Bessel functions of the first kind having their order depending on the time. The K-L eigenvalues are proved to be the zeros of a linear combination involving the Bessel functions and their partial derivatives of the first order. Also, a study is made of the energy of the time-rescaled Gaussian processes, and we show that the analytical treatment can be pushed up to the cumulants of the energy distribution. Moreover, we have found the relationship between the time-rescaling function and the velocity of a relativistically moving body; that is, we have related the K-L expansion to both the special and the general theory of relativity. This appears to pave the way to a general method for K-L compression in the digital picture processing of a relativistic source.
The Halley Multicolour Camera on board ESA's spacecraft GIOTTO took more than 2200 images of comet Halley during the flyby on March 13, 1986. The dynamic range of most of the clear filter images is about 1000. The last images show many details of the cometary nucleus and its environment. Due to the spin of the spacecraft, the exposure times had to decrease down to 0.3 ms per image pixel in order to achieve the desired resolution. Consequently, the later images are more affected by noise contributions. A method is described to improve the image quality by iteratively comparing a particular image with the mean of five neighboring images. Since all images were taken at different distances, procedures have been introduced to rescale the images to allow coregistration and addition. The dynamic range increases, and noise and even calibration errors can be isolated. The complex geometrical distortions of the images caused by the spin of the spacecraft can be corrected. Several images can be combined to achieve one high quality composite image showing detailed structures of the comet.
Three parameters (amplitude, frequency, and phase) describe the behaviour of a coherent noise contribution to an image. Therefore, this noise can be removed, yielding a significant improvement of the image quality. Usually FFT (Fast Fourier Transform) methods are used to determine these parameters. For small images the results can be disappointing. If the image size is not a multiple of the dominating noise wavelength, then the energy of the noise will be distributed over additional frequency locations. A technique is described that improves the accuracy of the noise reduction by successive adjustments of the size of the image array to integer multiples of the actual wavenumbers. Prefiltering routines are applied to the images in both the spatial and the frequency domains to optimize the conditions for a non-interactive coherent noise removal. The efficiency of this method is demonstrated on images of comet Halley taken by the Halley Multicolour Camera on board ESA's spacecraft GIOTTO.
The aim of the paper is twofold: firstly, defining a method for object extraction from gray-level graphical images; secondly, recognizing graphical objects in order to return an automatic description of a given complex drawing. The segmentation uses histogram mode clustering, which groups the pixels by gray-level intensity in order to define a series of thresholds. A multithreshold method is developed to insert local properties into the multimodal histogram and to realize an automatic threshold selection. The automatic processing of gray-level images representing technical drawings may be simplified by some given drawing criteria. In order to recognize the graphical objects, a goal-driven approach is adopted. A structural model is then defined in which the domain knowledge is represented by a semantic network. Finally, the semantic network knowledge is used to recognize a part or set of parts of a technical drawing, according to a given strategy.
The paper presents the improvement achievable in pattern recognition systems based on polar coding of images. The idea of polar coding of images is not a new one. Existing methods can be improved if, instead of radius and angle as the representatives of the image boundary, other features are extracted, such as a histogram representing the concentric luminous intensity of the image. The histogram is formed in such a way that it shows the luminous intensity of concentric rings centered at the template's centroid. The number of pixels per ring increases with growing radius and is greatest for the outermost ring. Therefore, the integrated luminous information per ring must be normalized in order to form a histogram representation of the polar image. Binary images are analyzed and a generalization for gray scale images is given.
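The ring-histogram idea can be sketched directly; the binning scheme and normalisation below (mean intensity per ring, rings centred at the intensity centroid) are plausible assumptions rather than the paper's exact definitions:

```python
import numpy as np

def ring_histogram(img, n_rings):
    """Rotation-invariant polar signature: mean intensity of concentric
    rings centred at the intensity centroid. Taking the ring *mean*
    (rather than the sum) normalises away the growing ring area."""
    img = img.astype(float)
    ys, xs = np.indices(img.shape)
    total = img.sum()
    cy, cx = (ys * img).sum() / total, (xs * img).sum() / total
    r = np.hypot(ys - cy, xs - cx)
    edges = np.linspace(0.0, r.max() + 1e-9, n_rings + 1)
    hist = np.zeros(n_rings)
    for i in range(n_rings):
        mask = (r >= edges[i]) & (r < edges[i + 1])
        hist[i] = img[mask].mean() if mask.any() else 0.0
    return hist

# A centred disc: bright inner rings, dark outer rings -- and the same
# signature after the image is rotated, since rings are rotation-symmetric.
yy, xx = np.indices((33, 33))
disc = (np.hypot(yy - 16, xx - 16) <= 8).astype(float)
h1 = ring_histogram(disc, 4)
h2 = ring_histogram(np.rot90(disc), 4)      # identical signature
```

Because the signature depends only on radial distance from the centroid, it is invariant to rotation and translation by construction, which is the appeal of polar coding.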
A new generalized invariant moment theory (GIMT) is presented in this paper. The fundamental ideas of GIMT are based on the concepts of the generalized image moment and of length invariance under rotational transformation. The generalized invariant moments in this theory are invariant under translation, scale and rotation, and have definite significance. The relation between Hu's theory and our results is discussed. We present the generalized invariant moments of simple figure and letter images obtained by computer simulation, and discuss the real-time extraction of generalized invariant moments with a hybrid optical/digital system.
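For readers unfamiliar with the classical theory that GIMT generalises, here is the first Hu invariant φ1 = η20 + η02 computed from normalised central moments; this is standard Hu theory, not the paper's generalized moments:

```python
import numpy as np

def hu_first_invariant(img):
    """First Hu moment invariant phi1 = eta20 + eta02.
    Central moments give translation invariance; the normalisation
    mu_pq / m00**(1 + (p+q)/2) adds scale invariance; the sum
    eta20 + eta02 is unchanged by rotation."""
    img = img.astype(float)
    ys, xs = np.indices(img.shape)
    m00 = img.sum()
    cy, cx = (ys * img).sum() / m00, (xs * img).sum() / m00

    def mu(p, q):
        return (((ys - cy) ** p) * ((xs - cx) ** q) * img).sum()

    def eta(p, q):
        return mu(p, q) / m00 ** (1 + (p + q) / 2)

    return eta(2, 0) + eta(0, 2)

# phi1 of a square is unchanged by a 90-degree rotation plus a shift.
img = np.zeros((64, 64))
img[10:20, 10:20] = 1.0
a = hu_first_invariant(img)
b = hu_first_invariant(np.roll(np.rot90(img), 15, axis=1))
```

The higher Hu invariants φ2 ... φ7 follow the same pattern from η_pq up to third order; GIMT extends this family beyond Hu's seven.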
This work describes a statistical function, M(A), obtained by averaging cross products of the time-spatial interval probability density. In some cases, as for example deterministic low light level signals, M(A) contains the same information as the autocorrelation function, with an important improvement of the SNR. The measurement of M(A) can be applied to image classification in those cases in which we have the value of g for the reference images.
In this paper, we present a knowledge-based system for the segmentation and interpretation of 2-D images, specialized for neural cell recognition, and based on both region and edge information. Cell images are obtained by acoustic microscopy, and are characterized by nonuniform background and low contrast (especially for dendrite contours). Neural cells have compact shapes, while dendrites are thin and elongated. Consequently, for cell bodies, region information has proved basic, while edges are used to refine their contours, or to solve incorrect situations. Edges are of major importance for dendrite location, since their shape creates some difficulties for region-based segmentation. Elementary regions are provided by region-growing; edges are obtained by detecting zero-crossings of the second derivative of the grey-level behaviour. A symbolic representation of the above primitives is stored inside a Global Database (GDB), and contains information about properties and relations of regions and edges. Procedural knowledge, represented as production rules and organized at hierarchical levels, is applied by means of a rule interpreter, according to the current problem status stored inside the GDB. As a starting point, a region preclassification is performed on the basis of a priori knowledge about neuron and dendrite features; a refinement is subsequently obtained by employing alternately edges and regions again. The system output is a symbolic map which shows background, cell bodies and dendrites in different colours. Experimental results are presented and discussed.
In this paper, the focus is on low-level processing of SAR (Synthetic Aperture Radar) images, and the eventual goal is automatic classification employing various techniques based on Mathematical Morphology (MM). SAR images are characterized by considerable "speckle" noise, which gives rise to serious problems in early processing (filtering, edge detection). In order to overcome these problems, we have used the MM approach, in particular, operators for filtering such images to reduce "speckle" noise and to enhance straight lines, typical of man-made objects. Edges are detected and thinned to obtain as many continuous and closed contours as possible. Edge-based segmentation is then performed, and various features are obtained for each region. Moreover, we also use and discuss MM tools to compute the fractal dimension around each pixel with an adaptive technique. Finally, the resulting information is merged to achieve the correct splitting of an image into significant regions, each described by an appropriate set of features (shape, texture, skeleton, linear edges by the Hough transform, etc.), which are employed in the next classification step. Experimental results have been obtained by analyzing SAR images of a ground area in Algeria; they are shown and discussed in the paper.
A package simulating the performance of optical systems to be installed on board space probes is presented. The package is based on the use of 3-D CAD techniques and allows producing a set of artificially degraded images simulating what a space probe approaching a target would actually see. An animation film reproducing the approach can be obtained by linking the pictorial device to an acquisition system. A specific application of the package is described.
A combination of an X-ray absorption instrument and an image processing system is described, and data on cigarette densities are presented. Using an electromechanically controlled positioning unit, the automatic analysis of samples larger than the sensor area is possible by sequential image storage. The final analysis of the image of the total object results in characteristic functions and parameters which can be related statistically to material-specific measurements. This procedure is illustrated with a cigarette tobacco rod.
A complete internal structure analyzer for polymers, developed by us, is presented; it comprises a small-angle laser light scattering attachment, a video camera, an IBM PC/XT microcomputer, and a software package. Two-dimensional digital image processing techniques are applied for the first time to process the scattering pattern of a polymer, so that the internal structure parameters of the polymer can be quantitatively determined.
The photocarrier theory has recently been introduced into photoelasticity, moiré, speckle and holographic interferometry. In this paper, the general framework of photocarrier digital image processing is presented, and the system software is shown in program blocks. Processing techniques such as grouping and demodulation are discussed. Some new concepts are introduced and the carrier schemes are given. Some methods and the corresponding software system are developed, presenting a new way of image processing.
Deblurring images affected by severe defocus often gives poor results. A coherent optical method, joint Fourier transform correlation, is suggested for the recognition of defocused images. Based on the symmetry of the cross-correlation function between an out-of-focus and the in-focus image, recognition may be attempted simply by correlating the unknown out-of-focus image with a set of in-focus reference images.
The error introduced by shading correction in image signal detection is studied in terms of two models for image formation and nonlinear sensor characteristics. The error due to changes in the primary image signal, the dark signal and the constant field signal was investigated by simulation. It was found that the error is not sensitive to the models used, and that the error can be less than in the linear case. The simulation results can be used for selecting sensor characteristics and the useful dynamic range in terms of error tolerance.
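For reference, the standard linear flat-field form of shading correction (the baseline the paper's nonlinear models extend) can be sketched as follows; all variable names and the simulated sensor are assumptions for illustration:

```python
import numpy as np

def shading_correct(raw, dark, flat):
    """Linear flat-field (shading) correction:
        corrected = (raw - dark) / (flat - dark) * mean(flat - dark)
    'dark' is the dark-signal frame and 'flat' the constant-field frame;
    the rescaling by the mean gain restores the original signal level."""
    gain = flat.astype(float) - dark
    return (raw.astype(float) - dark) / gain * gain.mean()

# Simulated sensor with pixel-wise gain and offset (dark-signal) variations.
rng = np.random.default_rng(0)
gain = rng.uniform(0.8, 1.2, (16, 16))
offset = rng.uniform(5.0, 15.0, (16, 16))
scene = np.full((16, 16), 100.0)            # uniform true scene
raw = gain * scene + offset                 # shaded measurement
dark = offset.copy()                        # frame with no illumination
flat = gain * 200.0 + offset                # frame under uniform light
out = shading_correct(raw, dark, flat)      # shading removed exactly
```

For a strictly linear sensor the correction is exact, as above; the paper's point is to quantify how errors in `raw`, `dark` and `flat` propagate when the sensor characteristic is nonlinear.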
Transmission and reception of pictorial information in digital form generally requires high data rates that impact minimum channel bandwidths, signal power levels, or both. The advent of optical fibers for signal transmission has reduced channel bandwidth constraints; however, signal power requirements have often limited system performance in practical applications. Coherent signal detection techniques (similar to those implemented in microwave radio systems) have positively impacted the performance of optical front-ends used in certain digital receivers. In this paper, the use of multifrequency/multiphase signals for digital image transmission is proposed, in order to achieve high data throughput with moderate bandwidth and power requirements, while implementing coherent demodulation and a "direct bit detection" receiver. The proposed system utilizes 16 tones with quadriphase modulation to produce 64 signals, each representing 6 data bits. For image transmission at 6 bits per pixel, this scheme is attractive because the bits represented by each signal are recovered simultaneously using a novel "direct bit detection" scheme. It involves receiver coherent demodulation, dual correlations, and binary comparison operations made against zero thresholds. This results in a receiver signal processing algorithm that is independent of the input signal-to-noise ratio. The receiver's bit error rate is derived and evaluated, and performance plots are presented showing system efficiency.
In X-ray image intensifier (II)/TV-camera systems geometric distortions occur, e.g. due to the curved input screen of the II. For methods which are based on a pixelwise comparison of images, e.g. digital angiotomosynthesis, an accurate correction of these geometric distortions is absolutely necessary. For the application of tomosynthesis to coronary angiography, the correction must additionally be done in real time, because the reconstruction of the three-dimensional structure of the blood vessels has to be done while the patient is undergoing catheterization. This paper describes a digital correction unit which allows a large variety of geometric distortions to be corrected. It consists of an input memory for storing the distorted image, an output memory for storing the corrected image, and a special address memory which serves as an address table during the correction step. For each element of the output image, the location of the corresponding element of the distorted input image is determined in a preprocessing step and stored in the address memory. The actual correction of an image is then done while the image is copied from the input into the output memory. In this way 512x512 images can be corrected in real time by a 32-bit 68020-based microprocessor system.
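The address-table scheme can be sketched in software: build the table once in a preprocessing step, then correct each frame by pure lookup while copying. The toy one-pixel-shift distortion and nearest-neighbour lookup below are simplifying assumptions (the real unit handles curved II distortions):

```python
import numpy as np

def build_address_table(shape, source_of):
    """Preprocessing step: for every output pixel, compute and store the
    coordinates of the corresponding pixel in the distorted input image
    (the role of the unit's address memory). Coordinates are clamped to
    the image bounds."""
    h, w = shape
    table = np.zeros((h, w, 2), dtype=np.int32)
    for y in range(h):
        for x in range(w):
            sy, sx = source_of(y, x)
            table[y, x] = (min(max(sy, 0), h - 1), min(max(sx, 0), w - 1))
    return table

def correct(img, table):
    """Actual correction: a pure table lookup while copying the input
    image to the output memory -- no per-pixel arithmetic at run time,
    which is what makes the hardware real-time capable."""
    return img[table[..., 0], table[..., 1]]

# Toy distortion: the optics shift the scene one pixel to the right, so
# the source of output pixel (y, x) is input pixel (y, x + 1).
img = np.arange(64).reshape(8, 8)
distorted = np.roll(img, 1, axis=1)         # simulated distorted input
table = build_address_table(img.shape, lambda y, x: (y, x + 1))
restored = correct(distorted, table)
```

Since the table depends only on the (fixed) distortion, the expensive geometry is paid once, and per-frame work reduces to one memory read per output pixel.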
A laser-assisted fluorescence method followed by high-speed image processing has been newly developed for the rapid in-situ detection of latent prints on several kinds of surfaces. It is particularly effective for new building materials, leathers and adhesive tapes, sometimes even when an extended period has elapsed. The new apparatus has already been used by police departments in several countries and has contributed to law enforcement.