This paper presents a real-time system for automatic pedestrian recognition based on image sequence analysis. We first introduce a robust detector of moving objects in an uncontrolled urban environment. This motion detector relies on the stability of the background structure: verifying the presence of this background structure is sufficient to detect an intrusion (pedestrian, bus, car, ...) in the surveillance area. The main advantage of this detector is that it is less affected by illumination changes than classical detection algorithms; detection quality further improves when the background is highly textured. The second step interprets this detection signal in order to separate pedestrians from other moving objects. The pedestrian discrimination process has been tested on real traffic image sequences, with a correct discrimination rate of about 85%.
In both the public and private sectors, there is increasing demand for systems that take advantage of on-line sensor data and especially real-time video data. In many of these applications, special-purpose sensors, coupled with structured work environments, have made it possible to deploy versions of working systems. Despite these early successes, however, the economics of deployment remain unfavorable. In the security and surveillance domain, the ability to quickly detect and classify objects can add value to systems monitoring an interior area for intruders or performing outdoor perimeter control. Current-generation motion-detection systems are hampered by their inability to recognize or classify the objects causing the motion. The coupled detection, tracking, and recognition techniques reported in this paper would immediately increase the value of intrusion-detection systems and reduce the personnel needed to man them. This paper reports progress in three areas. The first is real-time perception mechanisms involving motion-based detection and tracking of figures with a camera under pan-tilt-zoom control. The second is recognition mechanisms that allow reacquisition of tracked targets when they undergo various kinds of occlusion; these mechanisms are also used for discriminating people from other moving agents. The third is an analysis of trends in real-time system performance, arguing that the gap between commodity PC platforms and special-purpose accelerators is narrowing.
We have designed a neuro-chip for the Kohonen learning vector quantization (LVQ) algorithm and fabricated it with gate arrays, yielding 12 neurons per chip. Because the gate array restricts the number of transistors, we propose a simplified version of the Kohonen LVQ algorithm; moreover, fixed-point calculation is inevitable in a neuro-chip. In this paper we demonstrate the good performance of our chip on bit-pattern image processing. For real-time systems, learning can be done in real time, as can the I/O response: the neuro-chip executes the learning procedure (the Kohonen LVQ algorithm) in real time. The first-version chip (already realized) can process 32-bit patterns, and the second version will be enlarged to 256-bit pattern processing. The number of neurons grows with the number of chips connected to a bus. The demonstration board using first-version chips includes four chips, i.e., 48 neurons, which corresponds to recognition of 48 patterns.
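The abstract does not reproduce the simplified learning rule, but the core of a standard Kohonen LVQ1 update, the kind of procedure such a chip accelerates, can be sketched in floating-point software as follows (the prototype placement, learning rate, and epoch count are illustrative assumptions, not the chip's fixed-point implementation):

```python
import numpy as np

def lvq1_train(prototypes, labels, samples, sample_labels, lr=0.1, epochs=10):
    """Basic LVQ1: move the winning prototype toward a same-class sample,
    and away from a different-class sample."""
    protos = prototypes.astype(float).copy()
    for _ in range(epochs):
        for x, y in zip(samples, sample_labels):
            # Find the nearest prototype (the "winner")
            d = np.linalg.norm(protos - x, axis=1)
            w = int(np.argmin(d))
            # Reinforce if classes match, repel otherwise
            sign = 1.0 if labels[w] == y else -1.0
            protos[w] += sign * lr * (x - protos[w])
    return protos

# Two well-separated 2-D classes
samples = np.array([[0.0, 0.0], [0.1, 0.1], [1.0, 1.0], [0.9, 1.1]])
sample_labels = np.array([0, 0, 1, 1])
protos = lvq1_train(np.array([[0.2, 0.2], [0.8, 0.8]]),
                    np.array([0, 1]), samples, sample_labels)
# After training, each sample's nearest prototype carries its own label
pred = [int(np.argmin(np.linalg.norm(protos - x, axis=1))) for x in samples]
print(pred)
```

In hardware, the distance computation and winner selection are what parallelize naturally across the 12 neurons of each chip.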
The Hartmann-Shack method is widely used in adaptive optics systems for wavefront sensing. A data processing system is employed in the sensor for wavefront gradient computation, wavefront reconstruction, and control. To obtain a wide control bandwidth, the system must meet its processing requirements in real time. In this paper, a parallel processing architecture called the pipelined multiple-SIMD (PMSIMD) architecture is presented. In this system, image acquisition, wavefront gradient computation (WFGC), and wavefront reconstruction computation (WFRC) are conducted in a pipeline, while the SIMD architecture is employed within both the WFGC and WFRC stages. Computer simulation shows that a system based on the PMSIMD architecture is capable of processing video images of 128x128 pixels at 1000 frames per second with a time delay of less than 1/4 of the frame period.
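As a rough software illustration of the WFGC stage (the subaperture grid, spot model, and array sizes below are assumptions for illustration, not the paper's implementation): a Hartmann-Shack frame is divided into subapertures, and each spot's centroid offset from its subaperture centre is proportional to the local wavefront gradient.

```python
import numpy as np

def subaperture_gradients(frame, n_sub):
    """Split a Hartmann-Shack frame into n_sub x n_sub subapertures and
    return each spot's centroid offset from the subaperture centre,
    which is proportional to the local wavefront gradient."""
    h, w = frame.shape
    sh, sw = h // n_sub, w // n_sub
    grads = np.zeros((n_sub, n_sub, 2))
    ys, xs = np.mgrid[0:sh, 0:sw]
    for i in range(n_sub):
        for j in range(n_sub):
            sub = frame[i*sh:(i+1)*sh, j*sw:(j+1)*sw].astype(float)
            total = sub.sum()
            if total > 0:
                cy = (ys * sub).sum() / total
                cx = (xs * sub).sum() / total
                grads[i, j] = (cy - (sh - 1) / 2, cx - (sw - 1) / 2)
    return grads

# One bright spot displaced from the centre of each 4x4 subaperture
frame = np.zeros((8, 8))
for i in range(2):
    for j in range(2):
        frame[i*4 + 1, j*4 + 2] = 1.0   # subaperture centre is (1.5, 1.5)
g = subaperture_gradients(frame, 2)
print(g[0, 0])   # (dy, dx) offset from the subaperture centre
```

Each subaperture's centroid is independent of the others, which is exactly what makes the WFGC stage amenable to SIMD execution.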
Keywords: parallel processing architecture, wavefront, algorithms
In this communication we present an image-based object detection algorithm applied to intrusion detection. The algorithm is based on comparing input edges with temporally filtered edges of the background. It is characterized by very low computational and memory loads, high sensitivity to the presence of physical intruders, and high robustness to both slow and abrupt lighting changes, and it can be implemented on an inexpensive digital signal processor. It was tested on a database of about one thousand gray-level CIF-format frames representing static scenes with various contents (light sources, intruders, lighting changes), and no false alarms or detection failures occurred. The algorithm involves very few parameters, and their values do not require fine tuning: the same set of parameters performs equally well in different conditions, i.e., different scenes, various lighting changes, and various object sizes.
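The paper gives no pseudocode; a minimal sketch of the general scheme, comparing current edges against a temporally filtered background edge map (the gradient operator, thresholds, and update rate below are illustrative assumptions, not the paper's values), might look like:

```python
import numpy as np

def edge_map(img, thresh=0.2):
    """Crude binary edge map from horizontal/vertical gradient magnitude."""
    gy = np.abs(np.diff(img.astype(float), axis=0, prepend=img[:1]))
    gx = np.abs(np.diff(img.astype(float), axis=1, prepend=img[:, :1]))
    return (gx + gy) > thresh

class EdgeIntrusionDetector:
    """Compare current edges with a temporally filtered background edge map.
    Edge positions are largely invariant to global lighting changes, which
    is what gives this family of schemes its robustness."""
    def __init__(self, alpha=0.05, alarm_ratio=0.02):
        self.alpha = alpha              # background update rate
        self.alarm_ratio = alarm_ratio  # fraction of new edges that triggers an alarm
        self.bg = None                  # running average of edge maps

    def process(self, frame):
        edges = edge_map(frame).astype(float)
        if self.bg is None:
            self.bg = edges.copy()
            return False
        # New edges: present now but absent from the filtered background
        new_edges = (edges > 0.5) & (self.bg < 0.25)
        intrusion = new_edges.mean() > self.alarm_ratio
        if not intrusion:               # only learn intruder-free frames
            self.bg = (1 - self.alpha) * self.bg + self.alpha * edges
        return intrusion

det = EdgeIntrusionDetector()
background = np.zeros((32, 32)); background[:, 16:] = 1.0  # one vertical edge
for _ in range(5):
    det.process(background)
intruder = background.copy(); intruder[10:20, 4:10] = 1.0   # bright object enters
print(det.process(intruder))
```

A global brightness shift moves gray levels but not edge positions, so the comparison above stays quiet under lighting changes while a physical object introduces new edges.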
In real-time image processing, an application must satisfy a set of timing constraints while ensuring the semantic correctness of the system. Because of the natural structure of digital image data, pure data and task parallelism have been used extensively in real-time image processing to reduce the time needed to process image data. These types of parallelism are based on splitting the execution load of a single processor across multiple nodes; however, execution of all parallel threads is mandatory for correctness of the algorithm. Speculative execution, on the other hand, is an optimistic execution of parts of the program based on assumptions about program control flow or variable values; rollback may be required if the assumptions turn out to be invalid. Speculative execution can enhance average-case, and sometimes worst-case, execution time. In this paper, we examine various image processing techniques to investigate the applicability of speculative execution, and we identify opportunities for safe and profitable speculation in image compression, edge detection, morphological filters, and blob recognition.
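As a toy illustration of the speculate-and-roll-back idea in image compression (the uniform-row assumption and run-length coder are illustrative examples, not the paper's techniques): speculate that a scan row is uniform, commit the cheap result when the check confirms it, and fall back to full encoding otherwise.

```python
import numpy as np

def encode_row_speculative(row):
    """Speculate that the row is uniform (common in document images):
    commit a single cheap run-length token when the speculation check
    passes, and roll back to full run-length encoding when it fails."""
    first = row[0]
    if np.all(row == first):                 # speculation check
        return [(int(first), len(row))]      # speculative fast path committed
    # Rollback path: full run-length encoding
    runs, start = [], 0
    for i in range(1, len(row)):
        if row[i] != row[start]:
            runs.append((int(row[start]), i - start))
            start = i
    runs.append((int(row[start]), len(row) - start))
    return runs

uniform = np.zeros(16, dtype=np.uint8)
mixed = np.array([0, 0, 1, 1, 1, 0], dtype=np.uint8)
print(encode_row_speculative(uniform))   # fast path
print(encode_row_speculative(mixed))     # rollback path
```

Both paths produce the same encoding; the speculation only changes how fast the common case completes, which is the sense in which it can improve average-case execution time.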
Document image processing has become an increasingly important technology in the automation of office documentation tasks. Automatic document scanners such as text readers and OCR (optical character recognition) systems are essential components of systems capable of those tasks. One problem in this field is that the document to be read is not always placed correctly on a flat-bed scanner: the document may be skewed on the scanner bed, resulting in a skewed image. This skew has a detrimental effect on document analysis, document understanding, and character segmentation and recognition. Consequently, detecting the skew of a document image and correcting it are important issues in realizing a practical document reader. In this paper we describe new algorithms for skew detection and skew correction. We then compare the performance and results of our skew detection algorithm with those of other published methods by O'Gorman, Hinds, Le, Baird, and Postl. Finally, we discuss the theory of skew detection and the different approaches taken to solve the problem of skew in documents. The skew detection algorithm we propose has been shown to be extremely fast, with run times averaging under 0.25 CPU seconds to calculate the angle on a DEC 5000/20 workstation.
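The paper's own algorithm is not reproduced here; a common baseline from the same family, projection-profile skew estimation, can be sketched as follows (the angle grid, scoring function, and synthetic test image are illustrative assumptions):

```python
import numpy as np

def projection_profile_skew(img, angles):
    """Estimate document skew by testing candidate angles: at the true
    skew, text rows align, so the horizontal projection profile is most
    'peaky' (largest variance of row sums)."""
    ys, xs = np.nonzero(img)              # coordinates of ink pixels
    best_angle, best_score = 0.0, -1.0
    for a in angles:
        t = np.deg2rad(a)
        # Project each ink pixel onto rows of the de-skewed image
        rows = np.round(ys * np.cos(t) - xs * np.sin(t)).astype(int)
        rows -= rows.min()
        score = np.bincount(rows).var()
        if score > best_score:
            best_angle, best_score = a, score
    return best_angle

# Synthetic "text lines" skewed by 3 degrees
img = np.zeros((100, 100), dtype=np.uint8)
xs = np.arange(100)
for base in (20, 40, 60, 80):
    ys = np.round(base + xs * np.tan(np.deg2rad(3))).astype(int)
    img[ys, xs] = 1
est = projection_profile_skew(img, np.arange(-5, 5.5, 0.5))
print(est)
```

Methods in this family trade the angular resolution of the candidate grid against run time, which is one reason published detectors differ so much in speed.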
This paper presents an implementation and enhancement of the SMSE (scaled mean square error) filter using a Hopfield neural network based algorithm. We show the development of the original SMSE filter from the MMSE (minimum mean square error) filter and the PMSE (parametric mean square error) filter, both of which suffer from an oversmoothing phenomenon. The SMSE filter is more efficient than the PMSE filter in terms of noise removal, as it does not take into account all the correlation factors used for image restoration. An adaptive SMSE filter is also presented. The adaptive SMSE filter uses a mask-operation technique: a user-defined mask is moved across the image, and the filtering parameters are computed from the local image statistics of the region below the mask. The original and adaptive SMSE filters are implemented using a Hopfield neural network based algorithm, and a number of experiments were performed to test the filter characteristics.
Proc. SPIE 2661, Three-dimensional imaging of surfaces for industrial applications: integration of structured light projection, Gray code projection, and projector-camera calibration for improved performance, 0000 (5 March 1996); https://doi.org/10.1117/12.234641
In this paper, two measurement procedures aimed at improving the performance of an optical whole-field profilometer based on grating projection are presented. The first procedure is based on the Gray code method: it performs a space encoding of the measurement area and yields 3-D range images in which even sharp discontinuities of the object shape can be measured. The second procedure performs the calibration of the optical components of the system, in order to describe the 3-D profile in a global coordinate system and to limit the use of a reference surface to the calibration of the profilometer only. The two procedures are detailed, and some interesting experimental results are reported.
Latency and hardware compactness are two important problems for real-time image processors for moving-object tracking. We have developed a compact, self-contained real-time image processor implemented on a single double-height VME board. The processor can execute the major processing steps for moving-object tracking within a single video field time: preprocessing, binarizing, labeling, feature extraction, and feature evaluation. We obtain sorted feature vectors at the same time as image data is read out from the sensor; here a feature vector represents the area, centroid, and maximum intensity of each connected region in the binarized image. Some conventional image processors can execute the above steps individually in real time and chain some of them in a pixel-pipeline manner; however, it is difficult to integrate feature extraction and feature evaluation into a pixel-pipeline path. For real-time execution of all steps, we developed a new architecture, particularly for the latter three steps. To minimize the hardware, we developed three ASICs: a labeler, a feature accumulator, and a sorter. To make the processor self-contained and scalable, it has an on-board microprocessor, a digital video bus interface, and an RS232C port, and it is VME compatible in both bus interface and mechanical dimensions.
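A software sketch of the latter three steps (labeling, feature accumulation, sorting), using an illustrative two-pass labeling with union-find rather than the ASICs' actual circuits:

```python
import numpy as np

def region_features(binary, intensity):
    """Label connected regions (4-connectivity, two passes) and accumulate
    each region's feature vector: area, centroid, and maximum intensity,
    returned sorted by decreasing area."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    parent = [0]                       # union-find over provisional labels

    def find(a):
        while parent[a] != a:
            a = parent[a]
        return a

    next_label = 1
    for y in range(h):                 # first pass: provisional labels
        for x in range(w):
            if not binary[y, x]:
                continue
            up = labels[y-1, x] if y else 0
            left = labels[y, x-1] if x else 0
            if up == 0 and left == 0:
                parent.append(next_label)
                labels[y, x] = next_label
                next_label += 1
            else:
                cand = [l for l in (up, left) if l]
                labels[y, x] = min(find(l) for l in cand)
                for l in cand:                     # merge equivalences
                    parent[find(l)] = labels[y, x]
    feats = {}
    for y in range(h):                 # second pass: accumulate features
        for x in range(w):
            l = labels[y, x]
            if l:
                r = find(l)
                a = feats.setdefault(r, [0, 0.0, 0.0, -np.inf])
                a[0] += 1; a[1] += y; a[2] += x
                a[3] = max(a[3], float(intensity[y, x]))
    out = [(a, (sy / a, sx / a), m) for a, sy, sx, m in feats.values()]
    return sorted(out, key=lambda t: -t[0])        # sort by area

b = np.zeros((6, 6), dtype=bool)
b[0:2, 0:2] = True          # 4-pixel region
b[3:6, 3:6] = True          # 9-pixel region
inten = np.arange(36).reshape(6, 6)
print(region_features(b, inten))
```

The accumulation step only needs per-region running sums and maxima, which is why it maps well onto a dedicated feature-accumulator chip fed pixel by pixel.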
The companies operating subways are very much concerned with counting the passengers traveling through their transport systems. One of the most widely used counting systems consists of a mechanical gate equipped with a counter. However, such simple systems cannot count passengers jumping over the gates, and passengers carrying large luggage or bags may have difficulty passing through them. The ideal solution is a contact-free counting system, which would be more comfortable for passengers to use. For these reasons, we propose to use a video processing system instead of mechanical gates. The optical sensors discussed in this paper offer several advantages, including well-defined detection areas, fast response time, and reliable counting capability. A new technology based on linear cameras has been developed and tested. Preliminary results show that the system is very efficient when the passengers crossing the optical gate are well separated; in other cases, such as compact crowd conditions, reasonable accuracy has been demonstrated. These results are illustrated by a number of sequences shot in field conditions. We believe that more precise measurements could be achieved in the compact-crowd case by other algorithms and line-image acquisition techniques that we are presently developing.
A novel adaptive multilevel classification and detection method that takes into account both the spectral and spatial characteristics of the data is proposed. Principal clusters, comprising background clusters and the predefined target clusters, are defined first, and classification is performed with a minimum-distance statistical classifier. Here the main concern is to minimize the misclassification rate by allowing pixels whose classification confidence is low to remain unclassified at this level. Candidate clusters for analyzing the unclassified pixels are defined next; they are determined from both the spatial and spectral neighborhoods, using the labels of already classified pixels. Using the defined candidate clusters, a mixing-model analysis is performed: the linear least-squares method is applied to determine the fractions of the candidate clusters in the corresponding pixel. The results of the mixing-model analysis are checked; if they are satisfactory, the next step is performed, and if not, the candidate cluster list is renewed. After this loop has been completed for all pixels in the image, target detection is performed by comparing the estimated quantity of each pixel's target endmember with predefined thresholds. Finally, the detected targets are clustered and their parameters are estimated. The proposed method was successfully applied to both synthetic and AVIRIS hyperspectral images of the Naval Air Station Fallon.
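The least-squares step of the mixing-model analysis can be sketched as follows (the endmember spectra and band count are illustrative):

```python
import numpy as np

def unmix(pixel, endmembers):
    """Least-squares estimate of the fractions of each candidate cluster
    (endmember) in a mixed pixel: solve  pixel ~= E @ f  for f."""
    E = np.asarray(endmembers, dtype=float).T      # bands x endmembers
    f, *_ = np.linalg.lstsq(E, np.asarray(pixel, dtype=float), rcond=None)
    return f

# Two 3-band endmembers and a 30/70 mixture of them
e1, e2 = np.array([1.0, 0.0, 0.5]), np.array([0.2, 1.0, 0.8])
pixel = 0.3 * e1 + 0.7 * e2
f = unmix(pixel, [e1, e2])
print(np.round(f, 3))
```

Restricting the endmember list to the candidate clusters drawn from a pixel's spatial and spectral neighborhood keeps this system small and well conditioned compared with unmixing against every cluster in the scene.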
This work presents a real-time underwater imaging system for identification and tracking of a submarine pipeline in a sequence of recorded images. The main novelty of the work lies in adopting an automatic approach based entirely on the analysis and interpretation of visual data, in spite of the various limitations on imaging underwater objects. The analysis starts from image processing operations (filtering, profile analysis, feature enhancement) implemented on a dedicated board; the system then employs an efficient dynamic process for recognizing the two contours of the pipeline. In each frame, the system determines the equations of the two straight lines corresponding to the pipeline contours. The system reaches satisfactory performance in real-time operation: up to eight frames per second on a Pentium-based PC. The results are all the more meaningful because the input images were acquired by three cameras mounted on a remotely operated vehicle travelling at one nautical mile per hour, without any attention to either illumination conditions or camera stability. This work originated from the interest of Snamprogetti in enhancing the level of automation in submarine pipeline inspection.
This paper introduces a new GOS/MAX filter, which has performance similar to that of other GOS filters while preserving the target peak. A contour-matched tracker is employed to identify the target in addition to tracking it. Simulation results show that the GOS/MAX filter is an optimal edge enhancer and that the tracker has high location precision and good robustness.
Optical aberrations are characterized by orthogonal basis functions composed of discretized Zernike polynomials. The coefficient associated with each Zernike polynomial can be measured using a phase-diversity wavefront sensing technique. Nonlinear optimization techniques are traditionally utilized to calculate the Zernike coefficients in a serial manner. Even though this traditional method is attractive, calculating several Zernike coefficients for a given system is computationally a very formidable task; hence the method is not applicable in a real-time image reconstruction scheme. In this paper we first show that each Zernike coefficient can be calculated in parallel, independently of the others. Our method uses nonlinear optimization of a single variable only: a modified Gonsalves error metric function involving a single unknown aberration coefficient. Next, we describe an implementation of the algorithm on the IBM SP2 parallel computer. We used the PVM software to parallelize the computational tasks across the processors in a "master/slave" fashion, and we show that the computation can be performed efficiently using this strategy.
Key Words: optical aberration; Zernike polynomials; Zernike coefficients; phase diversity; parallel computation
In this paper we present a system for high-speed pixelwise spectral classification. The system is based on the line-imaging PGP (prism-grating-prism) spectrograph combined with the smart image sensor MAPP2200. The classification is implemented using a near-sensor approach in which linear discriminant functions are calculated using exposure-time modulation and analog summation of pixel data. After A/D conversion the sums are compared and classified pixels are output from the sensor chip. The theoretical maximum classified-pixel rate of the system is around 1 MHz, depending on the number of classes, etc. In most practical applications, however, the limit will be set by the available amount of light.
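The per-pixel decision performed after summation and A/D conversion amounts to evaluating linear discriminant functions and taking the maximum; a software sketch, with hand-picked illustrative discriminants rather than trained ones:

```python
import numpy as np

def classify_pixels(spectra, weights, biases):
    """Per-pixel spectral classification with linear discriminant
    functions: each class k scores  w_k . x + b_k, and the pixel is
    assigned to the class with the largest score."""
    scores = spectra @ weights.T + biases      # (pixels, classes)
    return np.argmax(scores, axis=1)

# Two classes in a 3-band space
weights = np.array([[1.0, 0.0, 0.0],    # class 0: strong band 0
                    [0.0, 1.0, 1.0]])   # class 1: bands 1 and 2
biases = np.array([0.0, -0.1])
spectra = np.array([[0.9, 0.1, 0.1],
                    [0.1, 0.6, 0.7]])
pred = classify_pixels(spectra, weights, biases)
print(pred)
```

In the near-sensor scheme the weighted sums are not computed digitally as above: the weights are realized through exposure-time modulation and the summation happens in the analog domain, so only the final comparison runs after A/D conversion.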
A recursive algorithm has been developed for LoG (Laplacian of Gaussian) filtering. We use an analytic method to obtain the z-transform of the LoG function in rational-function form; the structure of the recursive filters is defined by the order of the rational functions. The computational complexity of recursive filtering depends on the number of poles and zeros of the transfer function, i.e., on the structure of the recursive filter. It is independent of the size of the filter, which yields a substantial saving in computation: the algorithm has constant computational complexity per pixel. Various images have been tested. A general method for designing high-order recursive filters is also given.
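The constant cost-per-pixel property can be illustrated with a first-order recursive (IIR) smoothing filter followed by a discrete Laplacian. This is a much cruder filter than the paper's high-order rational approximation of the LoG transfer function, but it shows why the cost does not grow with the effective kernel width:

```python
import numpy as np

def recursive_smooth_1d(x, a):
    """Forward + backward first-order recursive (IIR) smoothing.  The cost
    per sample is constant regardless of the effective kernel width: the
    key property exploited by recursive LoG filtering."""
    y = np.empty_like(x, dtype=float)
    y[0] = x[0]
    for i in range(1, len(x)):            # causal pass
        y[i] = a * y[i-1] + (1 - a) * x[i]
    z = np.empty_like(y)
    z[-1] = y[-1]
    for i in range(len(x) - 2, -1, -1):   # anti-causal pass
        z[i] = a * z[i+1] + (1 - a) * y[i]
    return z

def recursive_log_1d(x, a):
    """Smooth recursively, then take a discrete Laplacian (2nd difference)."""
    s = recursive_smooth_1d(np.asarray(x, dtype=float), a)
    lap = np.zeros_like(s)
    lap[1:-1] = s[2:] - 2 * s[1:-1] + s[:-2]
    return lap

step = np.r_[np.zeros(10), np.ones(10)]
resp = recursive_log_1d(step, 0.7)
print(int(np.argmax(resp)), int(np.argmin(resp)))  # the zero-crossing brackets the edge
```

Increasing the smoothing parameter a widens the effective kernel without adding a single operation per sample, whereas an FIR (convolution) implementation would grow linearly with the kernel size.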
A high-speed image registration algorithm that determines the frame-to-frame translational motion in an image sequence is presented. Instead of exhaustive-search block-matching algorithms, we develop a hybrid search method based on a modified genetic algorithm combined with the SSDA (sequential similarity detection algorithm) concept, and an effective correlation image tracker implemented in software is achieved. Experimental results of the tracker on IR image data are given.
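The SSDA ingredient of the hybrid search can be sketched as follows (the block size, search range, and test data are illustrative, and the genetic-algorithm component is omitted): accumulate absolute differences pixel by pixel and abandon a candidate displacement as soon as the running sum exceeds the best total found so far.

```python
import numpy as np

def ssda_match(ref_block, frame, search=4):
    """Block matching with the SSDA idea: a candidate displacement is
    abandoned as soon as its running sum of absolute differences exceeds
    the best total found so far."""
    bh, bw = ref_block.shape
    best, best_dv = float('inf'), (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y0, x0 = search + dy, search + dx
            cand = frame[y0:y0 + bh, x0:x0 + bw]
            s = 0.0
            for r, c in zip(ref_block.astype(float).ravel(),
                            cand.astype(float).ravel()):
                s += abs(r - c)
                if s >= best:          # sequential abandonment test
                    break
            if s < best:
                best, best_dv = s, (dy, dx)
    return best_dv

rng = np.random.default_rng(0)
block = rng.random((8, 8))
frame = rng.random((8 + 2 * 4, 8 + 2 * 4))
dy, dx = 2, -1                         # plant the block at a known shift
frame[4 + dy:4 + dy + 8, 4 + dx:4 + dx + 8] = block
print(ssda_match(block, frame))
```

The early abandonment prunes most of the arithmetic at poor displacements while still returning the same minimum as an exhaustive sum-of-absolute-differences search.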