We report on the design, design issues, fabrication, and performance of a log-polar CMOS image sensor. The sensor is developed for use in a videophone system for deaf and hearing-impaired people, who cannot communicate through a 'normal' telephone. The system allows 15 detailed images per second to be transmitted over existing telephone lines, a frame rate sufficient for conversations by means of sign language or lip reading. The pixel array of the sensor consists of 76 concentric circles with (up to) 128 pixels per circle, 8013 pixels in total. The pixel pitch increases from 14 micrometers at the interior to 250 micrometers at the border. The 8013-pixel image is mapped (by the log-polar transformation) onto an X-Y-addressable 76 by 128 array.
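The log-polar mapping from a Cartesian point to a (ring, sector) address can be sketched as follows. The ring and sector counts match the abstract; the geometric (equal-ratio) ring spacing and the radius bounds are illustrative assumptions, not the sensor's actual geometry.

```python
import math

def log_polar_index(x, y, r_min=1.0, r_max=76.0, n_rings=76, n_sectors=128):
    """Map a Cartesian point to (ring, sector) indices.

    Ring boundaries are assumed to grow geometrically, so equal ring
    steps correspond to equal ratios of radius (the 'log' in log-polar).
    r_min/r_max are illustrative, not the sensor's real dimensions.
    """
    r = math.hypot(x, y)
    if r < r_min or r >= r_max:
        return None  # outside the annular pixel array
    ring = int(n_rings * math.log(r / r_min) / math.log(r_max / r_min))
    sector = int((math.atan2(y, x) % (2 * math.pi)) / (2 * math.pi) * n_sectors)
    return ring, sector
```

Because ring index is logarithmic in radius, a uniform ring step doubles the resolution near the center, which is what makes the coarse 76 by 128 array sufficient for sign-language detail at the fixation point.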
A 3D measuring system using the coded light approach yields, as its primary result, a map of code index values. The height is a function of the code index and the lateral point coordinates. One approach to obtaining the height values directly is to set up equations using an adequate mathematical model of the sensor. The parameters of these equations, some of them simple geometrical and optical lengths, must be determined very precisely; if they are not determined with sufficient precision, computation of the height values will not yield satisfying results. Another approach to calculating the height data from the code index and lateral coordinates is to set up a direct transformation between them. To determine the parameters of this transformation, reference objects with precisely known dimensions have to be measured. By inserting the known data (code index, lateral coordinates, and height data) into the equations, the parameters of the function can be determined. A great advantage of this approach is its independence from the modelling of the sensor: e.g. the orientation of the camera can be changed without changing the calibration process; a new calculation of the function parameters is sufficient.
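The calibration step described above, determining the parameters of the direct transformation from reference measurements, can be sketched as a least-squares fit. The linear-in-parameters model and the synthetic reference data below are purely illustrative assumptions; the paper's actual transformation is not specified in the abstract.

```python
import numpy as np

# Hypothetical reference data: code index c, lateral coordinates (x, y),
# and known heights z of a calibration object with known dimensions.
c = np.array([0.0, 1.0, 2.0, 3.0])
x = np.array([0.0, 1.0, 2.0, 0.0])
y = np.array([0.0, 0.0, 1.0, 2.0])

# Assumed simple model: z = a0 + a1*c + a2*x + a3*y (illustration only).
z = 2.0 + 0.5 * c + 0.1 * x - 0.2 * y  # synthetic ground truth

# Stack the design matrix and solve for the transformation parameters.
A = np.column_stack([np.ones_like(c), c, x, y])
params, *_ = np.linalg.lstsq(A, z, rcond=None)
# New heights then follow directly as A_new @ params, with no sensor model.
```

This is what makes the approach independent of the sensor model: re-orienting the camera only requires re-measuring the reference object and re-solving for `params`.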
3D measurement systems based on triangulation, such as light sectioning, structured light, or stereo vision, share a common problem: certain areas of the object surface are not illuminated, and others cannot be seen. In these regions no range data can be acquired, and the resulting range map is incomplete. A new measurement setup with structured light is presented that reduces this shadow/occlusion problem for many surfaces. In structured light measurement systems, several patterns are projected onto the object to be measured, and a CCD camera acquires images of the object illuminated with the structured light patterns. For each pixel of the CCD matrix there is a sequence of gray values that corresponds to a light plane of the projector. The position of the pixel in the CCD matrix determines a viewing beam. The intersection point of the viewing beam and the plane of light yields the three coordinates of a surface point of the object. In the new setup a pattern projector is used together with a camera. With a beam splitter, two color filters, and mirrors, two beams of different color and different angle of incidence are obtained. Thus, two colored patterns are superimposed on the object. A color CCD camera acquires the image for the range data measurement. Two of the three color channels each detect the projection pattern of the corresponding color. A range data image is calculated from the two measurements. A nearly complete range data image results, with no or only a few regions without range information.
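The triangulation step described above, intersecting the viewing beam of a camera pixel with the decoded light plane of the projector, can be sketched in a few lines. Coordinate frames and numbers here are illustrative assumptions.

```python
import numpy as np

def intersect_ray_plane(origin, direction, plane_point, plane_normal):
    """Intersect a camera viewing ray with a projector light plane.

    Returns the 3D surface point, i.e. the triangulation step of a
    structured-light measurement (frames and names are illustrative).
    """
    o = np.asarray(origin, float)
    d = np.asarray(direction, float)
    n = np.asarray(plane_normal, float)
    denom = n @ d
    if abs(denom) < 1e-12:
        return None  # viewing beam parallel to the light plane
    t = n @ (np.asarray(plane_point, float) - o) / denom
    return o + t * d
```

A shadow/occlusion gap corresponds exactly to a pixel for which no valid code (and hence no plane) was decoded; the two-color setup fills many such gaps because each channel sees the scene from a different angle of incidence.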
This paper compares the integration of measurement data with the data fusion of the same set of data, and shows the superiority of data fusion by applying certain information functions and additional transformations to the covariance matrices. The algorithm used is the linear Kalman filter, whose covariances can be computed without any measurement data. The improvement of the data fusion algorithm is demonstrated by partitioning the system into a canonical form (independent subsystems), which allows every set of data to be processed on its own (integration of data). In a further algorithm we evaluate the same sets of data, but this time the system model consists of the real physical relationships between the variables and therefore combines the different measurements into the desired estimates (data fusion). In a first step we compare the estimates of both algorithms and their variances to show that data fusion leads to more exact estimates. This is because the different measurement variables contain information not only about the value of 'their own' state variable, but also about the values of other state variables. To demonstrate the information gain we use Shannon's entropy, which provides a scalar measure of the total additional information contained in the measurement variables. To obtain the information gained for every single state variable, an additional transformation is necessary. This transformation (Karhunen-Loeve) and the fact that Shannon's entropy depends on the coordinate system finally demonstrate the improvement of the data fusion algorithm for every single state variable.
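The core comparison can be sketched numerically. Since the Kalman covariance update needs no measurement data, one can contrast the fused covariance (cross-covariances kept) with the "integration" covariance (subsystems treated independently) and score both with the Gaussian differential entropy. The numbers below are illustrative, not from the paper.

```python
import numpy as np

def gaussian_entropy(P):
    """Differential entropy of a zero-mean Gaussian with covariance P."""
    n = P.shape[0]
    return 0.5 * np.log((2 * np.pi * np.e) ** n * np.linalg.det(P))

# Two physically coupled state variables (illustrative covariance).
P = np.array([[1.0, 0.8],
              [0.8, 1.0]])
H = np.array([[0.0, 1.0]])  # the sensor measures only the second state
R = np.array([[0.1]])       # measurement noise variance

# Data fusion: the Kalman update propagates the cross-covariance,
# so measuring x2 also sharpens the estimate of x1.
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
P_fused = (np.eye(2) - K @ H) @ P

# Integration: the cross-covariance is discarded (independent
# subsystems), so the same measurement leaves x1 untouched.
P_indep = np.diag(np.diag(P))
K_i = P_indep @ H.T @ np.linalg.inv(H @ P_indep @ H.T + R)
P_int = (np.eye(2) - K_i @ H) @ P_indep
```

Here `P_fused[0, 0] < P_int[0, 0]`, and the entropy of the fused covariance is lower, which is exactly the scalar information gain the abstract measures with Shannon's entropy.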
In order to extend the information contained in some nonlinearly mapped measurements, we create additional measurements out of the actually measured data by using the laws of physics. These auxiliary measurements provide extra information and thus lead to more exact estimates. In the state space setting, the information contained in the measurements can be quantified by Fisher's information matrix, which can be obtained from the Cramer-Rao inequality. The distribution function of the measurement noise is assumed to be Gaussian, and the observation model is given by y(k) = h[x(k), k] + v(k). We will show that additional measurements y_a(k) can provide additional information when they are nonlinear combinations of the actually measured variables. In this case their distribution functions do not remain Gaussian and require an approximation, because a Kalman filter only deals with second-order moments. This approximation can be achieved by means of minimum discrimination information. Thus we have created an extended measurement vector which consists of the actually measured data and additional pseudoredundant data. We will then show that this extended measurement vector does contain extra information and can therefore be used to obtain more exact state estimates. The improvement will be shown in an application of an extended linearized Kalman filter to amplitude and phase modulation, where the measurements of the real and imaginary parts of the signal are extended by a pseudoredundant phase measurement obtained from the available measurements by a tan⁻¹ operation.
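The construction of the extended measurement vector in the amplitude/phase-modulation example can be sketched as below: the pseudoredundant phase is derived from the real and imaginary measurements by the tan⁻¹ operation. The function name and vector layout are assumptions for illustration; the moment-matching of the non-Gaussian phase noise mentioned in the abstract is not shown.

```python
import math

def extend_measurements(re, im):
    """Augment real/imaginary measurements with a pseudoredundant phase.

    atan2(im, re) is a nonlinear combination of the measured variables;
    its noise is no longer Gaussian, so a filter using it needs an
    approximate second-moment model (e.g. via minimum discrimination
    information, as the abstract states).
    """
    phase = math.atan2(im, re)  # quadrant-aware tan^-1(im / re)
    return [re, im, phase]
```

Feeding this three-component vector to an extended linearized Kalman filter is what the abstract means by exploiting pseudoredundant data.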
In order to minimize exhaust emissions, modern spark ignition engines have a three-way catalyst and electronically controlled fuel injection. To achieve high catalyst efficiency, the air-fuel ratio must be precisely controlled under all operating conditions of the engine. The electronic fuel injection is often controlled by calculating the amount of fuel to be injected from the estimated air mass in the cylinder. This paper presents a data fusion method in which all sensor signals related to the mass of air in the cylinder are combined using an extended Kalman filter. The use of the redundant information contained in the different sensor signals (throttle air flow, throttle angle, manifold absolute pressure) results in a precise, reliable, and sensor-fault-tolerant algorithm. The use of a manifold pulsation disturbance model results in less phase error in the estimated air mass. Without the disturbance model, the phase error increases up to 90 degrees crank angle, particularly at high engine speeds and loads.
A new sensor has been developed for pantyhose inspection. Unlike a first complete inspection machine devoted to post-manufacturing control of the whole panty, this sensor will be directly integrated on currently existing manufacturing machines, and will exploit the advantages of miniaturization to provide an intelligent, compact, and very inexpensive product that can be integrated without requiring any modification of the host machines. The sensor part was designed to achieve close-range acquisition, and various solutions have been explored to maintain an adequate depth of field. The illumination source will be integrated in the device. The processing part will include correction facilities and electronic processing. Finally, high-level information will be output in order to interface directly with the manufacturing machine's automation controller.
In autonomous manufacturing systems, an important task is to recognize different 3D objects, e.g. workpieces. We have exploited the fact that different objects scatter the laser light from a hand scanner differently. To recognize the differences in the light scattered from the objects, we have used the Fourier transformation. It turns out that by using the Fourier transformation we are able to recognize different objects, and that this method is a rather simple and inexpensive way to complement other object recognition systems, e.g. in robotic applications.
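The Fourier-based recognition idea can be sketched as comparing low-order magnitude spectra of scattered-light intensity profiles. The profile model, signature length, and nearest-neighbour matching below are assumptions for illustration; the abstract does not specify the exact classification rule.

```python
import numpy as np

def spectral_signature(profile, n_coeffs=8):
    """Low-order Fourier magnitude signature of a scattered-light profile.

    Magnitudes are invariant to circular shifts, so the same object
    displaced along the scan line yields the same signature.
    """
    spec = np.abs(np.fft.rfft(np.asarray(profile, float)))
    spec = spec / (np.linalg.norm(spec) + 1e-12)  # scale-normalize
    return spec[:n_coeffs]

def closest_object(profile, references):
    """Return the key of the reference whose signature is nearest (L2)."""
    sig = spectral_signature(profile)
    return min(references, key=lambda k: np.linalg.norm(sig - references[k]))
```

Shift invariance is the practical appeal here: an object need not be positioned repeatably under the hand scanner for its spectrum to match the stored reference.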
We present the results of theoretical studies of optical system spatial-range selectivity and show that it is specified by a complex design parameter and the tuning distance. We perform analytical calculations of the sky background radiation passing through the receiving optical system and consider the possibilities of spatial-range-resolvable control of atmospheric optical parameters and the reconstruction of their range distribution by passive optical measurements.
We report a birefringent fiber remote strain sensor based on the FMCW technique, consisting simply of two pieces of elliptical-core single-mode birefringent fiber. The first piece is used as the lead-in/lead-out fiber, which is insensitive to the environment; the second is used as a strain-sensing fiber probe, which is sensitive to its own strain with 2 microstrain resolution. The advantages of the sensor, such as its large dynamic measurement range, long environment-insensitive lead-in/lead-out fiber, and long strain-sensing probe, are demonstrated in this experiment.
This paper describes a new Sagnac heterodyne interferometric birefringent fiber strain sensor which is based on the frequency-modulated continuous-wave technique and consists of a single 100-meter birefringent fiber ring. The strain variation of the fiber ring can be measured with 4 microstrain resolution and a 5000 microstrain dynamic measurement range. Other advantages of the sensor, such as its simple configuration, simple signal processing, and long sensing fiber, are demonstrated in this experiment.
The unwrapping of a 2D phase image in a reasonable time is still an unsolved problem. Many algorithms have been proposed in the past, but all of them fail to some extent: either they do not meet the requirements for accuracy, or the amount of time required for execution makes them unacceptable given the time constraints involved in certain kinds of problems. Towers, amongst others, introduced the concept of tile-based unwrapping, obtaining good results in a reasonable amount of time. Starting from the idea of dividing the image into tiles, an algorithm for 2D phase unwrapping that achieves high precision in an acceptable time is proposed.
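The elementary operation underlying such algorithms is one-dimensional (Itoh) unwrapping: add multiples of 2π so that successive phase differences fall within (-π, π]. Tile-based 2D schemes apply this inside small tiles and then reconcile the tile offsets; only the 1D building block is sketched here.

```python
import numpy as np

def unwrap_1d(phase):
    """Itoh's 1D phase unwrapping.

    Wherever a sample-to-sample difference exceeds pi in magnitude,
    a 2*pi jump is assumed and a cumulative correction is applied.
    Valid when the true phase changes by less than pi per sample.
    """
    phase = np.asarray(phase, float)
    d = np.diff(phase)
    jumps = np.round(d / (2 * np.pi))          # number of 2*pi wraps per step
    corr = np.concatenate([[0.0], np.cumsum(-jumps) * 2 * np.pi])
    return phase + corr
```

The 2D difficulty the abstract refers to arises because these 1D corrections must be made consistent along every path through the image, which is where tiling trades a small accuracy risk at tile borders for a large reduction in run time.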
The paper introduces an optoelectronic/image processing module, OIMP, which enables more convenient implementation of full-field optical testing methods in industry. The OIMP consists of two miniature CCD cameras and an optical wavefront modification system which recombines the beams produced by an opto-mechanical measurement system and images fringe patterns on the CCD matrices. The module makes possible the simultaneous registration of three monochromatic images as the R, G, B components of a color video signal by means of a single frame grabber or a VCR on video tape. This enables convenient and inexpensive storage of large quantities of data, which may be analyzed by the spatial-carrier phase-shifting method of automatic fringe pattern analysis. The usefulness of the OIMP is shown by two examples: simultaneous analysis of u and v in-plane displacements in a grating interferometry system, and complex shape determination by fringe projection systems.
High precision load estimation during strong transients is still one of the challenges that have to be solved in modern engine control units. One of the new methods that address this requirement is the extended Kalman filter (EKF), as described in 'Data fusing for optimization of spark ignition engine control' by M. Scherer. As this algorithm requires equidistant sampling of one sample every 45 degrees crank angle, i.e. a sampling time of 1.25 ms at an engine speed of 6000 rpm, the hardware platform as well as the software implementation must meet strict requirements in order to achieve on-line estimation and data processing. Furthermore, the crank-angle-synchronous manifold pressure and air mass flow pulsations must be considered in the modelling of the EKF. As the phase and amplitude of these pulsations are not constant over all operating points, methods to adjust them, e.g. considering the pulsation in the system model or making use of an on-line FFT, must be applied in order to avoid large modelling errors. The matrix operations encountered in the algorithm are the most time-consuming, so much attention must be paid to efficient software development. This paper presents methods required for an on-line EKF implementation on a specially configured hardware platform and a dynamic elimination of pulsations with varying phase and amplitude.
Connected component labeling is a fundamental task in intermediate-level vision. Current research points to parallel architectures as an excellent solution to this problem. In order to exploit a global approach while optimizing the electronic structure and minimizing data propagation, a parallel architecture dedicated to image component labeling is envisaged. For an n by n image, the optimized architecture merely requires n/2-1 PEs and n²/4 CAM (content-addressable memory) modules through a 4-pixel grouping technique. The global communication is reconfigurable and ensured in O(log n) units of propagation time by a tree structure of switches. Moreover, a PE permits sequential processing in its memory array, perfectly adapted to labeling from any interlaced-mode video signal. In this mode, the architecture permits labeling in one image scan while simultaneously loading the image. The proposed algorithm, based on a divide-and-conquer technique, leads to a complexity of O(n log n) with a small constant multiplicative factor. We discuss the simulation results, the possibility of an FPGA implementation, and further development of this architecture.
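For readers unfamiliar with the task the architecture accelerates, a classical sequential two-pass labeling with union-find is sketched below. This is a reference formulation of the problem, not the paper's divide-and-conquer parallel algorithm.

```python
def label_components(img):
    """Two-pass 4-connected component labeling (sequential reference)."""
    h, w = len(img), len(img[0])
    parent = {}

    def find(a):
        # Path-halving find for the union-find label equivalences.
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    labels = [[0] * w for _ in range(h)]
    nxt = 1
    for y in range(h):
        for x in range(w):
            if not img[y][x]:
                continue
            up = labels[y - 1][x] if y else 0
            left = labels[y][x - 1] if x else 0
            if up and left:
                labels[y][x] = left
                parent[find(up)] = find(left)  # merge the two label sets
            elif up or left:
                labels[y][x] = up or left
            else:
                parent[nxt] = nxt              # start a new provisional label
                labels[y][x] = nxt
                nxt += 1
    # Second pass: replace provisional labels by their set representative.
    for y in range(h):
        for x in range(w):
            if labels[y][x]:
                labels[y][x] = find(labels[y][x])
    return labels
```

The merge step is where the sequential version spends its time on large images, and it is precisely this global equivalence resolution that the tree of switches performs in O(log n) propagation time.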
Industrial vision systems are often hindered by system irregularities such as vibration, product shift, and non-constant speed (of either the conveyor or the product), which set the criteria for selecting a color detector for machine vision systems. A color line-scan camera offers a new tool for industrial process and quality control applications, providing the benefits of traditional monochrome technology for accurate dimension, shape, and texture detection, with the addition of a new, intensity-independent dimension: color. In some cases color is the only reliable feature. Color separation is the most essential part of a good color camera. If the three different colors cannot be separated successfully from each other, the system can only detect clear and obvious color differences. Separating the image into different spectral bandwidths increases the intensity dynamic range for each channel compared to a monochrome image. The CCD camera has to be able to measure colors accurately even at low light levels, since correcting erroneous data caused by the camera's poor dynamic response is almost impossible. Often more digitized levels per pixel are needed in a color image than in a monochrome image. Good dynamic range and linear response are necessary for color machine vision. The importance of these features becomes even greater when the image is converted to another color space. Some information will always be lost when converting integer data to another form. If the numbers used for the conversion are too small, the calculation can have an error large enough to make the system fail. Color machine vision has shown a strong uptrend in use within the past few years, as the introduction of new camera and scanner technologies itself underscores. In the future, the movement from monochrome imaging to color will hasten, as machine vision system users demand more knowledge about their product stream.
The ISATEC parallel computer is the first implementation of an instruction systolic array for the commercial market. The goal of integrating 1024 processors on an add-on board for PCs has been achieved by the development of a low-power/low-area processor architecture whose instruction set is particularly suited to image processing applications. The paper introduces the concept of the instruction systolic array, its implementation, and some application examples in the field of image processing.
The network-based imaging system, designed for platform independence, uses standard hardware and software to provide optimal flexibility, compatibility, longevity, and reliability. The alignment system described in this article was one of the first and most indispensable tools implemented in the development of the OMEGA upgrade project and has been in continuous daily operation for several years.