In this paper, image processing techniques are applied to satellite radar image data. An interferometric phase image is generated from two complex Synthetic Aperture Radar (SAR) images. Fringe lines are the patterns that characterize an interferometric phase image; clear and continuous fringe lines make further processing easier and more efficient. An automatic algorithm for extracting continuous fringe lines from the interferometric phase image is presented, and a comparison with one existing method from the literature is made. Experimental results of the algorithm are shown for both simulated data and a real ERS-1/2 SAR interferometric phase image.
Based on multi-resolution wavelet analysis, three thresholding methods are studied in this paper: soft thresholding, hard thresholding, and a proposed high-pass thresholding. The high-pass thresholding method is a new, effective algorithm for suppressing speckle in synthetic aperture radar (SAR) images. It suppresses speckle by applying a high-pass function to the amplitude of each detail image of the wavelet subspaces. The threshold of the function is novel: it is computed from the maximal amplitude and the decomposition level p of the detail image. Application to SAR images shows that wavelet-domain filtering methods offer a good tradeoff between speckle removal and edge preservation, and that the new high-pass thresholding method performs better in both speckle suppression and preservation of detail information, and hence may provide better detection performance for SAR-based recognition.
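For reference, the standard soft- and hard-thresholding rules applied to a wavelet detail subband can be sketched as below. This is a generic illustration, not the authors' high-pass rule, whose threshold depends on the maximal amplitude and decomposition level; the function names are illustrative.

```python
def soft_threshold(c, t):
    """Shrink a coefficient's magnitude toward zero by t; small ones vanish."""
    if abs(c) <= t:
        return 0.0
    return (abs(c) - t) * (1.0 if c > 0 else -1.0)

def hard_threshold(c, t):
    """Zero out coefficients whose magnitude falls below t; keep the rest."""
    return 0.0 if abs(c) < t else c

def threshold_subband(coeffs, t, rule):
    """Apply a thresholding rule to every coefficient of a detail subband."""
    return [rule(c, t) for c in coeffs]
```

Speckle mostly appears as small detail coefficients, which both rules suppress; soft thresholding additionally shrinks the surviving (edge) coefficients, which is why the choice of rule trades off speckle removal against edge preservation.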
This paper discusses a new algorithm for objective image quality measurement. Owing to the similarities between the discrete wavelet transform (DWT) and the human visual system (HVS), it is possible to exploit this characteristic to apply visual weighting to the peak mean square error (PMSE) criterion. We therefore propose a new algorithm, the wavelet and weighted mean square error (WWMSE). Compared with the PMSE criterion, the new algorithm agrees closely with perceived image quality and effectively overcomes the defects of the PMSE criterion.
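The general form of a visually weighted squared-error metric can be sketched as follows. The subband weights that make WWMSE track the HVS are the paper's contribution and are not reproduced here; the weights and names below are placeholders.

```python
def weighted_mse(orig, dist, weights):
    """Mean square error with a per-coefficient perceptual weight.

    orig/dist are flat lists of (e.g. wavelet-subband) coefficients,
    weights gives each coefficient's visual importance.
    """
    num = sum(w * (a - b) ** 2 for a, b, w in zip(orig, dist, weights))
    return num / sum(weights)
```

With all weights equal this reduces to the ordinary MSE; emphasizing perceptually important subbands is what lets the weighted version agree better with subjective quality.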
In this paper, based on their multi-fractal characteristics, non-Gaussian signals are studied in two respects: modeling and pattern recognition. First, we investigate the multi-fractal wavelet model (MWM) of non-Gaussian signals. Three methods describe non-Gaussian signals through multi-fractal and multi-scale theory in the wavelet domain: the beta-MWM, the point-mass (pm)-MWM, and the beta-pm-MWM. Second, we use three kinds of underwater acoustic signals to test the approximation performance of the mixture model and the ability of the classifier.
Image processing and pattern recognition have been successfully applied in many textile-related areas; for example, they have been used for defect detection in cotton fibers and various fabrics. In this work, the application of image processing to animal fiber classification is discussed. Integrated with artificial neural networks, image processing provides a useful tool for solving complex problems in textile technology. Three approaches are used here for fiber classification and pattern recognition: feature extraction with image processing, pattern recognition and classification with artificial neural networks, and feature recognition and classification with artificial neural networks. All of them yield satisfactory results, giving a high level of classification accuracy.
A new method is proposed in this article for cases where large-scale topographic maps and ground control points (GCPs) are lacking: small-scale topographic maps and multi-resolution remote sensing images are used to perform image rectification in order of increasing image resolution. Extraction of linear features instead of point features to solve for orientation elements, together with a much simpler implementation of least-squares matching, is also presented.
Finding principal curves in an image is an important low-level operation in computer vision and pattern recognition. Principal curves are those curves in an image that represent boundaries or contours of objects of interest. In general, a principal curve should be smooth, subject to a length constraint, and allow either smooth or sharp turning. In this paper, we present a method that can efficiently detect principal curves in complicated map images. A feature image, obtained from edge detection of an intensity image or from a thinning operation on a pictorial map image, is first converted to a graph representation. In the graph domain, principal curve detection is performed to identify useful image features. Our algorithm for principal curve detection uses shortest-path and directional-deviation schemes, which prove very efficient on real graph images.
Computerizing paper blueprint images is an important task in architecture, platting, and mechanical engineering. In this paper, the intrinsic characteristics of blueprints are carefully examined, and a mathematical method is developed to measure the quality of a digitized blueprint image. Based on primary identification of lines and blocks and on local-area soft-boundary brightness and contrast equalization, a new image preprocessing and enhancement algorithm is developed. Compared with traditional processing methods, it greatly reduces grain noise, enhances line and block information, and delivers a high-quality image to the vectorization phase.
The POCS (projection onto convex sets) method was originally developed in the 1960s and is applied in many fields, such as image processing, signal recovery, and optics. It allows us to incorporate into an iteration scheme the available information about the experimental data and measurement error, as well as a priori constraints based on physical reasoning. It is important to note that the POCS method does not lead to a unique 'optimum' solution. The step following projection is to find an optimal method within the 'solution space'. Based on the synergetic theory founded by Haken in the 1970s, this optimization problem can be resolved by a synergetic pattern recognition procedure. In this paper, we propose a synergetic pattern recognition approach to accomplish the optimal processing.
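The core POCS iteration, alternating projections onto convex constraint sets, can be shown with a deliberately tiny scalar example where each set is an interval. This is only an illustration of why POCS yields a feasible rather than a unique solution; real uses project images onto sets defined by data fidelity and physical constraints.

```python
def project_interval(x, lo, hi):
    """Projection of a scalar onto the convex set [lo, hi]."""
    return min(max(x, lo), hi)

def pocs(x, constraints, iters=50):
    """Alternately project onto each convex set.

    The iterate converges into the intersection of the sets; which point
    of the intersection it reaches depends on the starting value, which
    is exactly the non-uniqueness the synergetic step must resolve.
    """
    for _ in range(iters):
        for lo, hi in constraints:
            x = project_interval(x, lo, hi)
    return x
```

Starting above the feasible interval [3, 5] lands on 5; starting below lands on 3: two different, equally feasible POCS solutions.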
This paper describes the use of the semivariogram as a parameter for image comparison, a commonly used method in content-based image retrieval. The authors first review applications of spatial statistics to image and signal processing and the recent literature on image comparison, with emphasis on global image structure description and distance-based image retrieval techniques. The central difficulty in this field is the definition of image similarity. A new parameter based on the semivariogram is put forward by the authors. Bearing in mind that the semivariogram describes not only the global structure of a data set but also its local continuity, it is shown in the paper that the semivariogram is suitable for global image comparison and can also be used to reveal local features of the image. Based on this property, a new index of image similarity is constructed and a practical program using it is developed. Applied to a practical problem, the approach shows the following merits: (a) high sensitivity to structural differences between images, (b) low computational complexity, and (c) high robustness to lighting conditions.
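The empirical semivariogram underlying the proposed index has a simple standard estimator, sketched here for a 1-D profile (a 2-D image version sums over pixel pairs at a given displacement; this is the textbook formula, not the authors' similarity index):

```python
def semivariogram(values, lag):
    """Empirical semivariogram gamma(h) = (1 / (2N)) * sum (z[i+h] - z[i])^2
    over the N pairs of samples separated by `lag` in a 1-D profile."""
    pairs = [(values[i], values[i + lag]) for i in range(len(values) - lag)]
    return sum((b - a) ** 2 for a, b in pairs) / (2.0 * len(pairs))
```

Small lags probe local continuity while large lags probe global structure, which is why a vector of semivariogram values over several lags can serve both roles the paper claims.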
This paper analyzes image geometric distortion error with a mathematical model, along with its effect on image matching and registration. Two geometric correction methods are investigated: perspective transformation and affine transformation. A method is presented for estimating geometric distortion parameters with the two transformations, together with a geometric correction method for the distorted image using the estimated parameters. Experiments show that both methods are practical for image matching. In addition, the paper presents some geometric correction principles.
Based on a fuzzy-function model and the distance from objects to the lens, a new 3D adaptive stratified filter for volume images is developed. The volume image is split into several non-intersecting sub-level volume images along the z-axis; each sub-level volume image is regarded as a 2D image and processed by 2D filtering. The problems of boundary division and anti-aliasing are solved by introducing an alpha value. The efficiency of stratified filtering is also improved, and its error is kept below a given constant P. Experimental results show that this is an effective method.
Metamorphosis, or morphing, is the process of continuously transforming one object into another; it is popular in computer animation, industrial design, and growth simulation. In this paper, a novel metamorphosis approach is presented for computing continuous shape transformations between polyhedral objects. Metamorphosis is achieved by decomposing the two objects into sets of individual convex sub-objects and constructing a mapping relationship between the subsets; this method can solve the metamorphosis problem for two non-homotopic objects, including concave objects and objects with holes. The results of object metamorphosis are also discussed. Experiments show that the method generates natural, high-quality metamorphosis results with simple computation. It can also be used for font composition and for automatic interpolation between two keyframes in 2D and 3D computer animation.
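Once a vertex correspondence between the two shapes has been constructed (the hard part the paper solves via convex decomposition), the in-between shapes are typically produced by interpolating corresponding vertices. A minimal 2-D sketch of that final step, with illustrative names:

```python
def morph(src, dst, t):
    """Linearly interpolate corresponding 2-D vertices at parameter t in [0, 1].

    src and dst are lists of (x, y) vertices in established correspondence;
    t = 0 reproduces src, t = 1 reproduces dst.
    """
    return [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
            for (x0, y0), (x1, y1) in zip(src, dst)]
```

Keyframe interpolation in animation uses the same mechanism: the keyframes play the roles of src and dst, and t advances with time.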
Through the path-breaking work of Zhabotinsky and a 1968 conference on biological and biochemical oscillators in Prague that featured talks and demonstrations on chemical oscillators and patterns, knowledge of what came to be called the Belousov-Zhabotinsky (BZ) reaction and its exotic behavior began to spread throughout the world. These remarkable periodic oscillation phenomena have attracted more and more scientific interest. In this paper, a BZ reaction with an image illuminated on the media surface was designed, and periodic changes of the image occurred: image smoothing (blurring) and image restoration took place alternately, until finally the image was blurred completely. A computational simulation of this reaction was carried out and shows that it is a powerful implementation for image processing because of its parallelism and efficiency; the expected image information can be obtained from a single reaction process.
This paper proposes an IP hierarchy based on a 3 x 3 convolution template to construct large-scale image convolution architectures, such as 6 x 6, 9 x 9, or larger; it helps speed up the design of image-processing hardware systems. The key hierarchies of the 3 x 3 image convolution consist of parallel convolutions and pipelined multipliers. The top model is designed in structural VHDL and all sub-models in RTL VHDL; the system is divided into modules, which are synthesized independently and then connected. Cadence and Synopsys are used for VHDL simulation and synthesis, respectively, to obtain the best results.
In this paper, we discuss a method that performs image restoration from an observed image with space-variant degradation. It is assumed that the degradation parameter is unknown, though the type of degradation function is known beforehand. To restore the image from the observed one in this case, we propose new evaluation functions for estimating the degradation parameter. We define the image entropy and the Laplacian entropy as evaluation functions based on average values, and a normalized entropy that unifies the two. Because the degradation is space-variant, each partial evaluation area is small, and the degradation parameter changes gradually from area to area. In our method, the parameter value that maximizes the normalized entropy is adopted as the estimated degradation parameter. In the current implementation, partial evaluation areas are set sequentially over all pixels of the observed image, and the image is restored using the degradation parameters estimated by these evaluation functions. Simulation experiments have validated the quality of the restored images, and we have also been treating degraded images containing impulse noise.
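The image-entropy part of such an evaluation function is the ordinary Shannon entropy of an intensity histogram; a minimal sketch (the authors' Laplacian entropy and the normalization unifying the two are not reproduced, and the function name is illustrative):

```python
import math

def histogram_entropy(hist):
    """Shannon entropy (in bits) of an intensity histogram.

    hist[i] is the count of pixels with intensity i; empty bins are skipped
    since 0 * log 0 is taken as 0.
    """
    total = float(sum(hist))
    probs = [h / total for h in hist if h > 0]
    return -sum(p * math.log2(p) for p in probs)
```

Sweeping the candidate degradation parameter and keeping the value that maximizes such an entropy-based score over each local evaluation area is the selection rule the paper describes.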
The uniqueness of fingerprints has long been used for identification. Automatic fingerprint identification systems (AFIS) depend on minutiae to identify a person, and minutiae extraction relies heavily on the quality of the fingerprint image. This paper presents a novel fingerprint enhancement scheme based on a Markov random field (MRF). The MRF model is applied to capture local statistical regularities of ridges, and curve accumulation based on the MRF model is then used to enhance the fingerprint. This procedure is repeated until a statistical difference between fingerprint ridges and valleys is obtained (accumulation). Finally, adaptive binarization is performed. Experimental results show that this method can effectively improve the clarity of ridge and valley structures in input fingerprint images while preserving the minutiae very well.
Building multi-DSP systems is an effective way to increase processing capability. In this paper, a VXI-based dual-bus multi-DSP real-time image processing system is presented. With VXI, the system becomes modular and easy to modify and extend. At the same time, dedicated bidirectional high-speed bus groups are adopted to overcome the efficiency loss caused by the bandwidth limit of the VXIbus. A performance comparison of this system with other implementations is provided at the end of the article.
We have developed an FFT-based algorithm for tree-ring measurement. First we apply a circular median filter to the ring image. We then divide the image into lines and process each one individually; each line is filtered and binarized in windows. The dimensions of the filter and window are related by a formula and vary along the line: the window dimension is based on the period of the wave that best fits each section of the line.
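Finding "the period of the wave that best fits" a section amounts to locating the strongest frequency component; a pure-Python DFT sketch of that step (illustrative only, not the authors' windowing formula):

```python
import cmath

def dominant_period(signal):
    """Return the period, in samples, of the strongest nonzero-frequency
    DFT component of the signal (a naive O(n^2) DFT for clarity)."""
    n = len(signal)
    mags = []
    for k in range(1, n // 2 + 1):
        coeff = sum(signal[i] * cmath.exp(-2j * cmath.pi * k * i / n)
                    for i in range(n))
        mags.append((abs(coeff), k))
    best_k = max(mags)[1]
    return n / best_k
```

For a ring profile, the dominant period corresponds to the local ring spacing, which is what sets the local window dimension.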
This paper introduces a gesture recognition method based on the hidden Markov model (HMM). Because of the HMM's particular strength in handling temporal sequences, we use it to match an input gesture sequence against typical gesture sequences. This not only achieves good accuracy and fault tolerance but can also determine the beginning and end of gestures within a series of input images. Using skin-color distribution as the gesture feature yields robust results.
In this paper, we present a method that combines k-means and watershed segmentation techniques to perform image segmentation and edge detection. First, we use k-means to examine each pixel in the image and assign it to the cluster at minimum distance, yielding a primary segmentation of the image into different intensity regions. We then apply a watershed transformation to that image, in five steps: first, compute the gradient of the segmented image; second, divide the image into markers; third, check the marker image for zero points (watershed lines) and delete the watershed lines created by the watershed algorithm; fourth, create the region adjacency graph (RAG) and the region adjacency boundary (RAB) between pairs of regions from the marker image; and fifth, merge regions according to region average intensity and edge strength (thresholds T1, T2), so that all regions with the same merged label belong to one region. Our approach was tested on remote sensing images and brain MR medical images, and the final segmentation gives one closed boundary per actual region in the image.
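The k-means stage described above, assigning each pixel to the nearest cluster center and re-estimating the centers, can be sketched in one dimension over pixel intensities (an illustrative sketch, not the authors' implementation):

```python
def kmeans_1d(values, centers, iters=20):
    """Lloyd's k-means on scalar intensities.

    Each value is assigned to its nearest center (minimum distance), then
    every center is recomputed as the mean of its cluster; empty clusters
    keep their previous center.
    """
    centers = list(centers)
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for v in values:
            j = min(range(len(centers)), key=lambda k: abs(v - centers[k]))
            clusters[j].append(v)
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers
```

The converged centers define the intensity regions of the primary segmentation, which the watershed steps then refine into closed boundaries.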
In this paper, we present an adaptive contrast enhancement (ACE) method in which the contrast gain is determined by mapping the local standard deviation (LSD) histogram of an image to a Gaussian distribution function. The contrast gain is nonlinearly adjusted to avoid noise over-enhancement and ringing artifacts while improving detail contrast with low computational load. The effectiveness of our method is demonstrated on radiological images.
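The ACE family of methods amplifies each pixel's deviation from its local mean by a gain that depends on the local standard deviation. The sketch below shows that generic structure with an illustrative gain shape (larger gain in flat areas, tapering off as LSD grows); the paper's Gaussian histogram mapping and its parameters are not reproduced.

```python
def contrast_gain(lsd, g_max=3.0, g_min=1.0, sigma_ref=20.0):
    """Illustrative nonlinear gain: high where local contrast (lsd) is low,
    decaying toward g_min as lsd grows, so strong detail and noise-prone
    high-variance regions are not over-amplified. Parameter values are
    placeholders, not the paper's."""
    return g_min + (g_max - g_min) / (1.0 + lsd / sigma_ref)

def enhance_pixel(x, local_mean, lsd):
    """Classic ACE update: amplify the deviation from the local mean."""
    return local_mean + contrast_gain(lsd) * (x - local_mean)
```

Replacing `contrast_gain` with a table derived from mapping the LSD histogram onto a Gaussian gives the adaptive behavior the abstract describes.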
Binarization of the license plate image is one of the key techniques in a car license plate recognition (CLPR) system, and its results directly influence the accuracy of character segmentation and identification. In this paper, after analyzing the limitations of Otsu's method and Bernsen's method, a practical license plate binarization method based on histogram analysis is proposed. The method uses the fact that the character area always occupies a smaller percentage of the plate than the background to distinguish the plate style. A global thresholding method, Doyle's method, is then used to threshold the plate image. Over a test set of more than 8,000 plate images, the accuracy is nearly 99%; only images that are badly polluted or of very low resolution cannot be binarized correctly. The experimental and field-test results show that our method has higher accuracy, higher speed, and better binarization quality. The method has been applied successfully in our CLPR system.
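Otsu's method, whose limitations the paper analyzes, picks the global threshold maximizing between-class variance of the histogram; a compact sketch of the baseline for comparison (not the proposed Doyle-based method):

```python
def otsu_threshold(hist):
    """Otsu's method: return the threshold t maximizing between-class
    variance, where pixels with intensity <= t form class 0.

    hist[i] is the count of pixels with intensity i.
    """
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = 0.0   # class-0 pixel count so far
    sum0 = 0.0  # class-0 intensity sum so far
    for t in range(len(hist)):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0
        m1 = (sum_all - sum0) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

Because Otsu assumes the two classes have comparable populations, it degrades on plates where characters cover only a small fraction of the area, which motivates the style-aware histogram analysis the paper proposes.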
A new pretreatment method and a modified Deutsch thinning algorithm for interference patterns are presented. Different pretreatments are described based on the different characteristics of fringe patterns. The problems of the Deutsch algorithm are analyzed, and the modified algorithm is then proposed. Several experimental results are presented to support the validity of the method.
In this paper, a new combinatorial image enhancement algorithm is developed based on the statistical characteristics of infrared images. Computer simulation experiments show that the new algorithm solves the problems of low contrast, noise, and blurred edges in infrared images quite well. Results are illustrated with a small, representative set of images taken under different conditions. In addition, the new enhancement algorithm has been implemented in hardware, and the processed images demonstrate the effectiveness of this image enhancement system. The delay of the whole system is at the microsecond level, which meets the need for real-time infrared image enhancement.
In this paper, we introduce several noise removal techniques in the wavelet domain and analyze the properties of bilateral filtering. We then propose bilateral filtering in the wavelet domain, which exploits the time-frequency localization and multiresolution properties of wavelets. Finally, we demonstrate effective noise removal and sharpening of object boundaries and detailed structures by applying this technique to different images.
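The bilateral filter itself, before moving it into the wavelet domain, can be sketched in one dimension (a generic illustration with placeholder parameters, not the paper's wavelet-domain version):

```python
import math

def bilateral_1d(signal, radius=2, sigma_s=1.0, sigma_r=10.0):
    """Edge-preserving smoothing: each sample becomes a weighted average of
    its neighbors, with weights decaying in both spatial distance (sigma_s)
    and intensity difference (sigma_r), so averaging does not cross edges."""
    out = []
    for i, v in enumerate(signal):
        wsum = vsum = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            w = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)
                         - ((signal[j] - v) ** 2) / (2 * sigma_r ** 2))
            wsum += w
            vsum += w * signal[j]
        out.append(vsum / wsum)
    return out
```

On a step edge the range weight suppresses contributions from across the step, which is the property that carries over usefully to wavelet coefficients.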
Based on fractal theory, this paper uses SEM images to investigate the texture characteristics of machined surfaces directly from the surfaces' own three-dimensional information. For the first time, the discrete fractal Brownian random field (DFBRF) model is introduced to simulate the surface image, thus mapping the grayscale space of the surface image into the surface's fractal parameter space. The paper discusses the relation between the fractal parameters of the surface image and the characteristics of the surface texture.
The block discrete cosine transform (BDCT) is the most widely used technique for compressing both still and moving images. A major problem with BDCT techniques is that decoded images, especially at low bit rates, exhibit visually annoying blocking artifacts. In this paper, based on Mallat's multiscale edge detection, we propose an efficient deblocking algorithm to further improve coding performance. The advantage of our algorithm is that it efficiently preserves texture structure in the decompressed images. Our method is similar to Z. Xiong's, but Xiong's method is not suitable for images with a large proportion of texture, for instance the Barbara image. The difference is that our method adopts a new thresholding scheme for multi-scale edge detection instead of exploiting cross-scale correlation. Numerical experiments show that our scheme not only outperforms Xiong's for various images at the same computational complexity, but also preserves texture structure in the decompressed images. Compared with the best iterative method reported in the literature (POCS), our algorithm achieves the same peak signal-to-noise ratio (PSNR) improvement and produces visually very pleasing images as well.
For the plankton recognition system, we propose a new-generation system based on a parallel high-performance DSP architecture; its advantages are that the algorithms are easy to modify and the system can be adapted to other applications. We evaluate the performance of low-level, intermediate-level, and high-level algorithms on the DSP. This paper focuses on the new architectural concepts of the plankton recognition system and on algorithm optimization for the TMS320C6201 DSP.
Fingerprints are usually compared by matching features such as ridge bifurcations and endings. However, when features are extracted from a thinned fingerprint, pseudo-features are usually introduced as well. In this paper, we propose an approach in which points on a fingerprint ridge are traced and recorded in a linked list using a 3 x 3 window. Based on this approach, an algorithm is developed to remove pseudo-features from a thinned fingerprint image. The algorithm was run on a library of thinned fingerprint images containing pseudo-structures such as spurs, bridges, and circles, and the pseudo-features were correctly and completely removed with high efficiency.
Proc. SPIE 4552: Kernel adaptive filter (SRSSHF) and quality improvement method for hyperspectral imaging based on spectral dimension recognition and spatial dimension smoothing according to CSAM (20 September 2001); doi: 10.1117/12.441522
Exploiting the distinctive features of hyperspectral imagery and the Correlation Simulating Analysis Model (CSAM), a simple but efficient kernel-adaptive filter (SRSSHF) designed specifically for hyperspectral images is proposed in this paper. It is based not on traditional sigma (standard deviation) statistics in the spatial dimension, but on valid-pixel judgment in the spectral dimension and an intelligent shift convolution in the spatial dimensions; its criterion thus rests on the intrinsic properties of objects, fully exploiting the spectral information that hyperspectral data afford. The filter is adaptive, and its kernel size theoretically has little influence on the filtering result. Because it concentrates on the features of the signal itself rather than on speckle noise, its criterion lies in the spectral dimension and multiple iterations are possible, so no tradeoff with spatial texture is necessary. It was applied to filter and improve the quality of PHI hyperspectral images acquired in Changzhou, China and in Nagano, Japan; an iteration of more than 200 looks and a comparison with other typical adaptive filters were also carried out. The results show that SRSSHF can smooth the whole interior of a homogeneous area while ideally preserving, and even enhancing, edges. Given these good results, this paper suggests that SRSSHF based on CSAM is a relatively ideal filter for HRS images. Some other features of SRSSHF are also discussed.
This paper discusses wavelet approaches for multiresolution image matching. Using the hierarchical structure of the wavelet transform, matching proceeds from the coarsest level to the finest, refining and condensing the matching field. Two types of special wavelet transforms are explored and compared. For vector-valued wavelets, an improved multiresolution matching model is constructed to match successive images when only rotation and translation exist between them. For complex-valued wavelets, a modified algorithm is presented to match images related by a projective transform between adjacent frames. Experiments on two groups of image sequences show that both vector-valued and complex-valued wavelets are usable for image matching.
Image matching is the most important task in a digital photogrammetry system (DPS). In the past, grid-based image matching was adopted in much DPS software to generate DEMs of regular grids; great success has been achieved, but many problems remain. In this paper, a new matching scheme is put forward in which image features such as points, lines (edges), and regions are organized in a constrained TIN on the left image. For every vertex of the TIN, possible matching candidates are searched, and finally combinatorial optimization is performed to determine the true matched point for each vertex.
This paper addresses the problem of matching two images: an object image and a new deformed image. In general, the two images differ by rotation, translation, scaling, noise, or occlusion, and a common task is to develop matching algorithms insensitive to such distortions. Here we inherit and extend the spirit of the eigenspace approach, applying a 3-layer BP neural network to find the image pattern obtained from the same scene as the object image pattern. To verify the feasibility and robustness of our algorithm, satellite images of real scenes were used in our experiments. The experiments show that the framework produces feasible results.
Matching and parameter estimation of two point patterns related by an affine transform is a very important research topic in computer vision. The key step is to find invariant features. In contrast to traditional techniques employing geometric moments or cross-ratios, we consider the unique orthonormal coordinate system, the eigen system, whose polar radii are invariant to an affine transform. Using the SVD, it is then shown that after a whitening transformation, two point patterns related by an affine transform are mapped to eigen point patterns related by a rotation. Based on this, an algorithm is developed that exactly estimates the affine parameters and correctly determines point correspondence when the only a priori knowledge is that the two patterns correspond pattern-to-pattern. By fusing it with a robust estimation technique, which uses randomly sampled minimum redundant subsets, a K-RANSAC-like architecture, a joint criterion of maximum matching-point-pair support and minimum matching error, and a linear optimal refinement procedure, we develop a robust version of the algorithm. Experiments demonstrate that the proposed algorithm is very robust, exact, and efficient.
This paper presents techniques for constructing full-view panoramic mosaics from sequences of images, with the goal of removing the usual restriction to pure panning motion. The choice of reference block is critical to the robustness and performance of the block-matching method; it is selected automatically in the high-frequency image, which always contains plenty of visible features. To reduce accumulated registration errors, global registration using phase-correlation matching with rotation adjustment is applied to the whole sequence of images, yielding an optimal image mosaic that resolves translational and rotational motion. Local registration using the Levenberg-Marquardt iterative non-linear minimization algorithm is applied to compensate for small amounts of motion parallax introduced by camera translation and for other unmodeled distortions, minimizing the discrepancy remaining after global registration. Accumulated misregistration errors may still cause a visible gap between two images, so a smoothing filter derived from Marr's computer vision theory is introduced to remove the visible artifact. By combining global and local registration with artifact smoothing, the quality of the image mosaics is significantly improved, enabling the creation of full-view panoramic mosaics with hand-held cameras.
Image matching is the crucial technique of stereo analysis and one of the most difficult problems in digital photogrammetry and computer vision. In urban areas, large amounts of poorly textured area, discontinuous features, and occlusion or partial occlusion between objects increase the difficulty of the problem. To improve the quality of image matching in urban areas, a relaxation approach to segment-based stereo matching is proposed. Line segments are extracted from the image by a heuristic tracing technique; then the differences in orientation and gradient of the line segments and the amount of overlap of segments in the stereo images are calculated and used as the initial probability estimate for each segment match between the stereo pair. With the help of the disparity-continuity constraint and the topology-consistency constraint, the relaxation process corrects the initial probabilities iteratively until they converge to a steady state, and the globally optimal match is obtained. The paper first gives a brief scheme of probability relaxation for segment-based stereo matching, then describes the implementation of relaxation stereo matching in detail, and finally presents the experimental results and conclusions.
Autonomous real-time fingerprint verification, i.e., judging whether two fingerprints come from the same finger, is an important and difficult problem in an automated fingerprint identification system (AFIS). In addition to nonlinear deformation, two fingerprints from the same finger may also differ due to translation or rotation; all these factors increase the dissimilarity and lead to misjudgment, so the correct verification rate depends strongly on the degree of deformation. In this paper, we present a new, fast, and simple algorithm for fingerprint matching, derived from Chang et al.'s method, to solve the problem of optimal matching between two fingerprints under nonlinear deformation. The proposed algorithm uses not only the feature points of the fingerprints but also multiple pieces of ridge information to reduce the computational complexity of fingerprint verification. Experiments with a number of fingerprint images show that the algorithm is more efficient than existing methods owing to the reduced number of search operations.
Scene matching between a side-looking real aperture radar (SLAR) image and a synthetic aperture radar (SAR) image is influenced by terrain height variation, because radar images in the slant-range direction. The matching algorithm investigated in this paper between a real aperture radar image and a SAR image is based on normalized cross-correlation. Through analysis of the geometric model of side-looking real aperture radar imaging, we propose a vertical-projection method to correct the geometric distortion of the side-looking real aperture radar image, and we discuss the relation between matching performance and the height variation in the reference region, which is useful for evaluating the reliability of radar image matching. Simulation experiments with real aperture radar imaging at different altitudes show that matching precision and robustness are improved distinctly after the vertical-projection correction of the real aperture radar image's geometric distortion.
By exploiting the periodic variation of discrete wavelet transform coefficients with translation, an adaptive scene-matching method based on wavelet multi-scale representation is proposed. The method eliminates mismatches caused by the translation sensitivity of the discrete wavelet transform. In the matching criterion, attention is given mainly to the high-frequency components, and the matching weight coefficients are adjusted adaptively according to the energy distribution of the image's high-frequency components at different scales and in different directions, so that matching focuses on different structures and orientations. Experiments show that, other things being equal, the proposed method outperforms both classical gray-level correlation matching and edge-magnitude matching in matching correctness rate and in robustness to local gray-level reversal and imaging-condition variation.
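To make the idea of directional high-frequency energy concrete, here is a hypothetical sketch (not the authors' algorithm) of a one-level Haar analysis and the resulting direction weights in Python/NumPy:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar analysis -> (LL, LH, HL, HH) subbands.
    LH holds horizontal detail (vertical edges), HL vertical detail."""
    a = (img[0::2] + img[1::2]) / 2          # row averages
    d = (img[0::2] - img[1::2]) / 2          # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2
    LH = (a[:, 0::2] - a[:, 1::2]) / 2
    HL = (d[:, 0::2] + d[:, 1::2]) / 2
    HH = (d[:, 0::2] - d[:, 1::2]) / 2
    return LL, LH, HL, HH

def detail_weights(img):
    """Energy share of each high-frequency subband; such shares could
    serve as adaptive matching weights over directions (our assumption)."""
    _, LH, HL, HH = haar_dwt2(img.astype(float))
    e = np.array([(LH ** 2).sum(), (HL ** 2).sum(), (HH ** 2).sum()])
    return e / e.sum()
```

For an image dominated by vertical stripes, essentially all the detail energy lands in the horizontal-detail subband, so its weight approaches one.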
To solve the problem of matching images obtained from different sensors, we present a new method based on intensity-based correlation. After analyzing the true and false match positions on the correlation surface, we developed a method that searches for the true match position using features of the correlation-surface peak: its relative height, its width, and its degree of distinctness. Experiments show that this method is effective given a suitable sensed-image size and resolution.
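The three peak features named above might be computed along the following lines (an illustrative Python/NumPy sketch; the exact definitions used in the paper may differ):

```python
import numpy as np

def peak_features(surface, exclude=2):
    """Location, relative height, half-maximum width, and a simple
    distinctness score for the main peak of a correlation surface."""
    r, c = np.unravel_index(int(surface.argmax()), surface.shape)
    peak = surface[r, c]
    # second-highest value outside a small neighbourhood of the peak
    masked = surface.astype(float)
    masked[max(0, r - exclude):r + exclude + 1,
           max(0, c - exclude):c + exclude + 1] = -np.inf
    second = masked.max()
    relative_height = peak / second if second > 0 else np.inf
    # width: samples on the peak's row at or above half the peak value
    width = int((surface[r, :] >= peak / 2).sum())
    # distinctness: how far the peak stands above the surface statistics
    distinctness = (peak - surface.mean()) / surface.std()
    return (r, c), relative_height, width, distinctness
```

A true match typically shows a large relative height, a narrow width, and a high distinctness score, which is what lets these features separate real from false match positions.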
In this paper, the Fourier transform of a one-dimensional continuous signal is used to analyze the frequency spectrum. The convolution of two signals can be expressed in terms of the Fourier series of the two signals. After some manipulation, and ignoring secondary factors, the resulting formula shows that when two signals are correlated, higher-frequency content in the images produces a narrower peak on the correlation surface. Comparing the original image with the correlation surface, we found that the narrow peak on the correlation surface indicates the true matching position. This suggests that the stable, invariant features are usually contained in the high-frequency content of the different images. To solve the matching problem for multi-spectral images, two methods are proposed: one preprocesses the images (enhancing their high frequencies to eliminate unstable factors), and the other searches for the narrow peak on the correlation surface. Both methods are equally effective at locating the true matching position.
In further experiments, we found that the same principle governs not only matching between images of different spectral bands but also matching between images of the same band acquired at different times and under different conditions. The frequency-analysis method can therefore be extended to the problem of matching heterogeneous images.
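The two ideas in the preceding abstract, a high-pass pre-filter and a correlation whose narrow peak marks the match position, can be sketched in Python/NumPy as follows (illustrative only, with circular boundary handling):

```python
import numpy as np

def highpass(img):
    """Laplacian high-pass: emphasises the high-frequency detail that
    carries the stable matching information."""
    k = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], float)
    out = np.zeros(img.shape, float)
    h, w = img.shape
    for dr in range(3):
        for dc in range(3):
            out[1:-1, 1:-1] += k[dr, dc] * img[dr:dr + h - 2, dc:dc + w - 2]
    return out

def fft_correlate(a, b):
    """Circular cross-correlation of two equal-sized images via the FFT;
    the argmax of the result is the candidate match offset."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))
```

By the convolution theorem, multiplying one spectrum by the conjugate of the other and inverse-transforming yields the cross-correlation, so the shift between two images appears as the location of the correlation peak.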
Binocular machine vision has been studied for many years, but the matching procedure remains the most difficult problem and the main obstacle in such systems. This paper presents a new technique for obtaining matching points between the left and right images in a binocular active vision system without searching the whole image, or even the whole feature curve, so the computational cost is reduced considerably and mismatching errors are reduced as well. In addition to the epipolar constraint, the technique adds two strong constraints to the system: adherent marks and a grid (rows and columns). Using a new grid-coding method, matching points are easy to find.
The imaging characteristics and features of infrared and optical images are first investigated, and a fuzzy feature extraction method for images is presented. The relationship between the two images is then described, and, based on this analysis, a description-based fuzzy feature matching algorithm is proposed. Experimental results on infrared and optical CCD images are presented.
Algorithms for obtaining sub-pixel accuracy in image matching are discussed, and the characteristics of the resampling and surface-fitting methods are analyzed. The following improvement is made to reduce the computational burden: first, only the model needs to be resampled n times; next, (2n-1) sub-models are generated; then the normalized correlations between each sub-model and the image are calculated; finally, the sub-model with the maximum correlation is chosen, and its shifts give the required sub-pixel displacement. A new algorithm combining the resampling and surface-fitting methods is then proposed, and its effectiveness is validated by experiment.
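The surface-fitting half of such a combination is often the standard three-point parabolic peak interpolation, sketched below in Python/NumPy (a generic illustration, not the paper's exact algorithm):

```python
import numpy as np

def subpixel_peak(surface):
    """Refine the integer peak of a correlation surface with the
    standard three-point parabolic fit along each axis."""
    r, c = np.unravel_index(int(surface.argmax()), surface.shape)

    def offset(m1, m0, p1):
        # vertex of the parabola through (-1, m1), (0, m0), (1, p1)
        denom = m1 - 2 * m0 + p1
        return 0.0 if denom == 0 else 0.5 * (m1 - p1) / denom

    dr = offset(surface[r - 1, c], surface[r, c], surface[r + 1, c]) \
        if 0 < r < surface.shape[0] - 1 else 0.0
    dc = offset(surface[r, c - 1], surface[r, c], surface[r, c + 1]) \
        if 0 < c < surface.shape[1] - 1 else 0.0
    return r + dr, c + dc
```

For a surface that is locally quadratic around its peak, this fit recovers the sub-pixel displacement exactly, which is why it pairs well with resampling-based refinement.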
In this paper, a fuzzy matching algorithm for recognizing primitives in hand-drawn graphical symbols is presented. By primitives we mean the frequently used sub-graphic units in a given set of graphic symbols. Recognition is performed through local and global relation calculation, with fuzzy rules adopted to describe the relations between basic geometric lines. As a middle layer of a hand-drawn graphic symbol recognition system, primitive recognition can greatly reduce the search space of graphic symbol matching and improve the performance of the whole system.
The fractional Fourier transform is a powerful tool for time-variant signal analysis. For space-variant degradations and non-stationary processes, filtering in fractional Fourier domains reduces the error compared with ordinary Fourier-domain filtering. In this paper, the concept of filtering in fractional Fourier domains is applied to the problem of estimating degraded images. An efficient digital implementation using discrete Hermite eigenvectors provides results that closely match the continuous outputs. Expressions are given for the 2-D optimal filter function in fractional domains characterized by the two rotation-angle parameters of the 2-D fractional Fourier transform. In the experiments, the proposed method is used to restore images subject to several degradations, and the results show that it is valid.
Image reconstruction in electrical impedance tomography (EIT) is a highly ill-posed, non-linear inverse problem. A new regularization method based on spatial filtering theory is proposed in this paper to reconstruct the impedance distribution in EIT. The new regularized reconstruction does not depend on an estimate of the impedance distribution, so it has lower implementation complexity than the maximum a posteriori (MAP) regularization method. The regularization level in the new method varies spatially so as to suit the correlation structure of the object's impedance distribution. Computer simulations indicate that the regularization method based on spatial filtering theory performs better than Tikhonov regularization in solving the ill-posed problem of dynamic EIT.
The process of selecting a small number of representative colors from an image of higher color resolution is called color image quantization. Its ultimate goal is to minimize visible distortion, while its application as a frame-buffer technique makes algorithmic efficiency crucial. In this paper, a quantization strategy significantly faster than previous methods (median cut, variance-based, or octree-based algorithms, etc.) is proposed. The new perceptual algorithm, integrated with gamma correction, produces results approximately as accurate as previous methods. Overall, the proposed method offers a preferable tradeoff between quantizer complexity and visible distortion of the quantized image.
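For reference, the median-cut baseline mentioned above can be sketched in a few lines of Python/NumPy (a simplified illustration, not the paper's perceptual algorithm):

```python
import numpy as np

def median_cut(pixels, n_colors):
    """Classic median-cut: repeatedly split the box with the widest
    channel range at its median, then average each box into a palette
    colour.  `pixels` is an (N, 3) array of RGB values."""
    boxes = [pixels.astype(float)]
    while len(boxes) < n_colors:
        i = max(range(len(boxes)),
                key=lambda j: np.ptp(boxes[j], axis=0).max())
        box = boxes.pop(i)
        ch = int(np.ptp(box, axis=0).argmax())     # widest channel
        order = box[:, ch].argsort()
        half = len(box) // 2
        boxes += [box[order[:half]], box[order[half:]]]
    return np.array([b.mean(axis=0) for b in boxes])
```

Splitting always along the channel with the widest range keeps the color boxes compact, so the mean of each box is a reasonable representative color.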
Based on the nonlinear behavior of the moving grating in a BSO crystal in a four-wave mixing architecture at large fringe modulation, namely that the enhancement of the reflectivity increases with the incident beam ratio and that a grating with a large pump beam ratio obtains higher reflectivity enhancement, we achieve edge enhancement and edge-enhanced optical correlation of a binary optical image by applying the moving grating in a Fresnel-transform four-wave mixing system. The relative intensity of the object's edges is enhanced nearly two-fold. The full width at half maximum of the auto-correlation peak and the fluctuation noise are obviously suppressed, which indicates a significant improvement in the discrimination capability of the correlator.
This paper presents a novel fast image matching method based on image projection features and an ARTMAP neural network. Compared with the correlation algorithm, this method improves the correct matching probability, the matching time, and the robustness to noise. Adopting it in a scene matching guidance system (SMGS) can improve the reliability and real-time performance of the SMGS.
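As an illustration of the projection-feature idea (the ARTMAP classifier stage is beyond a short sketch), a patch's signature could be formed as follows in Python/NumPy; the unit-length normalization is our assumption, not necessarily the paper's:

```python
import numpy as np

def projection_features(patch):
    """Row and column projections of a patch, concatenated and scaled
    to unit length, as a compact matching signature."""
    rows = patch.sum(axis=1).astype(float)
    cols = patch.sum(axis=0).astype(float)
    v = np.concatenate([rows, cols])
    n = np.linalg.norm(v)
    return v / n if n else v
```

Projections compress an H×W patch to an (H+W)-dimensional vector, which is what makes matching on them much faster than full 2-D correlation.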
This paper proposes an integrated solution blueprint for computer-aided automatic form processing. Using a linear whole-block moving method within each vertical segment, a new fast algorithm is put forward to detect and rectify skewed images. To distinguish different form types, which is the foundation for locating form fields, filtering form lines, and so forth, several representative form features are discussed. Based on the features of bank bill images, a mutual rectification mechanism built on the recognition results for financial Chinese characters and Arabic numerals is put forward to raise the recognition rate. Finally, experimental results and conclusions are presented.
General contour tracing algorithms handle only binary images and fail on more complicated ones; they are correct only in ordinary cases. The failures usually stem from a lack of theory and from the fact that pixels alone do not carry enough information for contour tracing. The crack theory introduced by A. Rosenfeld is very useful for contour tracing, but cracks are not intuitive. Therefore, a new pixel-based contour tracing algorithm for multi-value segmented images is presented that exploits the properties of cracks. After analyzing all cases that may occur during contour tracing, this paper summarizes a succinct theory. Since contours may overlap and intersect, the same pixel may occur several times during tracing, but the corresponding crack is unique; once the crack is found, the corresponding pixel is found. That is the key idea of this paper. The new algorithm is simple and accurate compared with traditional algorithms. In addition, this paper analyzes all relations that may occur between regions and presents an effective algorithm for analyzing the inclusion relations among regions to build a tree structure. Earlier algorithms fail to analyze the tree structure of a multi-value segmented image, but the proposed algorithm is effective in all cases. Experiments show that these algorithms are correct and highly effective.
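The paper's crack-based algorithm is more involved, but the classic pixel-based approach it improves on, Moore-neighbour contour tracing of a single region, can be sketched like this (an illustrative Python/NumPy version; it assumes the region does not touch the mask border):

```python
import numpy as np

# clockwise Moore neighbourhood, starting at "west"
OFFSETS = [(0, -1), (-1, -1), (-1, 0), (-1, 1),
           (0, 1), (1, 1), (1, 0), (1, -1)]

def trace_contour(mask):
    """Moore-neighbour tracing of the outer contour of one foreground
    region in a binary mask, returned as an ordered pixel list."""
    rows, cols = np.nonzero(mask)
    start = (int(rows[0]), int(cols[0]))       # topmost-leftmost pixel
    cur, backtrack = start, (start[0], start[1] - 1)
    contour = [start]
    while True:
        k = OFFSETS.index((backtrack[0] - cur[0], backtrack[1] - cur[1]))
        for i in range(1, 9):                  # scan clockwise from backtrack
            dy, dx = OFFSETS[(k + i) % 8]
            cand = (cur[0] + dy, cur[1] + dx)
            if mask[cand]:
                py, px = OFFSETS[(k + i - 1) % 8]
                backtrack = (cur[0] + py, cur[1] + px)
                cur = cand
                break
        if cur == start:                       # simple stopping criterion
            return contour
        contour.append(cur)
```

This sketch uses the simple "back at the start pixel" stopping criterion, which is exactly the kind of pixel-level ambiguity (the same pixel can legitimately recur on overlapping contours) that the crack-based formulation resolves.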
This paper proposes a new image enhancement method based on fuzzy set theory that yields a good visual effect. Compared with traditional methods based on fuzzy set theory, it describes detail better and avoids the loss of information caused by the traditional approach.
In this paper, a new scale-invariant pattern recognition system using the wavelet transform is set up, based on volume holographic storage in a photorefractive crystal. The wavelet filter increases the discrimination capability of the correlator. However, because the wavelet-filtered image is edge-enhanced, the phase-only logarithmic radial harmonic (LRH) filter is not suitable for such an image when scale invariance is required. The LRH filter is therefore modified to achieve scale-invariant pattern recognition. Simulation results validate the theory.
We propose a method for 3-D shape measurement of a moving human body using stereo vision. To solve the problem of matching the two images, we exploit the independence of the red, green, and blue channels in color space and propose a color-coded technique for finding the corresponding points of the two images during measurement. Using two optical imaging systems, we obtain color images of a moving human body from different directions; the corresponding points of the two images captured by the CCD cameras are found by computer image processing, and the 3-D information of the moving body is obtained from the parallax of the corresponding points in the two images. In this way, not only can the 3-D shape of the human body be measured, but its motion can also be tracked.