Traditional frame accumulation can effectively reduce random noise and improve the signal-to-noise ratio of an image. Frame accumulation with a superimposed sawtooth-shaped-function optical signal can further reduce quantization noise and increase grayscale resolution. However, it is unsuitable for continuous operating environments, because the constant-amplitude light must be turned off to capture the pure and shadowed images generated by the superimposed shaped light. To solve this problem, we propose an improved method, dual optical signals with different periods (DOSDP), to improve the grayscale resolution of an image. The method does not need to capture signals twice. In situations where it is not appropriate to superimpose a shaped-function optical signal directly onto an object, a piece of glass is installed in the light path between the camera and the object so that the superimposed light enters the camera through glass reflection. Furthermore, our method allows the superimposed signals to be nonlinear. Experimental results show that the DOSDP method can effectively improve image quality.
A medical endoscope system combined with narrow-band imaging (NBI) has been shown to be a superior diagnostic tool for early cancer detection, since NBI can reveal the morphologic changes of microvessels in superficial cancers. To improve the conspicuousness of microvessel texture in endoscopic images, we propose an enhanced NBI method. We use an edge operator to extract the edge information of the narrow-band blue and green images and assign a weight to the extracted edges. The weighted edges are then fused with the narrow-band blue and green images, and the displayed endoscopic images are reconstructed from the enhanced narrow-band images. In addition, we evaluate the performance of the enhanced narrow-band images with different edge operators; experimental results indicate that the Sobel and Canny operators perform best. Compared with the traditional NBI method of Olympus, our proposed method yields more conspicuous microvessel texture.
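The enhancement pipeline described above (extract edges, weight them, fuse them back into the band) can be sketched as follows on a single grayscale band. This is only an illustrative sketch: the edge weight of 0.3 and the 8-bit clipping are assumptions, not values from the paper.

```python
def sobel_edges(img):
    # 3x3 Sobel gradient magnitude; img is a 2-D list of grayscale values.
    # Border pixels are left at zero for simplicity.
    h, w = len(img), len(img[0])
    edges = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            edges[y][x] = (gx * gx + gy * gy) ** 0.5
    return edges

def fuse(img, edges, weight=0.3):
    # add the weighted edge map back into the band, clipped to 8-bit range
    return [[min(255, max(0, img[y][x] + weight * edges[y][x]))
             for x in range(len(img[0]))] for y in range(len(img))]
```

Applied to the narrow-band blue and green channels separately, this sharpens the vessel boundaries while leaving flat mucosa regions unchanged.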
An approach for impulse noise detection and removal in color images based on Moran's I (MI) statistic is proposed. The method consists of detection and removal components and is called the Moran's I vector median filter (MIVMF). The detection module determines whether a pixel is noisy or noise-free; if it is noisy, the vector median filter (VMF) is used to remove the noise. This detection capability realizes the so-called "switching" mechanism, which selects only noisy pixels for denoising. Hence, the proposed filter expedites processing by reducing the number of vector calculations in the VMF. Detection is achieved with the MI index and the indication of one-dimensional Laplacian kernels. We compare the proposed MIVMF with other well-developed vector-type median filters in the literature. Our experimental results show that the proposed filter is not only faster in the filtering process but also effective in removing random impulse noise at different noise levels in color images. The MIVMF demonstrates promising denoising results under the criteria of peak signal-to-noise ratio and the structural similarity index. Visual inspection of the processed images shows that the MIVMF avoids image blurring, preserves edge details, and achieves superior noise reduction.
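The switching structure can be sketched as below. The Moran's I detector itself is abstracted into a caller-supplied `is_noise` predicate, so this shows only the vector-median and switching halves of the MIVMF, not the paper's detection rule.

```python
def vector_median(window):
    # the vector median is the pixel minimizing the sum of Euclidean
    # distances to every other pixel in the window
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    return min(window, key=lambda p: sum(dist(p, q) for q in window))

def switching_filter(img, is_noise):
    # "switching" mechanism: only pixels flagged by the detector are
    # replaced by the vector median of their 3x3 window; noise-free
    # pixels are copied through untouched, saving vector calculations
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if is_noise(img, y, x):
                window = [img[y + dy][x + dx]
                          for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
                out[y][x] = vector_median(window)
    return out
```

Because `vector_median` is quadratic in window size, restricting it to flagged pixels is precisely where the speedup claimed above comes from.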
Limited resolution, blurring, warping, and additive noise associated with image acquisition and storage often degrade barcode images, producing low-resolution (LR) barcode images in which the barcodes are difficult to recognize. The goal of this paper is to show the potential of the super-resolution (SR) technique for overcoming these challenges, and a variational Bayesian SR method is proposed. Unlike previous work, the high-resolution barcode image is estimated here through its posterior probability distribution. A barcode image is composed of homogeneous regions separated by sharp edges and is sometimes anisotropic; considering these characteristics, a universal prior probability distribution is proposed for the barcode image. Mathematically, the efficiency of this prior distribution is demonstrated: it can preserve sharp edges and suppress artifacts in the reconstructed barcode images. Moreover, within the variational Bayesian framework, the motion parameters and hyperparameters can be estimated accurately and efficiently, ensuring the success of the SR technique. To overcome the difficulty caused by nonlinearity, the Taylor expansion method is introduced to solve the proposed SR problem. Simulated and real data experiments show the encouraging performance of the proposed SR method: it clearly increases reconstruction quality and is considerably robust against blur and noise. We believe that applying the variational SR technique to barcode auto-identification opens a further perspective on coping with this technological challenge.
We propose an innovative and efficient approach to improve the K-view-template (K-view-T) and K-view-datagram (K-view-D) algorithms for image texture classification. The proposed approach, called the weighted K-view-voting algorithm (K-view-V), uses a novel voting method for texture classification together with an acceleration method based on the efficient summed square image (SSI) scheme and the fast Fourier transform (FFT) to enable overall faster processing. Decision making, which assigns a pixel to a texture class, is carried out by weighted voting among the "promising" members in the neighborhood of the pixel being classified; this neighborhood consists of all the views whose territory contains the classified pixel. Experimental results on benchmark images, randomly taken from the Brodatz gallery as well as natural and medical images, show that the new classification algorithm gives higher classification accuracy than existing K-view algorithms. In particular, it improves the classification of pixels near texture boundaries. In addition, the proposed acceleration method improves the processing speed of K-view-V, which requires much less computation time than other K-view algorithms. Compared with earlier K-view algorithms and the gray level co-occurrence matrix (GLCM), the proposed algorithm is more robust, faster, and more accurate.
Although various image denoising methods such as PDE-based algorithms have made remarkable progress in past years, the trade-off between noise reduction and edge preservation remains an interesting and difficult problem in image processing and analysis. A new image denoising algorithm, using a modified PDE model based on pixel similarity, is proposed to deal with this problem. The pixel similarity, a probability measure with values between 0 and 1, measures the similarity between two pixels; from it, the neighboring consistency of the center pixel can be calculated. Informally, if a pixel is not consistent enough with its surrounding pixels, it can be considered noise, whereas an extremely strong inconsistency suggests an edge. According to the neighboring consistency of the pixel, a diffusion control factor is determined by a simple thresholding rule. This factor is combined into the primary partial differential equation as an adjusting factor that controls the speed of diffusion for different types of pixels. An evaluation of the proposed algorithm on simulated brain MRI images was carried out. Initial experimental results show that the new algorithm smooths the MRI images better while preserving the edges better and achieves a higher peak signal-to-noise ratio (PSNR) compared with several existing denoising algorithms.
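A minimal sketch of this control scheme is given below. The abstract does not specify the similarity measure, the thresholds, or the diffusion step, so the Gaussian similarity, the threshold values, and the control factors here are all assumptions chosen only to illustrate the structure: moderate inconsistency diffuses fast (noise), extreme inconsistency diffuses slowly (edge).

```python
import math

def similarity(a, b, sigma=20.0):
    # assumed Gaussian similarity in [0, 1]; not the paper's exact measure
    return math.exp(-((a - b) ** 2) / (2 * sigma * sigma))

def consistency(img, y, x):
    # mean similarity of the center pixel to its 4-neighbours
    nbrs = [img[y-1][x], img[y+1][x], img[y][x-1], img[y][x+1]]
    return sum(similarity(img[y][x], n) for n in nbrs) / len(nbrs)

def control_factor(c, t_noise=0.3, t_edge=0.05):
    # assumed thresholding rule: extremely low consistency -> edge
    # (slow diffusion), moderately low -> noise (fast diffusion)
    if c < t_edge:
        return 0.2
    if c < t_noise:
        return 2.0
    return 1.0

def diffuse_step(img, lam=0.1):
    # one explicit diffusion step; the control factor scales the speed
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y-1][x] + img[y+1][x] + img[y][x-1] + img[y][x+1]
                   - 4 * img[y][x])
            out[y][x] = img[y][x] + lam * control_factor(consistency(img, y, x)) * lap
    return out
```

Iterating `diffuse_step` pulls moderately inconsistent pixels toward their neighbors quickly while leaving flat regions and strong edges largely intact.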
We applied a new texture segmentation algorithm to improve the segmentation of boundary areas in liver needle biopsy images taken from microscopes, for automatic assessment of liver fibrosis severity. In our preliminary experiments, it was difficult to obtain satisfactory segmentation of the boundary areas between textures with some of the existing texture segmentation algorithms. The proposed algorithm consists of three steps. The first step applies the K-view-datagram segmentation method to the image. The second step finds a boundary set, defined as the set of all pixels for which more than half of the neighboring pixels are classified by the K-view-datagram method into clusters other than their own. The third step applies a modified K-view-template method with a small scanning window to the boundary set to refine the segmentation. The algorithm was applied to real liver needle biopsy images provided by hospitals in Wuhan, China. Initial experimental results show that this new segmentation algorithm gives high segmentation accuracy and classifies the boundary areas better than the existing algorithms. It is a useful tool for automatic assessment of liver fibrosis severity.
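The boundary-set definition in the second step translates directly into code. In this sketch, the 8-neighbourhood and the exclusion of image-border pixels are assumptions; `labels` stands for the cluster map produced by the K-view-datagram step.

```python
def boundary_set(labels):
    # collect pixels where more than half of the 8 neighbours carry a
    # cluster label different from the pixel's own label
    h, w = len(labels), len(labels[0])
    boundary = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            diff = sum(1 for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                       if (dy, dx) != (0, 0)
                       and labels[y + dy][x + dx] != labels[y][x])
            if diff > 4:  # more than half of 8 neighbours disagree
                boundary.append((y, x))
    return boundary
```

The third step would then re-classify only the pixels in this set with the small-window K-view-template method.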
Medical images such as MRI images normally have smooth edges and round corners, but existing general edge-preserving smoothing algorithms often produce coarse edges and sharp corners. A new image smoothing algorithm, based on a filter that takes the weighted average of the 21 average pixel values of 21 neighborhood subregions in a 3x3 square around the center pixel, is proposed to deal with this problem. Subregions that place the center pixel on a sharp corner are excluded from selection, so that the preserved edges remain smooth and round. During the smoothing process, each subregion is assigned a weight according to its homogeneousness, which is evaluated by the variance of its pixel values, and the weighted average of the subregion averages is assigned to the center pixel being smoothed. A more homogeneous neighborhood thus has more influence on the center pixel, so that edges are well preserved; however, contributions from all neighborhoods are also taken into account, especially when their homogeneousness is roughly equal, so that the resulting areas are smoother. An evaluation of the algorithm on simulated MRI images was carried out. Experimental results show that the new algorithm smooths the MRI images better while preserving the edges better than the existing smoothing algorithms.
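The weighted-average-of-subregion-averages idea can be sketched as follows. The paper's 21 subregions are not enumerated in the abstract, so as a simplification this sketch uses only four overlapping 2x2 quadrants around the center pixel, and it takes "weight according to homogeneousness" concretely as the inverse of the subregion's variance; both choices are assumptions.

```python
def edge_preserving_smooth(img, y, x, eps=1e-6):
    # simplified sketch: four overlapping 2x2 quadrants around (y, x);
    # the paper instead uses 21 subregions of the 3x3 neighbourhood
    quads = [
        [(y-1, x-1), (y-1, x), (y, x-1), (y, x)],
        [(y-1, x), (y-1, x+1), (y, x), (y, x+1)],
        [(y, x-1), (y, x), (y+1, x-1), (y+1, x)],
        [(y, x), (y, x+1), (y+1, x), (y+1, x+1)],
    ]
    means, weights = [], []
    for q in quads:
        vals = [img[r][c] for r, c in q]
        m = sum(vals) / len(vals)
        var = sum((v - m) ** 2 for v in vals) / len(vals)
        means.append(m)
        weights.append(1.0 / (var + eps))  # homogeneous regions weigh more
    total = sum(weights)
    return sum(w * m for w, m in zip(weights, means)) / total
```

Near an edge, the quadrants lying entirely on one side have near-zero variance and dominate the average, so the edge is not blurred; in uniform areas all quadrants contribute almost equally, which is the smoothing behavior described above.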
The water-level model is an effective method in density-based classification. We use biased sampling, local similarity, and popularity as preprocessing, and employ a merging operation in the water-level model for classification. Biased sampling is used to obtain information about the global structure, while similarity and local density are mainly used to understand the local structure. In biased sampling, images are divided into many l x l patches and a sample pixel is selected from each patch. Similarity at a point p, denoted by sim(p), measures the change of gray level between point p and its neighborhood N(p). Besides using biased sampling to combine spectral and spatial information, we use similarity and local popularity in selecting sample points: a sample point is chosen based on the minimum value of sim(p) + [1 - P(p)] after normalization. The selected pixel is a better representative, especially near the border of an object. To make the model more effective, one has to deal with small spikes and bumps. To get rid of small spikes, we establish a threshold |[f(P1) - f(P2)] * (P1 - P2)| > c * l * l, where c is a constant, P1 is the local maximum point being tested, and P2 is the nearest local minimum to P1. The condition is related only to the size l x l of the patches. The merging operation included in the model makes the threshold constant less sensitive in the process. DBSCAN is combined with the enhanced water-level model to reduce noise and to obtain connected components. Preliminary experiments have been conducted using the proposed methods, and the results are promising.
The water-level model is an effective method in density-based classification. To improve the result, we use biased sampling, local similarity, and popularity as preprocessing, and then apply the water-level model for classification. Biased sampling is used to obtain information about the global structure, while similarity and local density are mainly used to understand the local structure. In biased sampling, images are divided into many l x l patches and a sample pixel is selected from each patch. Similarity at a point p, denoted by sim(p), measures the change of gray level between point p and its neighborhood N(p). Besides using biased sampling to combine spectral and spatial information, we use similarity and local popularity in selecting sample points: a sample point is chosen based on the minimum value of sim(p) + [1 - P(p)] after normalization. The selected pixel is a better representative, especially near the border of an object. Kernel estimators are employed to obtain a smooth density approximation. The water-level model is relatively easy and effective when the density function is smoothed; to make it effective in other cases, one has to deal with small spikes and bumps. To get rid of small spikes, we establish a threshold |[f(P1) - f(P2)] * (P1 - P2)| > c * l * l, where c is a constant, P1 is the local maximum point being tested, and P2 is the nearest local minimum to P1. The condition is related only to the size l x l of the patches. After using the average filter, we choose l to be the square root of the fifth peak if it is between 5 and 20; otherwise we set l = 10. Preliminary experiments have been conducted using the proposed methods with different values of the constant c in the threshold condition, and experimental results are provided.
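The spike-removal threshold can be applied directly to a sampled density function. In this sketch, f is a 1-D list of density values; choosing the nearest local minimum by index distance is an assumption, since the abstract does not say how "nearest" is measured.

```python
def prune_spikes(f, c, l):
    # keep a local maximum P1 only if |[f(P1) - f(P2)] * (P1 - P2)| > c*l*l,
    # where P2 is the nearest local minimum to P1
    maxima = [i for i in range(1, len(f) - 1) if f[i-1] < f[i] >= f[i+1]]
    minima = [i for i in range(1, len(f) - 1) if f[i-1] > f[i] <= f[i+1]]
    kept = []
    for p1 in maxima:
        if not minima:
            kept.append(p1)
            continue
        p2 = min(minima, key=lambda m: abs(m - p1))
        if abs((f[p1] - f[p2]) * (p1 - p2)) > c * l * l:
            kept.append(p1)
    return kept
```

A tall, wide peak easily clears the c*l*l bound, while a shallow bump next to its nearest minimum does not, which is exactly the small-spike behavior the threshold is designed to suppress.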
Human interpreters are very sensitive to spatial information in supervised classification. The well-known ISODATA algorithm for unsupervised classification requires many parameters to be set by a human, and other unsupervised algorithms focus on spectral information, losing spatial information in the process. Biased sampling is a good approach to obtaining information about the global structure; for local structure, many techniques have been used, for example similarity and local density, which are discussed in many papers. In biased sampling, images are divided into many l x l patches and a sample pixel is selected from each patch. Similarity at a point p, denoted by sim(p), measures the change of gray level between point p and its neighborhood N(p). In this article we introduce a method that uses biased sampling to combine spectral and spatial information, and we use similarity and local popularity in selecting sample points to get better results. To use similarity (sim(p) ≤ δ), one must determine δ; one way is to adapt it so that a sample point can be selected from each patch. Here, after normalization, we instead choose the sample point with the minimum value of α * sim(p) + β * [1 - P(p)] for some positive numbers α and β. No precondition on δ is needed, and the selected pixel is a better representative, especially near the border of an object. Kernel estimators are employed to obtain a smooth density approximation before the final classification. Experiments have been conducted using the proposed methods, and the results are satisfactory.
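The sample-point selection rule can be sketched as below. The abstract leaves sim(p) and the popularity P(p) abstract, so the concrete definitions here (mean absolute gray-level change to the 4-neighbourhood, and the fraction of 4-neighbours within a tolerance) are assumptions, as are the defaults for α, β, and the normalization constant.

```python
def sim(img, y, x):
    # change of gray level between p and its 4-neighbourhood N(p)
    nbrs = [img[y-1][x], img[y+1][x], img[y][x-1], img[y][x+1]]
    return sum(abs(img[y][x] - n) for n in nbrs) / len(nbrs)

def popularity(img, y, x, tol=10):
    # assumed P(p): fraction of 4-neighbours within tol of p's gray level
    nbrs = [img[y-1][x], img[y+1][x], img[y][x-1], img[y][x+1]]
    return sum(1 for n in nbrs if abs(img[y][x] - n) <= tol) / len(nbrs)

def select_sample(img, y0, x0, l, alpha=1.0, beta=1.0, max_change=255.0):
    # pick the pixel in the l x l patch at (y0, x0) minimizing
    # alpha * sim(p) + beta * [1 - P(p)] after normalizing sim to [0, 1]
    best, best_score = None, float("inf")
    for y in range(y0, y0 + l):
        for x in range(x0, x0 + l):
            score = (alpha * sim(img, y, x) / max_change
                     + beta * (1 - popularity(img, y, x)))
            if score < best_score:
                best, best_score = (y, x), score
    return best
```

Because both terms penalize pixels that deviate from their surroundings, outliers inside a patch are never picked as representatives, which is the behavior claimed near object borders.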
A new approach for automatic battle tank recognition and segmentation has been developed. This paper presents the design and implementation of the new algorithm; its main ideas, approaches, limitations, and possible extensions for future work are also discussed. The approach consists of three phases. In the first phase (foreground and background separation), foreground targets are discriminated from the background based on feature data such as gray (or color) levels and statistical data such as the gray-level distribution and histogram. In the second phase (preliminary individual target recognition), each individual target is detected by a region growing algorithm, and each possible target is reconstructed. In the third phase, the targets are recognized by syntactic analysis, which extracts all basic components of a tank and determines the relative relationships among the components based on an analysis of the waveform of the boundary distance function from the centroid. The experiments show very satisfactory results.
Artificial neural networks (ANN) constitute a powerful class of nonlinear function approximators for model-free estimation. Neural network models are characterized by their topology, activation function, and learning rules. The wavelet neuron model is obtained by replacing the activation function in the traditional neuron model with wavelet bases; a wavelet is a localized function capable of detecting features in signals. A wavelet basis function is assigned to each neuron, and each synaptic weight is determined by learning. Wavelet neural networks are used in this study to process remotely sensed data and classify soil based on its moisture content. To evaluate the effectiveness of the wavelet neural networks, a soil moisture data set consisting of 750 vectors, each with three components (surface temperature, brightness temperature at L-band (TB-L), and brightness temperature at S-band (TB-S)), together with some remotely sensed images, is evaluated in the experiments. A comparison with backpropagation networks is conducted for the supervised training of remotely sensed data and the classification of soil moisture.
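The wavelet neuron idea (a weighted sum passed through a translated and dilated wavelet basis instead of a sigmoid) can be sketched as follows. The choice of the Mexican-hat (Ricker) wavelet is an assumption; the study does not name its basis in the abstract.

```python
import math

def mexican_hat(t):
    # Mexican-hat (Ricker) wavelet, one common localized basis function
    return (1 - t * t) * math.exp(-t * t / 2)

def wavelet_neuron(x, weights, translation=0.0, dilation=1.0):
    # a wavelet neuron: the synaptic weighted sum is passed through a
    # dilated and translated wavelet rather than a sigmoid activation
    s = sum(w * xi for w, xi in zip(weights, x))
    return mexican_hat((s - translation) / dilation)
```

In a full network, the translation and dilation of each neuron's wavelet would be trained along with the synaptic weights, letting each neuron respond to a localized feature of the input.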
A new algorithm is developed to implement automatic eye tracking without a prior reference model or prior knowledge of the size, orientation, shape, color, or other characteristics of the human eyes. The algorithm is based on an analysis of eye features such as eye contrast, eye blinking, and other properties, and it consists of two stages. In the initialization stage, the algorithm locates the approximate head position from two consecutive video frames, and the size of three equal-sized blocks is determined; these blocks are used to detect the left and right eyes. The two eyes are symmetric and blink simultaneously at all times. The algorithm extracts the similarity features of the two eyes and the dissimilarity between the eyes and the region between them, which is represented by the middle block; these measures are implemented through the analysis of correlation and the horizontal contrast property. The algorithm is able to detect the status of blinking eyes and of eyes closed for a period of time in a video frame sequence. The result is a dynamic automatic eye tracking system that can adapt to environmental changes and reinitialize if tracking is lost. Experiments with this method show satisfactory results in terms of accuracy and reasonable time complexity, and show that the method can be applied to eye tracking regardless of skin color, head orientation, head size, background changes, or other constraints. The experiments are conducted on a moving head in a 20 frames/second video sequence.
This paper presents a morphological polynomial approach to representation, shape decomposition, and object pattern recognition for machine vision. The morphological polynomial approach is a powerful method for image processing and pattern recognition, and algorithms for shape decomposition and pattern recognition are discussed. The polynomial approach can be implemented on a parallel processing machine and can be used to develop a standard algebra-based programming language for image processing and pattern recognition.