Limited resolution, blurring, warping, and additive noise introduced during image acquisition and storage often degrade barcode images, producing low-resolution (LR) barcode images that are difficult to recognize. The goal of this paper is to demonstrate the potential of the super-resolution (SR) technique for overcoming these challenges, and a variational Bayesian SR method is proposed in this work. In contrast to previous work, the high-resolution barcode image is estimated here through its posterior probability distribution. A barcode image consists of homogeneous regions separated by sharp edges and is sometimes anisotropic. A universal prior probability distribution is proposed for the barcode image based on these characteristics. The efficiency of this prior distribution is demonstrated mathematically: it preserves sharp edges and suppresses artifacts in the reconstructed barcode images. Moreover, within the variational Bayesian framework, the motion parameters and hyperparameters can be estimated accurately and efficiently, which is essential to the success of the SR technique. To overcome the difficulty caused by nonlinearity, a Taylor expansion is introduced to solve the proposed SR problem. Experiments on simulated and real data show the encouraging performance of the proposed SR method: it clearly improves reconstruction quality and is considerably robust against blur and noise. We believe that applying the variational SR technique to barcode auto-identification opens a promising direction for coping with these technological challenges.
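The Bayesian formulation described above can be summarized in a generic sketch (the notation is illustrative; the paper's exact observation model and priors may differ):

```latex
% Each LR observation y_k arises from the HR image x through warping
% C(s_k), blur H, downsampling D, and additive noise n_k:
y_k = D\,H\,C(s_k)\,x + n_k, \qquad k = 1, \dots, K
% The HR image x, motion parameters s, and hyperparameters \Omega are
% then inferred jointly from the posterior:
p(x, s, \Omega \mid y) \;\propto\; p(y \mid x, s, \Omega)\, p(x \mid \Omega)\, p(s)\, p(\Omega)
```

The variational approach approximates this intractable posterior with a factorized distribution, which is what allows the motion parameters and hyperparameters to be estimated alongside the image itself.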
Quantization in the imaging process rounds measured intensity values to integers. It discards the differences between intensities that share the same rounded value, and therefore lowers the grayscale resolution of images. Although methods have been proposed for reconstructing high-grayscale-resolution images from multiple subpixel-shifted low-spatial-resolution and low-grayscale-resolution images, the problems of nonsmooth transitions within regions and insufficient intensity levels remain. A grayscale super-resolution method based on fill light, together with a photographing apparatus, is proposed to address these problems. The photographing apparatus adds fill lights with slightly different intensities to the captured images without changing the brightness of the scene. Our reconstruction method is based on estimating a floating-point value from several rounded integers; a high-grayscale-resolution image is then reconstructed from multiple low-grayscale-resolution images captured under slightly different fill-light intensities. Simulated and real-world data were used to evaluate the method, and the experimental results show that it effectively improves grayscale resolution. In addition, the method is well suited to graphics processing unit (GPU) implementation.
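The core estimation step, recovering a floating-point intensity from several rounded observations taken under known fill-light offsets, can be sketched as follows (a minimal illustration; the function name, offsets, and plain averaging are assumptions, not the paper's exact reconstruction procedure):

```python
import numpy as np

def estimate_float_from_rounded(observations, offsets):
    """Estimate a sub-integer intensity from several observations, each
    rounded after a known fill-light offset was added (illustrative
    sketch, not the paper's exact reconstruction)."""
    obs = np.asarray(observations, dtype=float)
    off = np.asarray(offsets, dtype=float)
    # Each observation is round(x + offset); removing the known offset
    # and averaging cancels most of the quantization error.
    return float(np.mean(obs - off))

# True intensity 100.3 observed under four slightly different fill lights.
x_true = 100.3
offsets = [0.0, 0.25, 0.5, 0.75]
observations = [round(x_true + d) for d in offsets]
est = estimate_float_from_rounded(observations, offsets)
```

With offsets spaced 0.25 apart, the residual quantization error of the average drops well below the 0.5 error of a single rounded observation.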
We propose an innovative and efficient approach to improve the K-view-template (K-view-T) and K-view-datagram (K-view-D) algorithms for image texture classification. The proposed approach, called the weighted K-view-voting algorithm (K-view-V), uses a novel voting method for texture classification together with an acceleration method based on the efficient summed square image (SSI) scheme and the fast Fourier transform (FFT) for faster overall processing. The decision that assigns a pixel to a texture class is made by our weighted voting method among the "promising" members in the neighborhood of the pixel being classified; this neighborhood consists of all the views whose territory contains that pixel. Experimental results on benchmark images, randomly selected from the Brodatz Gallery and from natural and medical images, show that this new classification algorithm gives higher classification accuracy than existing K-view algorithms. In particular, it improves the classification of pixels near texture boundaries. In addition, the proposed acceleration method improves the processing speed of K-view-V, which requires much less computation time than other K-view algorithms. Compared with earlier K-view algorithms and the gray-level co-occurrence matrix (GLCM), the proposed algorithm is more robust, faster, and more accurate.
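The weighted voting decision can be sketched as follows (a simplified illustration; the weighting scheme shown is a placeholder assumption, not the paper's exact rule):

```python
import numpy as np

def weighted_vote(view_labels, view_weights, n_classes):
    """Assign the pixel the class with the largest total weighted vote
    among the views whose territory contains it (simplified sketch of
    the K-view-V decision rule)."""
    scores = np.zeros(n_classes)
    for label, weight in zip(view_labels, view_weights):
        scores[label] += weight
    return int(np.argmax(scores))

# Three views cover the pixel: one votes class 0, two vote class 1.
winner = weighted_vote([0, 1, 1], [0.9, 0.6, 0.5], n_classes=2)  # -> 1
```

Near texture boundaries, several views with conflicting labels overlap a pixel, which is exactly where weighted voting outperforms taking a single nearest view.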
Although various image denoising methods, such as PDE-based algorithms, have made remarkable progress in recent years, the trade-off between noise reduction and edge preservation remains an interesting and difficult problem in image processing and analysis. A new image denoising algorithm, using a modified PDE model based on pixel similarity, is proposed to address this problem. Pixel similarity measures how alike two pixels are, from which the neighboring consistency of a center pixel can be calculated. Informally, if a pixel is not sufficiently consistent with its surrounding pixels, it can be considered noise, whereas an extremely strong inconsistency suggests an edge. Pixel similarity is a probability-like measure whose value lies between 0 and 1. According to the neighboring consistency of the pixel, a diffusion-control factor is determined by a simple thresholding rule. This factor is incorporated into the primary partial differential equation as an adjusting term that controls the diffusion speed for different types of pixels. An evaluation of the proposed algorithm on simulated brain MRI images was carried out. Initial experimental results show that the new algorithm smooths MRI images while preserving edges better, and achieves a higher peak signal-to-noise ratio (PSNR), compared with several existing denoising algorithms.
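The thresholding rule that maps neighboring consistency to a diffusion-control factor can be sketched as follows (an illustrative assumption: the Gaussian similarity kernel and the threshold values are not the paper's exact choices):

```python
import numpy as np

def diffusion_factor(window, sigma=10.0, t_low=0.1, t_high=0.5):
    """Map the neighboring consistency of a 3x3 window's center pixel
    to a diffusion-control factor (illustrative sketch)."""
    center = window[1, 1]
    neighbours = np.delete(window.ravel(), 4)  # the 8 surrounding pixels
    # Pixel similarity in [0, 1]: a Gaussian of the intensity difference.
    sim = np.exp(-((neighbours - center) ** 2) / (2 * sigma ** 2))
    consistency = sim.mean()
    if consistency < t_low:    # extreme inconsistency: edge, halt diffusion
        return 0.0
    if consistency < t_high:   # moderate inconsistency: noise, diffuse fully
        return 1.0
    return 0.5                 # consistent region: gentle smoothing

flat = np.full((3, 3), 100.0)            # homogeneous region
edge = flat.copy(); edge[1, 1] = 200.0   # center differs from all neighbours
```

In a full denoiser this factor would multiply the diffusion term of the PDE at each pixel and iteration.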
In the clinic, the density surrounding breast abnormalities is an important cue for radiologists to distinguish between benign and malignant abnormalities on mammograms. It may also be an important feature for computer-aided diagnosis (CAD) systems. The purpose of our work is to analyze the density distribution surrounding benign and malignant masses.
The cases used in this study are selected from the Digital Database for Screening Mammography (DDSM) provided by
the University of South Florida. For each case, the mass boundaries marked by experienced radiologists are used and 30
3-pixel-wide bands, one outside another, surrounding each mass are considered. A few density features including the
average gray level and the distribution skewness of the gray levels on every surrounding band were calculated. For every
feature in each corresponding band, average values were calculated for 10 benign cases and 10 malignant cases,
respectively. The preliminary analysis results show that the intensities surrounding benign masses tend to be higher than those surrounding malignant masses, and that the standard deviation of the intensities surrounding benign masses tends to be larger than that surrounding malignant masses. A similar analysis was also carried out with mass boundaries automatically identified by computer, and the results corroborate the analysis with the boundaries marked by radiologists.
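The per-band density features described above (average gray level and skewness) can be sketched as follows (an illustrative implementation; the extraction of the 3-pixel-wide bands themselves is omitted):

```python
import numpy as np

def band_features(band_pixels):
    """Average gray level and skewness of the intensities in one
    surrounding band (sketch; skewness is computed as the standardized
    third moment)."""
    x = np.asarray(band_pixels, dtype=float)
    mean = x.mean()
    std = x.std()
    skew = 0.0 if std == 0 else float(np.mean(((x - mean) / std) ** 3))
    return float(mean), skew

# A band with a few bright outliers is right-skewed (positive skewness).
m, s = band_features([10, 10, 10, 40])
```

Averaging these two values over the 30 bands of each case, separately for the benign and malignant groups, yields the per-band profiles compared in the study.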
A method for computer-aided detection (CAD) of mammographic masses is proposed and a prototype CAD system is
presented. The method is based on content-based image retrieval (CBIR). A mammogram database containing 2000
mammographic regions is built in our prototype CBIR-CAD system. Every region of interest (ROI) in the database has known pathology. Specifically, there are 583 ROIs depicting biopsy-proven masses, and the remaining 1417 ROIs are normal. Whenever a suspicious ROI is detected in a mammogram by a radiologist, it can be submitted as a query to this CBIR-CAD system. As the query result, a series of similar ROI images, together with their known pathology, is retrieved from the database and displayed on the screen in descending order of similarity to the query ROI, helping the radiologist make the diagnostic decision. Furthermore, our CBIR-CAD system outputs a decision index (DI) that quantitatively indicates the probability that the query ROI contains a mass; the DI is calculated from the query matches. In the querying process, 24 features are extracted from each ROI to form a 24-dimensional feature vector, and the Euclidean distance in this 24-dimensional feature space is used to measure the similarity between ROIs. The prototype CBIR-CAD system is evaluated using the leave-one-out sampling scheme. The experimental results show that the system achieves a receiver operating characteristic (ROC) area index Az = 0.84 for the detection of mammographic masses, which is better than the best results achieved by other known mass CAD systems.
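The query step, ranking database ROIs by Euclidean distance in the feature space, can be sketched as follows (a toy illustration using 2-D vectors in place of the 24-dimensional feature vectors):

```python
import numpy as np

def retrieve_similar(query_vec, db_vecs, k=5):
    """Rank database ROIs by Euclidean distance to the query feature
    vector and return the indices of the k nearest (sketch of the CBIR
    query step; in the system each vector has 24 dimensions)."""
    dists = np.linalg.norm(db_vecs - query_vec, axis=1)
    return np.argsort(dists)[:k]

# Toy database of three 2-D feature vectors standing in for 24-D ones.
db = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
nearest = retrieve_similar(np.array([0.9, 1.1]), db, k=2)  # indices 1, then 0
```

The known pathology of the retrieved neighbors is what drives the decision index: a query whose nearest matches are mostly biopsy-proven masses receives a high DI.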
We applied a new texture segmentation algorithm to improve the segmentation of boundary areas in liver needle biopsy images, taken under a microscope, for the automatic assessment of liver fibrosis severity. In our preliminary experiments, it was difficult to obtain satisfactory segmentation results in texture boundary areas with some existing texture segmentation algorithms. The proposed algorithm consists of three steps. The first step is to apply the K-view-datagram segmentation method to the image. The second step is to find the boundary set, defined as the set of all pixels for which more than half of the neighboring pixels are classified by the K-view-datagram method into clusters other than the pixel's own. The third step is to apply a modified K-view-template method with a small scanning window to the boundary set to refine the segmentation. The algorithm was applied to real liver needle biopsy images provided by hospitals in Wuhan, China. Initial experimental results show that this new segmentation algorithm gives
high segmentation accuracy and classifies the boundary areas better than the existing algorithms. It is a useful tool for
automatic assessment of liver fibrosis severity.
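The boundary-set construction in the second step can be sketched as follows (a minimal illustration; image borders are skipped for simplicity):

```python
import numpy as np

def boundary_set(labels):
    """Step 2: collect the pixels for which more than half of the 8
    neighbours carry a different class label than the pixel itself
    (sketch; border pixels are skipped)."""
    h, w = labels.shape
    boundary = np.zeros((h, w), dtype=bool)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            nb = np.delete(labels[i-1:i+2, j-1:j+2].ravel(), 4)  # 8 neighbours
            boundary[i, j] = np.count_nonzero(nb != labels[i, j]) > 4
    return boundary

# An isolated pixel labelled differently from all 8 neighbours is flagged.
lab = np.zeros((5, 5), dtype=int)
lab[2, 2] = 1
b = boundary_set(lab)
```

Only the pixels in this set are re-examined by the modified K-view-template pass, which keeps the refinement step cheap.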
Medical images such as MRI images normally have smooth edges and rounded corners, but existing general edge-preserving smoothing algorithms often produce coarse edges and sharp corners. A new image smoothing algorithm is proposed to address this problem, based on a filter that takes a weighted average of the 21 mean pixel values of 21 neighborhood subregions within a 3x3 square around the center pixel. Subregions whose sharp corner falls on the center point being smoothed are excluded from selection, so that the preserved edges remain smooth and round. During the smoothing process, each subregion is assigned a weight according to its homogeneity, evaluated by the variance of its pixel values, and the weighted average of the subregion means is assigned to the center pixel being smoothed. More homogeneous neighborhoods thus have more influence on the center pixel, so edges are well preserved; contributions from all neighborhoods are still taken into account, especially when their homogeneity is roughly equal, so that homogeneous areas become smoother. An evaluation of the algorithm on simulated MRI images was carried out. Experimental results show that the new algorithm smooths MRI images better, while preserving edges better, than existing smoothing algorithms.
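The weighted averaging of subregion means can be sketched as follows (a simplified illustration: the paper's 21 specific subregions and corner-avoidance rule are not reproduced; subregions are passed in as boolean masks):

```python
import numpy as np

def smooth_center(window, subregions, eps=1e-6):
    """Replace the center pixel with a weighted average of subregion
    means, weighting more homogeneous (lower-variance) subregions more
    heavily (simplified sketch)."""
    means, weights = [], []
    for mask in subregions:
        vals = window[mask]
        means.append(vals.mean())
        weights.append(1.0 / (vals.var() + eps))  # homogeneous -> heavy weight
    weights = np.asarray(weights)
    return float(np.dot(weights / weights.sum(), means))

# Toy example: a vertical edge; the homogeneous left subregion dominates,
# so the smoothed center value stays close to the left side's intensity.
w = np.array([[10., 10., 90.],
              [10., 10., 90.],
              [10., 10., 90.]])
left = np.zeros((3, 3), dtype=bool); left[:, :2] = True    # left two columns
right = np.zeros((3, 3), dtype=bool); right[:, 1:] = True  # spans the edge
val = smooth_center(w, [left, right])
```

Because the subregion spanning the edge has large variance, its weight is tiny, which is how the filter avoids blurring across edges while still averaging freely inside homogeneous areas.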