In this paper, an algorithm for the automatic segmentation of bone structures in 3D CT images is presented. Automatic segmentation of bone structures is of special interest to radiologists and surgeons who analyze bone diseases or plan surgical interventions. The task is complicated because bone intensities usually overlap with those of the surrounding tissues, mainly due to bone composition and to the presence of diseases such as osteoarthritis and osteoporosis. Moreover, segmentation of bone structures is very time-consuming because of their three-dimensional nature. This segmentation is usually performed manually or with simple techniques such as thresholding, which yield poor results. In this paper, gray-level information and 3D statistical information are combined and used as the input to a continuous max-flow algorithm. Twenty CT images were tested, and different coefficients were computed to assess the performance of our implementation. Dice and sensitivity values above 0.91 and 0.97, respectively, were obtained. A comparison with level-set and thresholding techniques was carried out, and our results outperformed both in terms of accuracy.
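The Dice and sensitivity figures quoted above can be reproduced from a pair of binary masks (automatic segmentation vs. ground truth). The following NumPy sketch is illustrative only; the function and variable names are ours, not the paper's:

```python
import numpy as np

def dice_and_sensitivity(seg, gt):
    """Compute the Dice coefficient and sensitivity of a binary
    segmentation `seg` against a ground-truth mask `gt`."""
    seg = seg.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(seg, gt).sum()        # true positives
    fn = np.logical_and(~seg, gt).sum()       # false negatives
    dice = 2 * tp / (seg.sum() + gt.sum())    # overlap measure in [0, 1]
    sensitivity = tp / (tp + fn)              # true positive rate
    return dice, sensitivity

# Toy example with two overlapping masks
seg = np.array([[1, 1, 0], [0, 1, 0]])
gt  = np.array([[1, 1, 0], [0, 0, 0]])
d, s = dice_and_sensitivity(seg, gt)   # d = 0.8, s = 1.0
```

The same two statistics generalize directly to 3D volumes, since only voxel counts enter the formulas.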
Diagnosis of neuromuscular diseases is based on the subjective visual assessment of patient biopsies by a specialist pathologist. A system for the objective analysis and classification of muscular dystrophies and neurogenic atrophies from fluorescence-microscopy images of muscle biopsies is presented. The procedure starts with an accurate segmentation of the muscle fibers using mathematical morphology and a watershed transform. Feature extraction is carried out in two parts: 24 features that pathologists take into account to diagnose the diseases, and 58 structural features invisible to the human eye, obtained by modeling the biopsy as a graph in which each fiber is a node and two nodes are connected if the corresponding fibers are adjacent. Feature selection using the sequential forward selection and sequential backward selection methods, classification using a Fuzzy ARTMAP neural network, and a study of severity grading are performed on these two feature sets. A database of 91 images was used: 71 for training and 20 for testing. A classification error of 0% was obtained. It is concluded that adding features undetectable by human visual inspection improves the categorization of atrophic patterns.
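The graph construction underlying the 58 structural features (fibers as nodes, edges between touching fibers) can be sketched directly from a labeled segmentation. This is a minimal illustrative version under our own assumptions, not the paper's implementation:

```python
import numpy as np

def fiber_adjacency(labels):
    """Build an adjacency set from a labeled fiber image: two labels
    are connected if their regions touch (4-neighborhood); 0 = background."""
    edges = set()
    # Compare each pixel with its right neighbor, then its lower neighbor
    for shifted, orig in ((labels[:, 1:], labels[:, :-1]),
                          (labels[1:, :], labels[:-1, :])):
        pairs = np.stack([orig.ravel(), shifted.ravel()], axis=1)
        for a, b in pairs:
            if a != b and a != 0 and b != 0:
                edges.add((int(min(a, b)), int(max(a, b))))
    return edges

# Toy labeled image: fibers 1 and 2 touch, fiber 3 is isolated by background
labels = np.array([[1, 1, 2],
                   [1, 1, 2],
                   [0, 0, 0],
                   [3, 3, 0]])
adj = fiber_adjacency(labels)   # {(1, 2)}
```

Graph-level descriptors (node degree statistics, number of neighbors per fiber, etc.) can then be computed on this edge set.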
KEYWORDS: RGB color model, Color image processing, Image segmentation, Image processing, Image filtering, Image enhancement, Digital filtering, 3D image processing, Image quality, Medical image processing
This full-color book begins with a detailed study of the nature of color images (including natural, multispectral, and pseudocolor images) and covers acquisition, quality control, and display of color images, as well as issues of noise and artifacts in color images and segmentation for the detection of regions of interest or objects.
The book is primarily written with the (post-)graduate student in mind, but practicing engineers, researchers, computer scientists, information technologists, medical physicists, and data-processing specialists will also benefit from its depth of information. Those working in diverse areas such as DIP, computer vision, pattern recognition, telecommunications, seismic and geophysical applications, biomedical applications, hospital information systems, remote sensing, mapping, and geomatics may find this book useful in their quest to learn advanced techniques for the analysis of color or multichannel images.
Certain skin diseases are chronic, inflammatory, and without cure. However, there are many treatment options that can clear them for a period of time. Measuring their severity and assessing their extent is fundamental to determining the efficacy of the treatment under test. Two of the most important severity-assessment parameters are erythema (redness) and scaliness, which physicians classify into several grades by visual inspection. In this paper, a color image segmentation and classification algorithm is developed to assess the erythema and scaliness of dermatological lesions. The database consists of color digital photographs taken under an acquisition protocol. The difference between the green and blue bands of the images in RGB color space shows two well-separated modes (healthy skin and lesion). Otsu's method is applied to this difference image in order to isolate the lesion. After the skin disease is segmented, color and texture features are calculated and used as the inputs to a Fuzzy-ARTMAP neural network, which classifies them into the five grades of erythema and the five grades of scaliness. The method has been tested on 31 images, with a success rate of 83.87% for erythema classification and 77.42% for scaliness.
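The core of the segmentation step (green-minus-blue difference thresholded by Otsu's method) can be sketched as follows. This is a simplified NumPy reimplementation under our own assumptions, not the authors' code:

```python
import numpy as np

def otsu_threshold(values, bins=64):
    """Otsu's method: pick the threshold that maximizes the
    between-class variance of the histogram of `values`."""
    hist, edges = np.histogram(values, bins=bins)
    hist = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for k in range(1, bins):
        w0, w1 = hist[:k].sum(), hist[k:].sum()   # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (hist[:k] * centers[:k]).sum() / w0
        mu1 = (hist[k:] * centers[k:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, centers[k - 1]
    return best_t

# Synthetic RGB image: "lesion" pixels have a larger G-B gap than healthy skin
rgb = np.zeros((4, 4, 3))
rgb[:2, :, 1], rgb[:2, :, 2] = 0.9, 0.2   # lesion rows:  G - B = 0.7
rgb[2:, :, 1], rgb[2:, :, 2] = 0.5, 0.4   # healthy rows: G - B = 0.1
diff = rgb[..., 1] - rgb[..., 2]           # green band minus blue band
lesion_mask = diff > otsu_threshold(diff.ravel())
```

With two well-separated modes, as the abstract describes, the threshold falls between them and the lesion is isolated in one pass.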
A two-stage method for detecting microcalcifications in mammograms is presented. In the first stage, candidate microcalcifications are determined. For this purpose, a 2-D linear prediction error filter is applied, and for those pixels where the prediction error is larger than a threshold, a statistical measure is calculated to decide whether they are candidates for microcalcifications. In the second stage, a feature vector is derived for each candidate, and after a classification step using a support vector machine, the final detection is performed. The algorithm is tested with 40 mammographic images, from Screen Test: The Alberta Program for the Early Detection of Breast Cancer, with 50-µm resolution, and the results are evaluated using a free-response receiver operating characteristic curve. Two different analyses are performed: an individual microcalcification detection analysis and a cluster analysis. In the analysis of individual microcalcifications, detection sensitivity values of 0.75 and 0.81 are obtained at 2.6 and 6.2 false positives per image on average, respectively. The best performance is characterized by a sensitivity of 0.89, a specificity of 0.99, and a positive predictive value of 0.79. In the cluster analysis, a sensitivity value of 0.97 is obtained at 1.77 false positives per image, and a value of 0.90 is achieved at 0.94 false positives per image.
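A minimal illustration of the first-stage idea, flagging pixels with a large 2-D prediction error, follows. The simple causal predictor here (a three-neighbor mean) is our stand-in for the paper's linear prediction error filter:

```python
import numpy as np

def prediction_error(img):
    """Toy causal 2-D linear predictor: each pixel is predicted as the
    mean of its left, upper, and upper-left neighbors, so isolated
    bright spots (like microcalcifications) produce large errors."""
    pad = np.pad(img.astype(float), ((1, 0), (1, 0)), mode='edge')
    pred = (pad[1:, :-1]      # left neighbor
            + pad[:-1, 1:]    # upper neighbor
            + pad[:-1, :-1]   # upper-left neighbor
            ) / 3
    return img - pred

# Flat background with one bright pixel mimicking a microcalcification
img = np.full((5, 5), 10.0)
img[2, 2] = 50.0
err = prediction_error(img)
candidates = np.argwhere(err > 20)   # threshold on the prediction error
```

Smooth background is predicted well (error near zero), while the bright outlier is the only pixel exceeding the threshold and so becomes a candidate.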
A new method for color image segmentation is proposed. It is based on a novel region-growing technique with a growth tolerance parameter that changes with step size, which depends on the variance of the currently grown region. Contrast is introduced to determine which value of the tolerance parameter is taken, choosing the one that provides the region with the highest contrast relative to the background. Color and texture information are extracted from the image through a novel idea: the construction of a color distance image and a texture energy image. The color distance image is formed by calculating the CIEDE2000 distance in the L*a*b* color space; the texture energy image is extracted from some statistical moments. A novel texture-controlled multistep region-growing process then performs the segmentation. One advantage of the method is that it is not designed for a particular kind of image. The method is tested on 80 natural color images from the Corel photo stock collection with excellent results. Numerical evidence of the quality of these results is provided by comparison with the manual segmentations of five experts and with another color and texture segmentation algorithm.
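The color distance image can be sketched as the per-pixel distance to a reference color in L*a*b* space. For brevity, this sketch uses plain Euclidean ΔE*ab rather than the CIEDE2000 formula the paper uses, and all names are illustrative:

```python
import numpy as np

def color_distance_image(lab_img, ref_lab):
    """Distance of every pixel to a reference color in L*a*b* space.
    Euclidean ΔE*ab is used here as a simplified stand-in for CIEDE2000."""
    return np.sqrt(((lab_img - ref_lab) ** 2).sum(axis=-1))

# Two-region L*a*b* image: left half equals the reference, right half is far
lab = np.zeros((2, 4, 3))
lab[:, :2] = [50.0, 10.0, 10.0]
lab[:, 2:] = [80.0, -20.0, 30.0]
dist = color_distance_image(lab, np.array([50.0, 10.0, 10.0]))
```

The resulting scalar image (near zero inside the reference region, large elsewhere) is what the region-growing stage can then operate on.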
In this paper, a burn color image segmentation and classification system is proposed. The aim of the system is to separate burn wounds from healthy skin, and to distinguish among the different types of burns (burn depths). Digital color photographs are used as inputs to the system. The system is based on color and texture information, since these are the characteristics observed by physicians in order to form a diagnosis. A perceptually uniform color space (L*u*v*) was used, since Euclidean distances calculated in this space correspond to perceptual color differences. After the burn is segmented, a set of color and texture features is calculated that serves as the input to a Fuzzy-ARTMAP neural network. The neural network classifies burns into three types of burn depths: superficial dermal, deep dermal, and full thickness. Clinical effectiveness of the method was demonstrated on 62 clinical burn wound images, yielding an average classification success rate of 82%.
The purpose of this work is to improve a previous method developed by the authors for classifying burn wounds by depth. The inputs of the system are color and texture information, as these are the characteristics observed by physicians in order to give a diagnosis. Our previous work consisted of segmenting the burn wound from the rest of the image and classifying the burn by depth; in this paper we focus on the classification problem only. We had already proposed a Fuzzy-ARTMAP neural network (NN), but newer, powerful classification tools such as Support Vector Machines (SVM) can be exploited. We apply a five-fold cross-validation scheme to divide the database into training and validation sets. Then, we apply a feature selection method for each classifier, which gives the set of features yielding the smallest classification error for that classifier. The features used for classification are first-order statistical parameters extracted from the L*, u*, and v* color components of the image. The feature selection algorithms used are the Sequential Forward Selection (SFS) and Sequential Backward Selection (SBS) methods. As the data faced here are not linearly separable, the SVM was trained with several different kernels. The validation process shows that the SVM with a Gaussian kernel of variance 1 outperforms the other classifiers, yielding a classification error rate of 0.7%, whereas the Fuzzy-ARTMAP NN attained 1.6%.
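The sequential forward selection step can be sketched with a toy wrapper criterion. The nearest-class-mean score below is our placeholder for the cross-validated classifier error actually used; names and data are illustrative:

```python
import numpy as np

def sfs(X, y, score, n_feats):
    """Sequential Forward Selection: greedily add the feature that
    most improves the score of the current subset."""
    selected = []
    remaining = list(range(X.shape[1]))
    while len(selected) < n_feats:
        best_f = max(remaining, key=lambda f: score(X[:, selected + [f]], y))
        selected.append(best_f)
        remaining.remove(best_f)
    return selected

def nearest_mean_accuracy(Xs, y):
    """Toy wrapper score: training accuracy of a nearest-class-mean rule."""
    means = {c: Xs[y == c].mean(axis=0) for c in np.unique(y)}
    pred = [min(means, key=lambda c: np.linalg.norm(x - means[c])) for x in Xs]
    return np.mean(np.array(pred) == y)

# Feature 1 separates the classes; features 0 and 2 are noise
X = np.array([[0.0, 0.0, 0.5], [1.0, 0.1, 0.4],
              [0.0, 5.0, 0.6], [1.0, 5.1, 0.5]])
y = np.array([0, 0, 1, 1])
chosen = sfs(X, y, nearest_mean_accuracy, 1)   # picks the informative feature
```

SBS works analogously but starts from the full feature set and greedily removes the least useful feature at each step.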
Diabetic retinopathy is a common disease among diabetic patients that can cause blindness. The number of microaneurysms in the eye fundus indicates the evolution stage of the illness. In this paper, an algorithm to automatically detect microaneurysms in retinal angiograms is proposed. The method has three main steps: preprocessing, seed detection, and a subsequent region-growing algorithm. The preprocessing step consists of Gaussian high-pass filtering followed by top-hat filtering; its aim is to eliminate the vascular tree while enhancing microaneurysms. In the second step, 2-D adaptive filtering is performed, and pixels where the prediction error is high are taken as seeds. After region growing, only regions that satisfy certain validation criteria (intensity, contrast, and shape) are considered microaneurysms. The intensity and contrast criteria are typical of region-growing algorithms. The shape criterion exploits the fact that microaneurysms can be modelled as 2D Gaussian functions: each grown region is passed through a bank of nine correlators (a 2D Gaussian function and eight linear segments oriented in eight different directions), and a region is accepted as a microaneurysm only when the maximum correlation peak is obtained at the Gaussian correlator. The algorithm was tested on 11 images containing 711 microaneurysms in total, obtaining a sensitivity of 90.72% for a positive predictive value of 82.35%.
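The top-hat part of the preprocessing (keeping small bright blobs while suppressing larger structures) can be sketched with a grayscale opening. This is an illustrative NumPy version with a square structuring element, not the authors' exact filter:

```python
import numpy as np

def tophat(img, size=3):
    """White top-hat: image minus its morphological opening with a
    size x size square. Blobs smaller than the element survive;
    larger structures (like vessels) are flattened away."""
    pad = size // 2
    p = np.pad(img, pad, mode='edge')
    # Grayscale erosion (window minimum) ...
    win = np.lib.stride_tricks.sliding_window_view(p, (size, size))
    eroded = win.min(axis=(2, 3))
    # ... followed by dilation (window maximum) = opening
    p2 = np.pad(eroded, pad, mode='edge')
    win2 = np.lib.stride_tricks.sliding_window_view(p2, (size, size))
    opened = win2.max(axis=(2, 3))
    return img - opened

# A single bright 1-pixel blob on a flat background is fully recovered
img = np.full((7, 7), 10.0)
img[3, 3] = 30.0
th = tophat(img)   # zero everywhere except the blob
```

In the fundus images, choosing a structuring element slightly larger than a microaneurysm but narrower than the vessels is what removes the vascular tree while preserving the candidate blobs.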
In this paper, a color image segmentation algorithm for burn wound images is proposed. It takes into account both color and texture information to perform the segmentation. We used the perceptually uniform CIE L*u*v* color space. Texture information is considered by extracting a small trimming (mask) from the region to be segmented. This mask is then slid along the image, and a transformed image is calculated in which each pixel is the sum of the Euclidean distances, in the L*u*v* color coordinates, between all the color values in the mask and the pixels under it. The transformed image is then thresholded to obtain the segmented image, with the threshold determined automatically by a modification of Otsu's method. We have tested the algorithm on 30 images, obtaining very good results in most of them.
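The sliding-trimming transform described above can be sketched exhaustively for a small image. This is an unoptimized illustrative version under our own assumptions; names and sizes are ours:

```python
import numpy as np

def texture_distance_image(img, mask):
    """For each mask position, sum the Euclidean color distances between
    every mask pixel and the image pixel under it. Low values mean the
    neighborhood matches the sampled region in color and texture."""
    mh, mw, _ = mask.shape
    h, w, _ = img.shape
    out = np.zeros((h - mh + 1, w - mw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = img[i:i + mh, j:j + mw]
            out[i, j] = np.sqrt(((patch - mask) ** 2).sum(-1)).sum()
    return out

# L*u*v*-like image whose left half matches the mask color exactly
img = np.zeros((3, 6, 3))
img[:, :3] = [50.0, 5.0, 5.0]    # same color as the trimming
img[:, 3:] = [70.0, -5.0, 20.0]
mask = np.full((3, 3, 3), [50.0, 5.0, 5.0])
t = texture_distance_image(img, mask)
```

Thresholding this transformed image (e.g. with Otsu's method, as in the paper) then separates positions that match the trimming from those that do not.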
In this paper, a burn color image segmentation and classification algorithm is proposed. The aim of the algorithm is to separate burn wounds from healthy skin, and to distinguish the different types of burns (burn depths) from one another. We use digital color photographs. The system is based on color and texture information, as these are the characteristics observed by physicians in order to give a diagnosis. We use a perceptually uniform color space (L*u*v*), since Euclidean distances calculated in this space correspond to perceptual color differences. After the burn is segmented, color and texture features are calculated and used as the inputs to a Fuzzy-ARTMAP neural network, which classifies them into three types of burn depths: superficial dermal, deep dermal, and full thickness. We obtain an average classification success rate of 88.89%.