This paper presents a novel approach to computer-aided skin lesion segmentation of dermoscopic images. We
apply spatial and color features to model the lesion growth pattern. The decomposition is done by
repeatedly clustering pixels into dark and light sub-clusters. A novel tree structure based representation of the
lesion growth pattern is constructed by matching every pixel sub-cluster with a node in the tree structure. This
model provides a powerful framework to extract features and to train models for lesion segmentation. The model
employed allows features to be extracted at multiple layers of the tree structure, enabling a more descriptive
feature set. Additionally, there is no need for preprocessing such as color calibration or artifact disocclusion.
Preliminary features (the mean over the RGB color channels) are extracted for every pixel over four layers of the growth
pattern model and are combined with radial distance as a spatial feature to segment the lesion. The
resulting per-pixel feature vectors of length 13 (three color channels over four layers, plus radial distance) are used in a supervised learning model for estimating parameters
and segmenting the lesion. A dataset containing 116 challenging images from dermoscopic atlases is used to
validate the method via a 10-fold cross validation procedure. Results of segmentation are compared with six
other skin lesion segmentation methods. Our method outperforms five of these methods and performs competitively
with the remaining one. We achieve a per-pixel sensitivity/specificity of 0.890 and 0.901, respectively.
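The layered dark/light decomposition described above can be sketched as a recursive two-cluster split on pixel brightness, with the mean RGB of each pixel's sub-cluster recorded at every tree layer and radial distance appended as the spatial feature. The function names, the use of luminance as the clustering variable, and the simple 1-D 2-means are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def split_dark_light(pixels, idx):
    """Split the pixel indices in idx into dark/light sub-clusters
    using a simple 1-D 2-means on brightness (mean of RGB)."""
    lum = pixels[idx].mean(axis=1)
    c_dark, c_light = lum.min(), lum.max()      # initial centroids
    for _ in range(10):                         # fixed-iteration 2-means
        to_dark = np.abs(lum - c_dark) <= np.abs(lum - c_light)
        dark, light = idx[to_dark], idx[~to_dark]
        if len(dark):
            c_dark = lum[to_dark].mean()
        if len(light):
            c_light = lum[~to_dark].mean()
    return dark, light

def growth_tree_features(image, n_layers=4):
    """Per-pixel feature vector: mean RGB of the pixel's sub-cluster at
    each of n_layers tree layers (12 values for n_layers=4), plus the
    pixel's radial distance from the image centre (13th value)."""
    h, w, _ = image.shape
    pixels = image.reshape(-1, 3).astype(float)
    feats = np.zeros((h * w, 3 * n_layers + 1))
    clusters = [np.arange(h * w)]               # layer 0: root = all pixels
    for layer in range(n_layers):
        next_clusters = []
        for idx in clusters:
            # every pixel in this sub-cluster gets the cluster's mean RGB
            feats[idx, 3 * layer:3 * layer + 3] = pixels[idx].mean(axis=0)
            if len(idx) >= 2:
                dark, light = split_dark_light(pixels, idx)
                next_clusters += [c for c in (dark, light) if len(c)]
            else:
                next_clusters.append(idx)
        clusters = next_clusters
    rows, cols = np.divmod(np.arange(h * w), w)
    feats[:, -1] = np.hypot(rows - h / 2, cols - w / 2)  # radial distance
    return feats.reshape(h, w, -1)
```

Each pixel's sub-cluster at layer k is a node at depth k of the tree, so the 13-value vector summarizes the pixel's context at four scales plus its position.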
Inpainting, a technique originally used to restore film and photographs, is used to disocclude hair from dermoscopic images of skin lesions. The technique is compared to the conventional software DullRazor, which uses linear interpolation to perform disocclusion. The comparison was performed by simulating occluding hair on a dermoscopic image, applying both DullRazor and inpainting, and calculating the induced error. Inpainting is found to perform approximately 33% better than DullRazor's linear interpolation and to be more stable under heavy occlusion. The results are also compared to published results from two other alternatives: auto-regressive (AR) model signal extrapolation and band-limited (BL) signal interpolation.
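The evaluation protocol (simulate occlusion, disocclude, measure induced error) can be illustrated with a minimal sketch. Here per-row linear interpolation stands in for DullRazor-style interpolation, and the synthetic gradient "lesion" and diagonal "hair" are assumptions for demonstration only:

```python
import numpy as np

def fill_linear(image, mask):
    """Fill occluded (masked) pixels by per-row linear interpolation,
    a stand-in for DullRazor-style interpolation-based hair removal."""
    out = image.astype(float).copy()
    cols = np.arange(image.shape[1])
    for r in range(image.shape[0]):
        bad = mask[r]
        if bad.any() and (~bad).any():
            out[r, bad] = np.interp(cols[bad], cols[~bad], out[r, ~bad])
    return out

def induced_rmse(clean, mask, filled):
    """Disocclusion error, measured only on the occluded pixels."""
    diff = filled[mask] - clean.astype(float)[mask]
    return float(np.sqrt(np.mean(diff ** 2)))

# Synthetic smooth "lesion" (a gradient) with a thin diagonal "hair".
clean = np.add.outer(np.arange(64.0), np.arange(64.0))
mask = np.zeros(clean.shape, dtype=bool)
mask[np.arange(50), np.arange(50) + 5] = True
occluded = clean.copy()
occluded[mask] = 0.0                  # the hair overwrites the true pixels
rmse = induced_rmse(clean, mask, fill_linear(occluded, mask))
```

An inpainting counterpart (e.g. `cv2.inpaint` in OpenCV) would be scored the same way, with the induced error compared against the interpolation baseline.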
Texture is known to predict atypicality in pigmented skin lesions. This paper describes an experiment that was
conducted to determine 1) if this textural information is present in the center of skin lesions, and 2) how color
affects the perception of this information. Images of pigmented skin lesions from three categories were shown
to subjects in such a way that only textural information could be perceived; other factors known to predict
atypicality were removed or held constant. These images were shown in both color and grayscale. Each subject
assigned a score of atypicality to each image.
The experiment was conducted on 5 subjects of varying backgrounds, including one expert. Each subject's
accuracy under each modality was measured by calculating the volume under a 3-way ROC surface. The
modalities were compared using the Dorfman-Berbaum-Metz (DBM) method of ROC analysis, giving a p-value
of 0.8611. Therefore, the null hypothesis that there is no difference in predictive power between the two modalities
cannot be rejected. In addition, a two one-sided tests of equivalence (TOST) procedure was performed, giving a p-value pair of
< 0.01, which is strong evidence that the textural information is independent of color.
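As a minimal sketch, the volume under a 3-way ROC surface can be estimated nonparametrically as the fraction of score triples (one from each ordered category) that the reader ranks correctly; the tie-handling convention below is an assumption:

```python
import itertools

def vus_3way(low, mid, high):
    """Nonparametric volume under the 3-class ROC surface: the fraction
    of triples (one score per ordered category) ranked correctly
    (low < mid < high); ties contribute 0.5 per equality.
    Chance level for continuous scores is 1/6."""
    correct, total = 0.0, 0
    for a, b, c in itertools.product(low, mid, high):
        correct += ((a < b) + 0.5 * (a == b)) * ((b < c) + 0.5 * (b == c))
        total += 1
    return correct / total
```

A reader whose atypicality scores perfectly separate the three lesion categories attains a volume of 1.0, while a random reader hovers near 1/6, which is what the comparison against random readers below measures.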
Additionally, the subjects' accuracies were compared to a set of random readers using the DBM and TOST
methods. This was done for accuracies under the color modality, the grayscale modality and both modalities
simultaneously. The results (all p-values < 0.001) confirm the existence of textural information predictive of
atypia in the center of pigmented skin lesions.