Detecting and classifying global dermoscopic patterns are crucial steps for distinguishing melanocytic lesions from
non-melanocytic ones. An important stage of melanoma diagnosis uses pattern analysis methods such as the 7-point
checklist and the Menzies method. In this paper, we present a novel approach to the texture analysis and
classification of 5 classes of global lesion patterns (reticular, globular, cobblestone, homogeneous, and parallel
pattern) in dermoscopic images. Our statistical approach models the texture by the joint probability distribution
of filter responses using a comprehensive set of state-of-the-art filter banks. This distribution is represented
by the frequency histogram of filter response cluster centers called textons. We have also examined two other
methods, the Joint Distribution of Intensities (JDI) and the Convolutional Restricted Boltzmann Machine (CRBM), to
learn pattern-specific features to be used as textons. The classification performance is compared over the
Leung-Malik (LM) filters, the Root Filter Set (RFS), the Maximum Response (MR8) filters, the Schmid and Laws filters, and
our proposed filter set, as well as CRBM and JDI. We analyzed 375 images spanning the 5 pattern classes. Our
experiments show that the joint distribution of color (JDC) in the L*a*b* color space outperforms the other
color spaces with a correct classification rate of 86.8%.
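As a minimal sketch of the texton pipeline described above, the following Python fragment clusters pooled filter responses into textons with k-means and represents each image as its texton frequency histogram. It assumes grayscale images as NumPy arrays and uses simple multi-scale Gaussian-derivative filters as a stand-in for the LM/RFS/MR8/Schmid/Laws banks; the function names and filter choices are illustrative, not the paper's exact implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.cluster import KMeans

def filter_responses(image, sigmas=(1.0, 2.0, 4.0)):
    """Stack multi-scale Gaussian and derivative responses per pixel.

    A simplified stand-in for the filter banks compared in the paper:
    each pixel becomes a vector of filter responses.
    """
    responses = []
    for s in sigmas:
        responses.append(gaussian_filter(image, s))                # smoothing
        responses.append(gaussian_filter(image, s, order=(0, 1)))  # x-derivative
        responses.append(gaussian_filter(image, s, order=(1, 0)))  # y-derivative
    return np.stack(responses, axis=-1).reshape(-1, len(responses))

def learn_textons(training_images, n_textons=40):
    """Cluster pooled filter responses; the cluster centers are the textons."""
    pooled = np.vstack([filter_responses(im) for im in training_images])
    return KMeans(n_clusters=n_textons, n_init=10).fit(pooled)

def texton_histogram(image, kmeans):
    """Represent an image by the frequency histogram of its nearest textons."""
    labels = kmeans.predict(filter_responses(image))
    hist = np.bincount(labels, minlength=kmeans.n_clusters).astype(float)
    return hist / hist.sum()
```

The resulting normalized histograms would then be passed to a classifier, for example nearest-neighbor matching with a chi-squared distance, as is common in texton-based texture classification.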
Detecting the pigment network is a crucial step in melanoma diagnosis. In this paper, we present a novel graph-based
pigment network detection method that can find and visualize round structures belonging to the pigment
network. After finding sharp changes of the luminance image by an edge detection function, the resulting binary
image is converted to a graph, and then all cyclic sub-graphs are detected. These cycles represent meshes that
belong to the pigment network. Then, we create a new graph of the cyclic structures based on their distance.
According to the density ratio of this new graph, the image is classified as "Absent" or
"Present", where "Present" means that a pigment network is detected in the skin lesion. Using this approach, we
achieved an accuracy of 92.6% on 500 unseen images.
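The edge-map-to-graph and cycle-detection steps can be sketched as follows, assuming a Canny detector stands in for the paper's edge-detection function and NetworkX supplies the graph machinery. The density measure and threshold here are simplified placeholders: the paper builds a second graph over the detected cycles and thresholds its density ratio, which this sketch approximates with a per-pixel cycle density.

```python
import numpy as np
import networkx as nx
from skimage import color, feature

def pigment_network_graph(rgb_image):
    """Build a pixel-adjacency graph from an edge map of the luminance image."""
    lum = color.rgb2gray(rgb_image)
    edges = feature.canny(lum)  # stand-in for the paper's edge-detection function
    g = nx.Graph()
    ys, xs = np.nonzero(edges)
    pixels = set(zip(ys.tolist(), xs.tolist()))
    for y, x in pixels:
        for dy in (-1, 0, 1):   # connect 8-neighboring edge pixels
            for dx in (-1, 0, 1):
                if (dy, dx) != (0, 0) and (y + dy, x + dx) in pixels:
                    g.add_edge((y, x), (y + dy, x + dx))
    return g

def classify_pigment_network(rgb_image, density_thresh=0.05):
    """Detect cyclic sub-graphs (candidate meshes) and threshold their density.

    `density_thresh` is a hypothetical value chosen for illustration only.
    """
    g = pigment_network_graph(rgb_image)
    cycles = nx.cycle_basis(g)  # cyclic sub-graphs = candidate network meshes
    lesion_area = rgb_image.shape[0] * rgb_image.shape[1]
    density = sum(len(c) for c in cycles) / lesion_area
    return "Present" if density > density_thresh else "Absent"
```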
This paper explores a novel approach to interactive user-guided image segmentation, using eyegaze information
as an input. The method includes three steps: 1) eyegaze tracking for providing user input, such as object
and background seed pixel selection; 2) an optimization method for image labeling that is constrained
or affected by user input; and 3) linking the two previous steps via a graphical user interface for displaying the
images and other controls to the user and for providing real-time visual feedback of eyegaze and seed locations,
thus enabling the interactive segmentation procedure. We developed a new graphical user interface supported
by an eyegaze tracking monitor to capture the user's eyegaze movement and fixations (as opposed to traditional
mouse moving and clicking). The user simply looks at different parts of the screen to select which image to
segment, to perform foreground and background seed placement and to set optional segmentation parameters.
There is an eyegaze-controlled "zoom" feature for difficult images containing objects with narrow parts, holes
or weak boundaries. The image is then segmented using the random walker image segmentation method. We
performed a pilot study with 7 subjects who segmented synthetic, natural and real medical images. Our results
show that getting used to the new interface takes only about 5 minutes. Compared with traditional mouse-based
control, the new eyegaze approach provided an 18.6% speed improvement for more than 90% of images with high
object-background contrast. However, for low-contrast and more difficult images, seed placement took longer
because the eyegaze-based "zoom" was needed to relax the required eyegaze accuracy.
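The final labeling step can be illustrated with scikit-image's random walker implementation. The seed-collection logic below is hypothetical, standing in for the fixation points the eyegaze interface would provide; a 2D grayscale image is assumed for simplicity.

```python
import numpy as np
from skimage.segmentation import random_walker

def segment_from_fixations(image, fg_fixations, bg_fixations, beta=130):
    """Run random walker segmentation from eyegaze-derived seeds.

    `fg_fixations` / `bg_fixations` are (row, col) fixation points that the
    interface would collect from the eye tracker. Label 1 marks object seeds,
    label 2 marks background seeds, and 0 leaves a pixel unlabeled.
    """
    labels = np.zeros(image.shape[:2], dtype=np.int32)
    for r, c in fg_fixations:
        labels[r, c] = 1
    for r, c in bg_fixations:
        labels[r, c] = 2
    # Returns an array of the same shape with every pixel assigned label 1 or 2.
    return random_walker(image, labels, beta=beta)
```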