In this paper we propose to use the Wavelet Leader (WL) transformation for studying trabecular bone patterns. Given an input image, its WL transformation is defined as the cross-channel-layer maximum pooling of an underlying wavelet transformation. WL inherits the advantage of the original wavelet transformation in capturing spatial-frequency statistics of texture images, while being more robust against changes in scale and orientation thanks to the maximum pooling strategy. These properties make WL an attractive alternative to the wavelet transformations used for trabecular analysis in previous studies. In particular, after extracting wavelet leader descriptors from a trabecular texture patch, we feed them into two existing statistical texture characterization methods, namely the Gray Level Co-occurrence Matrix (GLCM) and the Gray Level Run Length Matrix (GLRLM). The most discriminative features, the Energy of the GLCM and the Gray Level Non-Uniformity of the GLRLM, are retained to distinguish two populations: osteoporotic patients and control subjects. Receiver Operating Characteristic (ROC) curves are used to measure classification performance. Experimental results on a recently released benchmark dataset show that WL significantly boosts the performance of baseline wavelet transformations, by 5% on average.
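The core operation described above, taking the pointwise maximum over the detail orientations of a wavelet decomposition, can be sketched in a few lines. The snippet below is a simplified illustration, not the authors' implementation: it uses a hand-rolled 2D Haar transform and keeps only the cross-orientation maximum at each scale (full wavelet leaders additionally take a supremum over spatial neighborhoods across finer scales). All function names are hypothetical.

```python
import numpy as np

def haar_level(img):
    """One level of an (unnormalized) 2D Haar transform; assumes even
    height and width."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0            # approximation band
    details = ((a + b - c - d) / 4.0,     # three orientation-selective
               (a - b + c - d) / 4.0,     # detail bands
               (a - b - c + d) / 4.0)
    return ll, details

def wavelet_leaders(img, levels=2):
    """Cross-orientation maximum pooling of absolute wavelet
    coefficients: at each scale, keep the pointwise max over the three
    detail bands."""
    leaders = []
    ll = np.asarray(img, dtype=float)
    for _ in range(levels):
        ll, details = haar_level(ll)
        leaders.append(np.maximum.reduce([np.abs(s) for s in details]))
    return leaders
```

The per-scale leader maps returned here would then be summarized by GLCM and GLRLM statistics as described above.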
Osteoporosis is a common cause of bone fractures among senior citizens. Early diagnosis of osteoporosis requires routine examinations that may be costly for patients. A potential low-cost alternative is to identify senior citizens at high risk of osteoporosis by pre-screening during routine dental examinations. Osteoporosis analysis using dental radiographs therefore serves as a key step in routine dental examination. The aim of this study is to localize landmarks in dental radiographs that are helpful for assessing the evidence of osteoporosis. We identify eight landmarks that are critical in osteoporosis analysis, and our goal is to localize them automatically in a given dental radiographic image. To address challenges such as the large variation of appearance across subjects, we formulate the task as a multi-class classification problem. A hybrid feature pool is used to represent these landmarks, and a random forest is used to fuse the hybrid feature representation for the discriminative classification problem. In the experiments, we also evaluate the performance of each individual feature component and of the hybrid fused feature. Our proposed method achieves an average detection error of 2.9 mm.
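The fusion step can be illustrated concretely: concatenating heterogeneous feature groups into one vector lets a random forest fuse them implicitly, since every tree split may test any component. The sketch below uses scikit-learn with randomly generated stand-in data; the feature group names, sizes, and the eight-class setup mirror the eight landmarks but are otherwise assumptions, not the authors' configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in data: two hypothetical feature groups (e.g. intensity and
# gradient statistics) for 200 candidate patches, eight landmark classes.
n = 200
intensity_feats = rng.normal(size=(n, 8))
gradient_feats = rng.normal(size=(n, 8))
X = np.hstack([intensity_feats, gradient_feats])   # hybrid feature pool
y = rng.integers(0, 8, size=n)                     # one class per landmark

# The forest fuses the heterogeneous features implicitly: each tree
# split may test any component of the concatenated vector.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
probs = clf.predict_proba(X)                       # per-landmark scores
```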
Wide-Area Motion Imagery (WAMI) feature extraction is important for applications such as target tracking, traffic management
and accident discovery. With the increasing amount of WAMI collections and feature extraction from the data,
a scalable framework is needed to handle the large amount of information. Cloud computing is one of the approaches recently applied to large-scale (big data) problems. In this paper, MapReduce in Hadoop is investigated for large-scale feature
extraction tasks for WAMI. Specifically, a large dataset of WAMI images is divided into several splits. Each split has a
small subset of WAMI images. The feature extractions of WAMI images in each split are distributed to slave nodes in the
Hadoop system. Feature extraction of each image is performed individually in the assigned slave node. Finally, the feature
extraction results are sent to the Hadoop Distributed File System (HDFS) to aggregate the feature information over the collected imagery.
Experiments of feature extraction with and without MapReduce are conducted to illustrate the effectiveness of our
proposed Cloud-Enabled WAMI Exploitation (CAWE) approach.
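In a real deployment the mappers are separate Hadoop processes, so the pipeline is not pure Python; the sketch below only illustrates, in-process, the split/map/aggregate pattern described above, with a placeholder feature extractor standing in for the WAMI feature computation. All names are hypothetical.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def extract_features(image):
    """Placeholder per-image feature extractor; real WAMI features
    (e.g. corner or region descriptors) would be computed here."""
    return np.array([image.mean(), image.std()])

def map_split(split):
    """Map phase: extract features for every image in one split."""
    return [extract_features(img) for img in split]

def run_mapreduce(images, n_splits=4):
    # Divide the dataset into splits, one per (simulated) slave node.
    splits = [images[i::n_splits] for i in range(n_splits)]
    with ThreadPoolExecutor(max_workers=n_splits) as pool:
        mapped = pool.map(map_split, splits)
        # Reduce phase: aggregate all per-image features (the role HDFS
        # plays in the Hadoop pipeline).
        return [f for part in mapped for f in part]

features = run_mapreduce([np.ones((8, 8)) * k for k in range(10)])
```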
The apical root regions play an important role in analysis and diagnosis of many oral diseases. Automatic
detection of such regions is consequently the first step toward computer-aided diagnosis of these diseases.
In this paper we propose an automatic method for periapical root region detection using state-of-the-art
machine learning approaches. Specifically, we have adapted the AdaBoost classifier for apical root
detection. One challenge in the task is the lack of training cases, especially diseased ones. To handle this
problem, we boost the training set by including more root regions that are close to the annotated ones and
decompose the original images to randomly generate negative samples. Based on these training samples,
the AdaBoost algorithm in combination with Haar wavelets is utilized in this task to train an apical root detector. The learned detector usually generates a large number of true and false positives. In order to
reduce the number of false positives, a confidence score for each candidate detection result is calculated for
further purification. We first merge tightly overlapping candidate regions and then use the confidence scores from the AdaBoost detector to eliminate the false positives. The proposed method is evaluated on a dataset containing 39 annotated digitized oral X-ray images from 21 patients. The experimental results show that our approach can achieve promising detection performance.
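The merge-then-purify step can be sketched as a greedy grouping of overlapping boxes followed by a confidence threshold. This is an illustrative reconstruction, assuming intersection-over-union as the overlap criterion and score summation as the combination rule; neither detail is specified above, and the function names are hypothetical.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def merge_and_filter(boxes, scores, iou_thr=0.5, score_thr=1.0):
    """Greedily group tightly overlapping detections, combining their
    confidence scores by summation, then drop groups below a threshold."""
    merged = []  # list of [representative_box, combined_score]
    for box, s in sorted(zip(boxes, scores), key=lambda t: -t[1]):
        for m in merged:
            if iou(m[0], box) >= iou_thr:
                m[1] += s          # combine confidences
                break
        else:
            merged.append([box, s])
    return [(b, s) for b, s in merged if s >= score_thr]
```

Isolated low-confidence detections fail to accumulate support from neighbors and fall below the threshold, which is how the purification suppresses false positives.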
Periapical lesions are a common oral disease. While many studies have been devoted to image-based
diagnosis of periapical lesion, these studies usually require clinicians to perform the task. In this paper we
investigate the automatic solutions toward periapical lesion classification using quantized texture analysis.
Specifically, we adapt the bag-of-visual-words model for periapical root image representation, which
captures the texture information by collecting local patch statistics. Then we investigate several similarity
measure approaches with the K-nearest neighbor (KNN) classifier for the diagnosis task. To evaluate these
classifiers, we have collected a digitized oral X-ray image dataset from 21 patients, resulting in 139 root images in total. The extensive experimental results demonstrate that the KNN classifier based on the bag-of-words model can achieve very promising performance for periapical lesion classification.
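A minimal sketch of the quantized-texture pipeline described above: local patch descriptors are assigned to their nearest codebook word, the normalized word histogram represents the root image, and a KNN classifier compares histograms. Histogram intersection is used here as one example similarity measure; in practice the codebook would come from k-means over training patches, and the descriptors and labels below are purely illustrative.

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Assign each local patch descriptor to its nearest visual word and
    return the normalized word-count histogram."""
    d = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :],
                       axis=2)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

def knn_classify(query_hist, train_hists, train_labels, k=3):
    """KNN with histogram-intersection similarity, one of several
    similarity measures that could be plugged in here."""
    sims = np.minimum(query_hist, train_hists).sum(axis=1)
    top = np.argsort(-sims)[:k]
    return np.bincount(train_labels[top]).argmax()
```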
This work is a part of our ongoing study aimed at comparing the topology of anatomical branching structures with the
underlying image texture. Detection of regions of interest (ROIs) in clinical breast images serves as the first step in
development of an automated system for image analysis and breast cancer diagnosis. In this paper, we have investigated
machine learning approaches for the task of identifying ROIs with visible breast ductal trees in a given galactographic
image. Specifically, we have developed a boosting-based framework using the AdaBoost algorithm in combination with Haar wavelet features for the ROI detection. Twenty-eight clinical galactograms with expert-annotated ROIs were used
for training. Positive samples were generated by resampling near the annotated ROIs, and negative samples were
generated randomly by image decomposition. Each detected ROI candidate was given a confidence score. Candidate
ROIs with spatial overlap were merged and their confidence scores combined. We have compared three strategies for
elimination of false positives. The strategies differed in how they combined confidence scores: by summation, averaging, or selecting the maximum score. The strategies were compared based upon the spatial overlap with
annotated ROIs. Using 4-fold cross-validation with the annotated clinical galactographic images, the summation strategy showed the best performance, with a 75% detection rate. When combining the top two candidates, the maximum-score strategy showed the best performance, with a 96% detection rate.
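The three score-combination strategies compared above each reduce to a single expression; a minimal sketch (function name hypothetical):

```python
def combine_scores(scores, strategy="sum"):
    """Combine the confidence scores of merged ROI candidates by
    summation, averaging, or selecting the maximum score."""
    if strategy == "sum":
        return sum(scores)
    if strategy == "avg":
        return sum(scores) / len(scores)
    if strategy == "max":
        return max(scores)
    raise ValueError(strategy)
```

Summation rewards ROIs supported by many overlapping detections, averaging does not, and the maximum keeps only the single strongest piece of evidence, which explains why the strategies can rank candidates differently.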