Alzheimer’s Disease (AD), which causes a progressive decline in cognitive function, is one of the most severe social issues in the world. It is already known that AD cannot be cured and that treatment can only delay its progression. Therefore, it is very important to detect AD at an early stage and keep it from worsening; the sooner the progression is detected, the better the prognosis will be. In this research, we developed a novel multi-modal deep learning method to predict conversion to AD from Mild Cognitive Impairment (MCI), the stage between cognitively normal older people and AD. In our method, the multi-modal input data are structural Magnetic Resonance Imaging (MRI) images and clinical data, including several cognitive scores, APOE genotype, gender, and age, obtained from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) cohort. Our criterion for selecting these input data is that they are mostly obtained by non-invasive examination. The proposed method effectively integrates features obtained from the MRI images and the clinical data by using bi-linear fusion. Bi-linear fusion computes the products of all pairs of elements between the image and clinical features, so that correlations between them are captured; this led to a substantial improvement in prediction accuracy in our experiments. The prediction model using bi-linear fusion predicted conversion within one year with an accuracy of 0.86, compared with 0.76 using linear fusion. The proposed method is useful for AD screening examinations or for deciding a stratification approach within clinical trials, since it achieves high accuracy while the input data are relatively easy to obtain.
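The fusion step described above can be sketched as a flattened outer product of the two feature vectors. The vector sizes and the appended bias term below are illustrative assumptions, not details of the actual model:

```python
import numpy as np

def bilinear_fusion(img_feat, clin_feat):
    """Fuse image and clinical feature vectors by taking the outer
    product, so every pairwise interaction term is represented."""
    # Append a constant 1 so the plain (unfused) terms are also kept.
    img = np.append(img_feat, 1.0)
    clin = np.append(clin_feat, 1.0)
    return np.outer(img, clin).ravel()

# Toy example: a 3-D image feature and a 2-D clinical feature
img_feat = np.array([0.5, -1.0, 2.0])
clin_feat = np.array([1.5, 0.2])   # e.g. a scaled cognitive score and age
fused = bilinear_fusion(img_feat, clin_feat)
print(fused.shape)  # (4 * 3,) = (12,)
```

Note that the fused dimension grows multiplicatively with the two input dimensions, which is what lets a downstream linear layer see cross-modal correlation terms directly.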
Rapidly evaluating the extent of hyperacute stroke lesions is an essential procedure before physicians make treatment decisions. For a patient with a suspected stroke, noncontrast computerized tomography (NCCT) is taken first for initial infarction assessment. However, because the CT hypoattenuation and texture changes caused by hyperacute ischemia are often subtle, physicians usually examine not only local intensities and texture but also the difference between the right and left sides, exploiting the symmetry of brain anatomy so as not to miss subtle lesions. In this paper, we propose a novel 3D U-Net architecture that integrates this comparison knowledge to automatically segment hyperacute stroke lesions on NCCT. To effectively capture right-left comparison features, we introduced a horizontal flip operation into the 3D U-Net. We also applied a gradient-based sensitivity map method to our trained model in order to visualize how much each voxel contributes to the segmentation results. Experimental results showed that the proposed architecture improved segmentation accuracy: the Dice similarity coefficient (DSC) improved from 0.44 to 0.54, and sensitivity and specificity improved from 0.80 to 1.00 and from 0.90 to 0.98, respectively. Sensitivity maps derived from our trained model demonstrated that both the right and left sides were used effectively to segment ischemic lesions.
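The right-left comparison idea can be sketched as pairing a volume with its horizontally mirrored copy before further convolution, so later layers can compare homologous positions in the two hemispheres. The axis layout and the assumption of a centered midsagittal plane are illustrative:

```python
import numpy as np

def add_flip_channels(volume):
    """Stack a 3-D volume (z, y, x) with its left-right mirrored copy
    along a channel axis, so subsequent layers can compare the two
    hemispheres voxel by voxel (assumes the midsagittal plane is
    roughly centered along the x axis)."""
    flipped = np.flip(volume, axis=-1)          # mirror along x
    return np.stack([volume, flipped], axis=0)  # (2, z, y, x)

vol = np.arange(8, dtype=float).reshape(2, 2, 2)
out = add_flip_channels(vol)
print(out.shape)  # (2, 2, 2, 2)
```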
Recent advances in MDCT have improved the quality of 3D images, and virtual bronchoscopy has been used before and during bronchoscopic examinations for biopsy. However, virtual bronchoscopy has come into wide use only for examining proximal airway diseases, because conventional airway extraction methods often fail to extract peripheral airways with low image contrast. In this paper, we propose a machine-learning-based method that markedly improves extraction robustness. The method consists of four steps. In the first step, we use Hessian analysis to detect as many airway candidates as possible. In the second, false positives are reduced effectively by introducing a machine learning method. In the third, an airway tree is constructed from the airway candidates by using a minimum spanning tree algorithm. In the fourth, we extract airway regions by using graph cuts. Experimental results evaluated with a standardized evaluation framework show that our method extracts peripheral airways very well.
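The third step, linking candidates into a tree, can be sketched with a minimum spanning tree over candidate centroids. The Euclidean edge weight below is an assumption; the actual method may use a more elaborate connection cost:

```python
import math

def mst_edges(points):
    """Prim's algorithm: connect all candidate points into a minimum
    spanning tree, using Euclidean distance as the edge weight."""
    n = len(points)
    dist = lambda a, b: math.dist(points[a], points[b])
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        # cheapest edge from the current tree to an unconnected candidate
        _, u, v = min((dist(u, v), u, v)
                      for u in in_tree for v in range(n) if v not in in_tree)
        edges.append((u, v))
        in_tree.add(v)
    return edges

# Three collinear candidate centroids: the tree links neighbors only.
pts = [(0, 0, 0), (1, 0, 0), (2, 0, 0)]
print(sorted(mst_edges(pts)))  # [(0, 1), (1, 2)]
```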
The spinal column is one of the most important anatomical structures in the human body and its centerline, that is, the
centerline of vertebral bodies, is a very important feature used by many applications in medical image processing. In the
past, some approaches have been proposed to extract the centerline of spinal column by using edge or region information
of vertebral bodies. However, those approaches may suffer from difficulties in edge detection or region segmentation of
vertebral bodies when vertebral diseases such as osteoporosis or compression fractures are present. In this paper, we propose a novel approach based on machine learning to robustly extract the centerline of the spinal column from three-dimensional CT data. Our approach first applies a machine learning algorithm, AdaBoost, to detect spinal cord regions, which have an S-shape similar and close to the spinal column but can be detected more stably. Then a centerline of the detected spinal cord regions is obtained by fitting a spline curve to their central points, using the associated AdaBoost scores as weights. Finally, the obtained spinal cord centerline is linearly deformed and translated in the sagittal direction to fit the top and bottom boundaries of the vertebral bodies, yielding the centerline of the spinal column. Experimental results on a large CT data set show the effectiveness of our approach.
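The score-weighted curve fit can be sketched with weighted least squares; here a straight line stands in for the spline, and the points and scores are toy assumptions. A detection with a very low AdaBoost score barely influences the fitted centerline:

```python
import numpy as np

# z: slice position, x: detected cord center, score: AdaBoost confidence
z = np.array([0., 1., 2., 3., 4.])
x = np.array([0., 1., 2., 3., 40.])      # last point is an outlier
score = np.array([1., 1., 1., 1., .01])  # low confidence for the outlier

# Weighted least squares: high-score detections dominate the fit,
# so the low-confidence outlier is effectively ignored.
coeffs = np.polyfit(z, x, deg=1, w=score)
centerline = np.polyval(coeffs, z)
print(f"fitted slope = {coeffs[0]:.3f}")  # close to 1, despite x[4] = 40
```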
Body part recognition based on CT slice images is very important for many applications in PACS and CAD systems. In
this paper, we propose a novel approach that can robustly recognize which body part a slice image belongs to. We focus
on how to effectively express and use the unique statistical information of the correlation between the CT value and the
position information of each body part. We apply the machine learning method AdaBoost to express and use this
statistical information. Our approach consists of a training process and a recognition process. In the training process, we
first define the whole body using a set of specific classes to ensure that training images in the same class have a high
similarity, and prepare a training image set (positive samples and negative samples) for each class. Second, the training
images are normalized to a fixed size and rotation in each class. Third, features are calculated for each normalized
training image. Finally, AdaBoosted histogram classifiers are trained. After the training process, each class has its own
classifiers. In the recognition process, given a series of CT images, the scores of all classes for each slice image are calculated with the classifiers obtained in the training process. Then, based on the scores of each slice and a simple model of body-part sequence continuity, we use dynamic programming (DP) to eliminate false recognition results. Experimental results on 440 unknown series, including series with lesions, show that our approach has a high recognition rate.
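The DP step over per-slice scores can be sketched as a Viterbi-style pass that penalizes class changes, enforcing a roughly continuous body-part sequence along the scan. The classes, scores, and penalty below are toy assumptions:

```python
def smooth_labels(scores, jump_penalty=2.0):
    """Viterbi-style DP: choose one class per slice, maximizing total
    classifier score minus a penalty for each class change."""
    n_classes = len(scores[0])
    best = list(scores[0])  # best total score ending in each class
    back = []               # backpointers, one row per later slice
    for row in scores[1:]:
        ptr, new = [], []
        for c in range(n_classes):
            # staying in the same class is free, switching costs a penalty
            p = max(range(n_classes),
                    key=lambda q: best[q] - (0 if q == c else jump_penalty))
            ptr.append(p)
            new.append(row[c] + best[p] - (0 if p == c else jump_penalty))
        best, back = new, back + [ptr]
    # backtrack the optimal label sequence
    path = [max(range(n_classes), key=lambda c: best[c])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

# Classes 0="chest", 1="abdomen"; slice 2 is a spurious flip back to chest.
scores = [[3, 0], [0, 3], [2, 1], [0, 3]]
print(smooth_labels(scores))  # [0, 1, 1, 1] -- the slice-2 blip is removed
```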
In this paper, we propose a novel machine learning approach for interactive lesion segmentation on CT and MRI images. Our approach consists of a training process and a segmentation process. In the training process, we train AdaBoosted histogram classifiers to distinguish true boundary positions from false ones on 1-D intensity profiles of lesion regions. In the segmentation process, given a marker indicating the rough location of a lesion, the proposed solution segments its region automatically by using the trained AdaBoosted histogram classifiers. If there are imperfections in the segmented result, the solution re-runs the segmentation based on one correct location designated by the user and produces a new, satisfactory result. There are two novelties in our approach. The first is that we use AdaBoost in the training process to learn the diverse intensity distributions of lesion regions, and successfully utilize the trained classifiers in the segmentation process. The second is that we provide a reliable and user-friendly way to rectify the segmented result interactively in the segmentation process, using dynamic programming to find a new optimal path. Experimental results show that our approach can segment lesion regions successfully despite the diverse intensity distributions of the lesion regions, variability in marker location, and variability in lesion shape. Our framework is also generic and can be applied to blob-like target segmentation with diverse intensity distributions in other applications.
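The optimal-path search can be sketched as DP over consecutive radial profiles: pick one boundary radius per ray, maximizing classifier score while limiting how far the radius may jump between neighboring rays. The scores and the smoothness rule here are toy assumptions, not the paper's actual cost function:

```python
def optimal_boundary(score, max_jump=1):
    """DP over radial profiles: score[i][r] is the boundary-classifier
    score at radius r on ray i. Choose one radius per ray, allowing the
    radius to change by at most max_jump between adjacent rays."""
    n_rays, n_radii = len(score), len(score[0])
    best = list(score[0])  # best total score ending at each radius
    back = []
    for i in range(1, n_rays):
        ptr, new = [], []
        for r in range(n_radii):
            lo, hi = max(0, r - max_jump), min(n_radii, r + max_jump + 1)
            p = max(range(lo, hi), key=lambda q: best[q])
            ptr.append(p)
            new.append(score[i][r] + best[p])
        best, back = new, back + [ptr]
    # backtrack the smooth boundary path
    path = [max(range(n_radii), key=lambda r: best[r])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

# Three rays, four radii; ray 1 has a spurious high score far away.
score = [[0, 5, 0, 0],
         [0, 1, 0, 4],
         [0, 5, 0, 0]]
print(optimal_boundary(score))  # [1, 1, 1] -- the far spike is rejected
```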