Objective and efficient diagnosis of Alzheimer’s disease (AD) has been a major research topic in recent years, and promising results have been shown for imaging markers derived from magnetic resonance imaging (MRI) data. Besides conventional machine learning methods, deep learning based methods have been developed in several studies, where layer-by-layer neural networks were proposed to extract features for disease classification from image patches or whole images. However, as the disease develops from subcortical nuclei to cortical regions, specific brain regions with morphological changes might contribute to the diagnosis of disease progression. Therefore, we propose a novel spatial and depth weighted neural network structure to extract effective features and further improve the performance of AD diagnosis. Specifically, we first use group comparison to detect the most distinctive AD-related landmarks, and then sample landmark-based image patches as our training data. In the model structure, with a 15-layer DenseNet as the backbone, we introduce an attention bypass to estimate spatial weights in the image space, guiding the network to focus on specific regions. A squeeze-and-excitation (SE) mechanism is also adopted to further weight the feature-map channels. We used 2335 subjects from public datasets (i.e., ADNI-1, ADNI-2, and ADNI-GO) for the experiments, and the results show that our framework achieves 90.02% accuracy, 81.25% sensitivity, and 96.33% specificity in distinguishing AD patients from normal controls.
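The SE channel-weighting step mentioned above can be illustrated with a minimal numpy sketch (not the authors' implementation; the weight matrices `w1`/`w2` and the squeeze-excite-scale structure follow the standard SE formulation, with all names chosen here for illustration):

```python
import numpy as np

def se_reweight(feature_map, w1, w2):
    """Squeeze-and-excitation channel reweighting (illustrative sketch).

    feature_map: (C, H, W) array; w1: (C, C//r); w2: (C//r, C),
    where r is the channel-reduction ratio.
    """
    # Squeeze: global average pooling over the spatial dimensions.
    z = feature_map.mean(axis=(1, 2))              # shape (C,)
    # Excitation: two fully connected layers with ReLU then sigmoid.
    s = np.maximum(z @ w1, 0.0)                    # shape (C//r,)
    s = 1.0 / (1.0 + np.exp(-(s @ w2)))            # channel weights in (0, 1)
    # Scale: reweight each channel of the feature map.
    return feature_map * s[:, None, None]
```

In a trained network these weights let informative channels dominate the feature map, complementing the spatial weights produced by the attention bypass.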
Measurement of total kidney volume (TKV) plays an important role in the early therapeutic stage of autosomal dominant polycystic kidney disease (ADPKD). As a crucial biomarker, an accurate TKV can sensitively reflect disease progression and serve as an indicator to evaluate the curative effect of a drug. However, manual contouring of kidneys in magnetic resonance (MR) images is time-consuming (about 40 minutes), which greatly hinders the wide adoption of TKV in clinical practice. In this paper, we propose a multi-resolution 3D convolutional neural network to automatically segment the kidneys of ADPKD patients from MR images. We adopt two resolutions and use a customized V-Net model for each. The V-Net model is able to integrate high-level context information with detailed local information for accurate organ segmentation. The V-Net model at the coarse resolution can robustly localize the kidneys, while the V-Net model at the fine resolution can accurately refine the kidney boundaries. Validated on 305 subjects with different loss functions and network architectures, our method achieves over 95% Dice similarity coefficient with the ground truth labeled by a senior physician. Moreover, the proposed method dramatically reduces the measurement time for kidney volume from 40 minutes to about 1 second, which can greatly accelerate the disease staging of ADPKD patients for large clinical trials, promote the development of related drugs, and reduce the burden on physicians.
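The coarse-to-fine idea behind the two-resolution design can be sketched as follows. This is a schematic numpy pipeline, not the paper's code: `coarse_net` and `fine_net` stand in for the two V-Net models, and the downsampling factor and margin are illustrative assumptions.

```python
import numpy as np

def bounding_box(mask, margin=2):
    """Axis-aligned bounding box of a nonempty binary 3D mask, padded by a margin."""
    idx = np.argwhere(mask)
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + margin + 1, mask.shape)
    return lo, hi

def coarse_to_fine_segment(image, coarse_net, fine_net, factor=4):
    """Two-resolution segmentation: localize on a downsampled volume,
    then refine boundaries on the full-resolution crop."""
    # Coarse stage: segment the downsampled volume to localize the organ.
    coarse_mask = coarse_net(image[::factor, ::factor, ::factor])
    # Map the coarse bounding box back to full-resolution coordinates.
    lo, hi = bounding_box(coarse_mask)
    lo, hi = lo * factor, np.minimum(hi * factor, image.shape)
    # Fine stage: segment only the cropped region at full resolution.
    roi = image[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    fine_mask = fine_net(roi)
    # Paste the refined mask back into the full volume.
    full = np.zeros(image.shape, dtype=bool)
    full[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = fine_mask
    return full
```

The design choice is the usual one for large volumes: the coarse stage sees the whole field of view cheaply, and the fine stage spends its GPU memory only where the kidneys actually are.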
Accurate segmentation of organs at risk (OARs) is a key step in image-guided radiation therapy. In recent years, deep learning based methods have been widely used in medical image segmentation, with U-Net and V-Net being the most popular. In this paper, we evaluate a customized V-Net on 16 OARs throughout the body using a large CT dataset. Specifically, two customizations are used to reduce the GPU memory cost of V-Net: 1) multi-resolution V-Nets, where the coarse-resolution V-Net aims to localize the OAR in the entire image space, while the fine-resolution V-Net focuses on refining detailed boundaries of the OAR; 2) a modified V-Net architecture, which is specifically designed for segmenting large organs, e.g., the liver. Validated on 3483 CT scans of various imaging and disease conditions, we show that, compared with traditional methods, the customized V-Net wins in speed (0.7 seconds vs. 20 seconds per organ), accuracy (average Dice score of 96.6% vs. 84.3%), and robustness (98.6% vs. 83.3% success rate). Moreover, the customized V-Net is very robust against various image artifacts, diseases, and slice thicknesses, and performs much better than traditional methods even on organs with large shape variations (e.g., the bladder).
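The Dice score used as the accuracy metric in the abstracts above is a standard overlap measure between a predicted mask and the ground truth; a minimal implementation is:

```python
import numpy as np

def dice_score(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2 * |pred AND truth| / (|pred| + |truth|)."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    # Convention: two empty masks count as a perfect match.
    return 2.0 * intersection / denom if denom else 1.0
```

A score of 1.0 means perfect overlap and 0.0 means no overlap, so the reported 96.6% average Dice corresponds to near-complete agreement with the reference contours.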