To achieve better segmentation of MR images, image restoration is typically used as a preprocessing step, especially for low-quality MR images. Recent studies have demonstrated that dictionary learning methods can achieve promising performance for both image restoration and image segmentation. These methods typically learn paired dictionaries of image patches from different sources and use a common sparse representation to characterize paired image patches, such as low-quality image patches and their high-quality counterparts for image restoration, and image patches and their corresponding segmentation labels for image segmentation. Since learning these dictionaries jointly in a unified framework may improve image restoration and segmentation simultaneously, we propose a coupled dictionary learning method to concurrently learn dictionaries for joint image restoration and image segmentation based on sparse representations in a multi-atlas image segmentation framework. In particular, three dictionaries, namely a dictionary of low-quality image patches, a dictionary of high-quality image patches, and a dictionary of segmentation label patches, are learned in a unified framework so that the learned dictionaries for image restoration and segmentation can benefit each other. Our method has been evaluated for segmenting the hippocampus in MR T1 images collected with scanners of different magnetic field strengths. The experimental results have demonstrated that our method achieves better image restoration and segmentation performance than state-of-the-art dictionary learning and sparse representation based image restoration and image segmentation methods.
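The coupling idea above can be illustrated with the classical stacked-patch trick: vectorized patches from the different sources are concatenated so that a single sparse code must reconstruct all of them, yielding coupled sub-dictionaries. The sketch below is a minimal illustration with synthetic patch data and scikit-learn's generic dictionary learner, not the paper's actual optimization.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
n_patches, d_low, d_high, d_label = 200, 16, 16, 16

# Synthetic stand-ins for vectorized low-quality, high-quality, and label patches.
low = rng.standard_normal((n_patches, d_low))
high = rng.standard_normal((n_patches, d_high))
lab = rng.standard_normal((n_patches, d_label))

# Classical coupling trick: stack the three patch sources so one shared sparse
# code must reconstruct all of them, producing three coupled sub-dictionaries.
stacked = np.hstack([low, high, lab])
model = DictionaryLearning(n_components=32, alpha=1.0, max_iter=20,
                           transform_algorithm="lasso_lars", random_state=0)
codes = model.fit_transform(stacked)     # shared sparse representations
D = model.components_                    # shape (32, d_low + d_high + d_label)
D_low, D_high, D_lab = np.split(D, [d_low, d_low + d_high], axis=1)
```

Given a new low-quality patch, its sparse code under `D_low` can then be applied to `D_high` (restoration) and `D_lab` (segmentation label estimation).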
Multi-atlas based image segmentation (MAIS) in conjunction with pattern recognition based label fusion strategies has achieved promising performance in a variety of image segmentation problems, including hippocampus segmentation from MR images. Pattern recognition based label fusion consists of an image feature extraction component and a pattern recognition component. Since the feature extraction component plays an important role in pattern recognition based label fusion, a variety of feature extraction methods have been proposed, including texture features and random projection features. However, these feature extraction methods are not adaptive to different segmentation problems. Following the success of convolutional neural networks in image feature extraction, we propose a feature extraction method based on convolutional neural networks for multi-atlas based image segmentation. The proposed method has been validated on 135 T1 magnetic resonance imaging (MRI) scans and their hippocampus labels provided by the EADC-ADNI harmonized segmentation protocol. We also compared our method with state-of-the-art pattern recognition based MAIS methods, including Local Label Learning and Random Local Binary Patterns. The experimental results have demonstrated that our method achieves hippocampus segmentation performance competitive with the alternative methods under comparison.
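To make the contrast with hand-designed filters concrete, the sketch below computes a convolutional feature vector for an image patch with a single convolution layer and ReLU. The random filters here are only stand-ins for the learned filters of the paper's network; the point is the feature-map structure, not the training.

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)

def conv_features(patch, n_filters=4, ksize=3):
    """One convolution layer with random filters + ReLU -- a minimal,
    hypothetical stand-in for a learned CNN feature extractor."""
    feats = []
    for _ in range(n_filters):
        k = rng.standard_normal((ksize, ksize))
        fmap = convolve2d(patch, k, mode="valid")   # 2-D convolution
        feats.append(np.maximum(fmap, 0.0).ravel()) # ReLU non-linearity
    return np.concatenate(feats)

patch = rng.standard_normal((15, 15))   # image patch around a target voxel
f = conv_features(patch)                # 4 filters x 13x13 valid feature maps
```

In a trained CNN the filters would be adapted to the segmentation problem, which is exactly the adaptivity that fixed texture or random projection features lack.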
The growth of multiparametric imaging protocols has paved the way for quantitative imaging phenotypes that predict treatment response and clinical outcome, reflect underlying cancer molecular characteristics and spatiotemporal heterogeneity, and can guide personalized treatment planning. This growth has underlined the need for efficient quantitative analytics to derive high-dimensional imaging signatures of diagnostic and predictive value in this emerging era of integrated precision diagnostics. This paper presents the Cancer Imaging Phenomics Toolkit (CaPTk), a new and dynamically growing software platform for analysis of radiographic images of cancer, currently focusing on brain, breast, and lung cancer. CaPTk leverages the value of quantitative imaging analytics along with machine learning to derive phenotypic imaging signatures, based on two-level functionality. First, image analysis algorithms are used to extract comprehensive panels of diverse and complementary features, such as multiparametric intensity histogram distributions, texture, shape, kinetics, connectomics, and spatial patterns. At the second level, these quantitative imaging signatures are fed into multivariate machine learning models to produce diagnostic, prognostic, and predictive biomarkers. Results from clinical studies in three areas are shown: (i) computational neuro-oncology of brain gliomas for precision diagnostics, prediction of outcome, and treatment planning; (ii) prediction of treatment response for breast and lung cancer; and (iii) risk assessment for breast cancer.
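The two-level design described above can be sketched generically: level one extracts a quantitative feature panel from each image, level two feeds the resulting signatures to a multivariate model. The toy pipeline below uses only intensity-histogram features and a logistic regression on synthetic data; it illustrates the architecture, not CaPTk's actual feature panels or APIs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Level 1: hypothetical feature panel -- here just intensity-histogram features.
def histogram_features(image, bins=8):
    h, _ = np.histogram(image, bins=bins, range=(0.0, 1.0), density=True)
    return h

images = rng.random((40, 32, 32))        # synthetic stand-ins for scans
labels = rng.integers(0, 2, size=40)     # synthetic clinical outcomes
X = np.array([histogram_features(im) for im in images])

# Level 2: quantitative imaging signatures feed a multivariate model.
clf = LogisticRegression().fit(X, labels)
probs = clf.predict_proba(X)[:, 1]       # predicted outcome probabilities
```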
Alzheimer's disease (AD) is one of the most frequent forms of dementia and an increasingly challenging public health problem. In the last two decades, structural magnetic resonance imaging (MRI) has shown potential in distinguishing patients with Alzheimer's disease from elderly controls (CN). To obtain AD-specific biomarkers, previous research used either statistical testing to find regions that differ significantly between the two clinical groups, or ℓ<sub>1</sub> sparse learning to select isolated features in the image domain. In this paper, we propose a new framework that uses structural MRI to simultaneously distinguish the two clinical groups and find biomarkers of AD, using a group lasso support vector machine (SVM). The group lasso term (mixed ℓ<sub>1</sub>-ℓ<sub>2</sub> norm) introduces anatomical information from the image domain into the feature domain, such that the resulting set of selected voxels is more meaningful than that of the ℓ<sub>1</sub> sparse SVM. Because of large inter-structure size variation, we introduce a group-specific normalization factor to deal with the structure size bias. Experiments have been performed on a well-designed AD vs. CN dataset<sup>1</sup> to validate our method. Compared to the ℓ<sub>1</sub> sparse SVM approach, our method achieved better classification performance and a more meaningful biomarker selection. When we varied the training set, the regions selected by our method were more stable than those of the ℓ<sub>1</sub> sparse SVM. Classification experiments showed that our group normalization led to higher classification accuracy with fewer selected regions than the non-normalized method. Compared to state-of-the-art AD vs. CN classification methods, our approach not only obtains high accuracy on the same dataset but, more importantly, simultaneously finds the brain anatomies that are closely related to the disease.
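The mixed ℓ<sub>1</sub>-ℓ<sub>2</sub> penalty acts on anatomically defined groups of voxels: a group is either zeroed out entirely or kept with shrinkage. Its proximal operator is block soft-thresholding, and a sqrt(group size) weight is a common choice for the group-specific normalization that offsets structure-size bias. The sketch below shows that operator on a toy example; it illustrates the penalty, not the paper's full SVM training.

```python
import numpy as np

def group_soft_threshold(x, groups, lam):
    """Proximal operator of the group lasso (mixed l1-l2) penalty with a
    sqrt(group size) normalization to offset structure-size bias."""
    out = np.zeros_like(x)
    for g in groups:
        w = lam * np.sqrt(len(g))        # group-specific normalization factor
        norm = np.linalg.norm(x[g])
        if norm > w:                     # shrink the whole group toward zero
            out[g] = (1.0 - w / norm) * x[g]
        # else: the whole group is set to zero (group-level sparsity)
    return out

x = np.array([3.0, 4.0, 0.1, -0.1])
groups = [np.array([0, 1]), np.array([2, 3])]
z = group_soft_threshold(x, groups, lam=1.0)
# First group (norm 5 > sqrt(2)) survives shrinkage; second group is zeroed.
```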
Multi-atlas segmentation methods have attracted increasing attention in the field of medical image segmentation. They segment the target image by combining warped atlas labels according to a label fusion strategy, usually based on the intensity information of the target and atlas images. However, it has been demonstrated that image intensity information alone is not discriminative enough for distinguishing different subcortical structures in brain magnetic resonance (MR) images. Recent advances in multi-atlas based segmentation have witnessed the success of label fusion methods built on informative image features, in which image feature extraction is the key component. Conventional image feature extraction methods, such as textural feature extraction, are built on manually designed image filters, and their performance varies when applied to different segmentation problems. In this paper, we propose a random local binary pattern (RLBP) method to generate image features in a random fashion. Based on RLBP features, we use a local learning strategy to fuse labels in multi-atlas based segmentation. Our method has been validated for segmenting the hippocampus from MR images. The experimental results have demonstrated that our method achieves segmentation performance competitive with state-of-the-art methods.
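The random feature generation can be sketched as a randomized variant of the local binary pattern: the center voxel is compared against randomly sampled neighborhood positions, and the comparison outcomes are packed into a binary code. The implementation below is a simplified 2-D illustration under that assumption, not the paper's exact sampling scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def rlbp_feature(patch, n_bits=8):
    """Sketch of a random local binary pattern: compare the center voxel
    with randomly chosen neighbors and pack the comparisons into bits."""
    c = patch[patch.shape[0] // 2, patch.shape[1] // 2]
    ys = rng.integers(0, patch.shape[0], size=n_bits)   # random positions
    xs = rng.integers(0, patch.shape[1], size=n_bits)
    bits = (patch[ys, xs] >= c).astype(np.uint8)        # binary comparisons
    return int(np.dot(bits, 1 << np.arange(n_bits)))    # pack into a code

patch = rng.random((5, 5))
code = rlbp_feature(patch)
```

Repeating this with many random samplings yields a feature vector per voxel that requires no manually designed filter bank.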
Many unsupervised clustering techniques have been adopted for parcellating brain regions of interest into functionally homogeneous subregions based on resting-state fMRI data. However, unsupervised clustering techniques are not able to take advantage of existing knowledge of the functional neuroanatomy readily available from studies of cytoarchitectonic parcellation or meta-analyses of the literature. In this study, we propose a semi-supervised clustering method for parcellating the amygdala into functionally homogeneous subregions based on resting-state fMRI data. In particular, the semi-supervised clustering is implemented under the framework of graph partitioning, and adopts prior information and spatial consistency constraints to obtain a spatially contiguous parcellation result. The graph partitioning problem is solved using an efficient algorithm similar to the well-known weighted kernel k-means algorithm. Our method has been validated for parcellating the amygdala into 3 subregions based on resting-state fMRI data of 28 subjects. The experimental results have demonstrated that the proposed method is more robust than unsupervised clustering and is able to parcellate the amygdala into centromedial, laterobasal, and superficial parts with improved functional homogeneity compared with the cytoarchitectonic parcellation result. The validity of the parcellation results is also supported by distinctive functional and structural connectivity patterns of the subregions and by high consistency between coactivation patterns derived from a meta-analysis and the functional connectivity patterns of the corresponding subregions.
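One simple way to inject prior knowledge into a kernel k-means-style partitioner is to fix the cluster assignment of "seed" points supplied by the prior. The sketch below is a minimal semi-supervised kernel k-means under that assumption, on a toy 1-D dataset with an RBF kernel; the paper's actual algorithm additionally encodes spatial consistency constraints in the graph weights.

```python
import numpy as np

def ss_kernel_kmeans(K, seeds, k, n_iter=20):
    """Semi-supervised kernel k-means sketch: points listed in `seeds`
    (index -> cluster) keep their prior labels; all others are free.
    K is a positive semi-definite similarity (kernel) matrix."""
    n = K.shape[0]
    rng = np.random.default_rng(0)
    labels = rng.integers(0, k, size=n)
    for i, c in seeds.items():
        labels[i] = c
    for _ in range(n_iter):
        dist = np.zeros((n, k))
        for c in range(k):
            idx = np.flatnonzero(labels == c)
            # squared kernel distance of every point to cluster c's mean
            dist[:, c] = (K.diagonal()
                          - 2.0 * K[:, idx].mean(axis=1)
                          + K[np.ix_(idx, idx)].mean())
        labels = dist.argmin(axis=1)
        for i, c in seeds.items():       # re-impose the prior knowledge
            labels[i] = c
    return labels

X = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])        # two obvious blobs
K = np.exp(-(X[:, None] - X[None, :]) ** 2)          # RBF kernel matrix
labels = ss_kernel_kmeans(K, seeds={0: 0, 3: 1}, k=2)
```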
Brain network analysis is a promising tool in studies of the human brain's functional organization and of neuropsychiatric disorders. For neuroimaging data based brain network analysis, network nodes are typically defined as distinct grey-matter regions delineated by anatomical brain atlases, random parcellations of the brain space, or image voxels, resulting in brain network nodes with different spatial scales. As the precise functional organization of the brain remains unclear, it is challenging to determine a proper spatial scale in practice. Brain network nodes defined anatomically or randomly do not necessarily possess the desired properties of ideal brain network nodes, i.e., functional homogeneity within each individual node, functional distinctiveness across different nodes, and functional consistency of the same node across different subjects. To obtain a definition of brain network nodes with the desired properties, a brain parcellation method based on functional information is proposed to achieve a brain parcellation that is consistent across subjects and in high agreement with the functional organization of the brain. In particular, spatially contiguous voxel-wise functional information of the brain fMRI data is recursively aggregated according to inter-voxel/region functional affinity from the voxel level to coarser scales, resulting in a brain parcellation with a multi-level hierarchy. A trade-off between functional homogeneity and distinctiveness is determined by identifying the hierarchy level with network measures highly consistent across subjects. The proposed method has been validated on resting-state fMRI datasets for functional network analysis, and the results demonstrate that brain networks constructed with 200 to 500 nodes achieve the highest inter-subject consistency.
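The recursive, spatially contiguous aggregation can be illustrated with agglomerative (Ward) clustering of voxel time series under a spatial connectivity constraint: merges are only allowed between adjacent voxels/regions, and cutting the resulting dendrogram at different levels yields the multi-level hierarchy. The 2-D toy sketch below uses scikit-learn; the affinity measure and hierarchy-level selection of the paper are not reproduced.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.feature_extraction.image import grid_to_graph

rng = np.random.default_rng(0)
h, w, t = 10, 10, 50
signals = rng.standard_normal((h, w, t))   # synthetic voxel time series

# Spatial adjacency restricts merges to contiguous voxels/regions.
connectivity = grid_to_graph(h, w)

# Ward agglomeration on the functional signals; choosing n_clusters
# corresponds to cutting the parcellation hierarchy at one level.
parcels = AgglomerativeClustering(
    n_clusters=8, connectivity=connectivity, linkage="ward"
).fit_predict(signals.reshape(h * w, t))
```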
Functional near-infrared spectroscopy (fNIRS) is an optical technique that measures hemoglobin oxygenation and deoxygenation concentrations of the brain cortex with higher temporal resolution than current alternative techniques. The high temporal resolution enables collecting abundant brain functional information. However, the information collected by fNIRS is correlated and mixed with a variety of physiological signals. Due to this mixture effect, activation detection is one of the challenges in fNIRS based studies of brain functional activity. To achieve a better detection of activated brain regions from the complicated measured information, we present a multi-scale analysis method based on a wavelet coherence measure. In particular, the paradigm of an experiment is used as the reference signal. The coherence of this signal with the data measured by fNIRS at each channel is calculated and summed up to evaluate the activation level. Experiments on simulated and real data have demonstrated that the proposed method is efficient and effective in detecting activated brain regions covered by the fNIRS probe.
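The detection scheme above can be sketched on simulated data: a boxcar task paradigm is the reference, and each channel's coherence with it is summed into an activation score. The paper uses wavelet coherence for multi-scale analysis; the sketch substitutes ordinary spectral coherence from SciPy as a simplified stand-in, stated plainly as such.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
fs, n = 10.0, 2000
t = np.arange(n) / fs

# Boxcar task paradigm used as the reference signal (10 s on/off blocks).
paradigm = (np.floor(t / 10.0) % 2).astype(float)

active = 3.0 * paradigm + rng.standard_normal(n)   # channel following the task
inactive = rng.standard_normal(n)                  # noise-only channel

def activation_score(signal, reference, fs):
    """Coherence with the paradigm, summed over frequencies to score the
    activation level; spectral coherence here stands in for the paper's
    wavelet coherence."""
    f, cxy = coherence(reference, signal, fs=fs, nperseg=256)
    return float(cxy.sum())

score_active = activation_score(active, paradigm, fs)
score_inactive = activation_score(inactive, paradigm, fs)
```

Channels whose score exceeds that of noise-only channels are flagged as activated.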
Regions of interest (ROIs) for defining the nodes of a brain network are of great importance in brain network analysis of fMRI data. ROIs are typically identified using prior anatomical information, seed region based correlation analysis, clustering analysis, region growing, or ICA based methods. In this paper, we propose a novel method to identify subject-specific and functionally consistent ROIs for brain network analysis using semi-supervised learning. Specifically, a graph theory based semi-supervised learning method is adopted to optimize ROIs defined using prior knowledge under a constraint of local and global functional consistency, yielding subject-specific ROIs with enhanced functional connectivity. Experiments using simulated fMRI data have demonstrated that functionally consistent ROIs can be identified effectively from data with different signal-to-noise ratios (SNRs). Experiments using resting-state fMRI data of 25 normal subjects for identifying ROIs of the default mode network have demonstrated that the proposed method is capable of identifying subject-specific ROIs with stronger functional connectivity and higher consistency across subjects than existing alternative techniques, indicating that the proposed method can better identify brain network ROIs with intrinsic functional consistency.
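A classical graph-based formulation of the local-and-global consistency constraint is the closed-form label propagation F = (I − αS)⁻¹Y, where S is the symmetrically normalized affinity matrix and the rows of Y encode prior ROI membership. The toy sketch below refines prior seeds over a small graph; it illustrates the general semi-supervised mechanism, not the paper's specific optimization.

```python
import numpy as np

def refine_rois(W, Y, alpha=0.9):
    """Graph-based semi-supervised refinement (local and global consistency,
    closed form): F = (I - alpha * S)^-1 Y with S the symmetrically
    normalized affinity; rows of Y encode prior ROI membership."""
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))                      # D^-1/2 W D^-1/2
    F = np.linalg.solve(np.eye(len(W)) - alpha * S, Y)   # propagated scores
    return F.argmax(axis=1)

# Toy graph: two 3-node cliques joined by one weak edge; one prior seed each.
W = np.array([[0, 1, 1, 0.1, 0, 0],
              [1, 0, 1, 0,   0, 0],
              [1, 1, 0, 0,   0, 0],
              [0.1, 0, 0, 0, 1, 1],
              [0, 0, 0, 1,   0, 1],
              [0, 0, 0, 1,   1, 0]], dtype=float)
Y = np.zeros((6, 2))
Y[0, 0] = 1.0          # prior knowledge: node 0 belongs to ROI 0
Y[3, 1] = 1.0          # prior knowledge: node 3 belongs to ROI 1
labels = refine_rois(W, Y)
```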
In imaging data based brain network analysis, a necessary precursor for constructing meaningful brain networks is to identify functionally homogeneous regions of interest (ROIs) for defining network nodes. For parcellating the brain based on resting-state fMRI data, normalized cut is a widely used clustering algorithm that groups voxels according to the similarity of their functional signals. Due to the low signal-to-noise ratio (SNR) of resting-state fMRI signals, a spatial constraint is often applied to the functional similarity measures to generate a smooth parcellation. However, an improper spatial constraint might alter the intrinsic functional connectivity pattern, thus yielding biased parcellation results. To achieve a reliable and least biased parcellation of the brain, we propose an optimization method for the spatial constraint applied to functional similarity measures in normalized cut based brain parcellation. In particular, we first identify the space of all possible spatial constraints that are able to generate a smooth parcellation, and then find the spatial constraint that leads to the brain parcellation least biased from the parcellation based on the intrinsic functional pattern, measured by the minimal Ncut value calculated from the functional similarity measure of the original functional signals. The proposed method has been applied to the parcellation of the medial superior frontal cortex for 20 subjects based on their resting-state fMRI data. The experimental results indicate that our method generates meaningful parcellation results, consistent with existing knowledge of functional anatomy.
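The selection criterion rests on evaluating candidate parcellations by their Ncut value on the unconstrained functional similarity matrix. For a two-way partition this value is Ncut = cut(A,B)/assoc(A,V) + cut(A,B)/assoc(B,V); the sketch below computes it for two candidate partitions of a toy similarity matrix.

```python
import numpy as np

def ncut_value(W, labels):
    """Two-way normalized cut value used to compare candidate parcellations:
    Ncut = cut(A,B)/assoc(A,V) + cut(A,B)/assoc(B,V)."""
    a = labels == 0
    b = ~a
    cut = W[np.ix_(a, b)].sum()          # total weight crossing the partition
    assoc_a = W[a, :].sum()              # association of A with the whole graph
    assoc_b = W[b, :].sum()
    return cut / assoc_a + cut / assoc_b

# Toy functional similarity: nodes 0-2 strongly connected, node 3 nearly isolated.
W = np.array([[0, 1, 1, 0.1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [0.1, 0, 0, 0]], dtype=float)
good = ncut_value(W, np.array([0, 0, 0, 1]))   # cuts only the weak 0.1 edge
bad = ncut_value(W, np.array([0, 0, 1, 1]))    # cuts through the strong clique
```

The partition respecting the intrinsic similarity structure yields the smaller Ncut value, which is the quantity the proposed method minimizes when ranking spatial constraints.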
Multi-atlas based segmentation methods have recently attracted much attention in medical image segmentation. These methods typically consist of three steps: image registration, label propagation, and label fusion. Most recent studies are devoted to improving the label fusion step and adopt a typical image registration method for registering atlases to the target image. However, existing registration methods may become unstable when the registered image pairs involve poor image quality or high anatomical variance. In this paper, we propose an iterative image segmentation and registration procedure to simultaneously improve registration and segmentation performance in the multi-atlas based segmentation framework. In particular, a two-channel registration method is adopted, with one channel driven by appearance similarity between the atlas image and the target image and the other channel optimized by similarity between the atlas label and the segmentation of the target image. The image segmentation is performed by fusing the labels of multiple atlases. The validation of our method on hippocampus segmentation of 30 subjects, with MR images at both 1.5T and 3.0T field strengths, has demonstrated that our method can significantly improve the segmentation performance with different fusion strategies, obtaining segmentation results with Dice overlap with manual segmentations of 0.892±0.024 for 1.5T images and 0.902±0.022 for 3.0T images.
For subcortical structure segmentation, multi-atlas based segmentation methods have attracted great interest due to their competitive performance. Under this framework, using the deformation fields generated for registering atlas images to the target image, the labels of the atlases are first propagated to the target image space and then fused to obtain the target segmentation. Many label fusion strategies have been proposed, and most of them adopt predefined weighting models that are not necessarily optimal. In this paper, we propose a local label learning (L3) strategy to estimate the target image's labels using statistical machine learning techniques. Specifically, we use a support vector machine (SVM) to learn a classifier for each voxel of the target image, using its neighboring voxels in the atlases as a training dataset. Each training sample has dozens of image features extracted around its neighborhood, and these features are optimally combined by the SVM learning method to classify the target voxel. The key contribution of this method is the development of a locally specific classifier for each target voxel based on informative texture features. The validation experiment on 57 MR images has demonstrated that our method generates hippocampus segmentation results with a Dice overlap of 0.908±0.023 with manual segmentations, statistically significantly better than state-of-the-art segmentation methods.
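The per-voxel learning step can be sketched directly: for one target voxel, the features and propagated labels of its neighboring atlas voxels form a small training set for an SVM, whose prediction becomes that voxel's label. The data below are synthetic stand-ins for the texture features; in the full method this loop runs for every target voxel.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Training set for ONE target voxel: features of atlas voxels drawn from its
# spatial neighborhood, with their propagated labels (synthetic stand-ins).
n_features = 20
X = np.vstack([rng.standard_normal((30, n_features)) + 1.0,   # structure-like
               rng.standard_normal((30, n_features)) - 1.0])  # background-like
y = np.array([1] * 30 + [0] * 30)

# A locally specific classifier is trained for this single target voxel.
clf = SVC(kernel="linear").fit(X, y)
target_voxel_features = rng.standard_normal(n_features) + 1.0
label = int(clf.predict(target_voxel_features[None, :])[0])
```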
In functional neuroimaging studies, inter-subject alignment of functional magnetic resonance imaging (fMRI) data is a necessary precursor to improving functional consistency across subjects. Traditional structural MRI based registration methods cannot achieve accurate inter-subject functional consistency because functional units are not necessarily consistently located relative to anatomical structures, due to functional variability across subjects. Although the spatial smoothing commonly used in fMRI data preprocessing can reduce inter-subject functional variability, it may blur the functional signals and thus lose fine-grained information. In this paper we propose a novel functional signal based fMRI image registration method which aligns local functional connectivity patterns of different subjects to improve inter-subject functional consistency. In particular, functional connectivity is measured using Pearson correlation. For each voxel of an fMRI image, its functional connectivity to every voxel in its local spatial neighborhood, referred to as its local functional connectivity pattern, is characterized by a rotation- and shift-invariant representation. Based on this representation, the spatial registration of two fMRI images is achieved by minimizing the difference between their corresponding voxels' local functional connectivity patterns using a deformable image registration model. Experimental results based on simulated fMRI data have demonstrated that the proposed method is more robust and reliable than existing fMRI image registration methods, including maximizing functional correlations and minimizing the difference of global connectivity matrices across different subjects. Experimental results based on real resting-state fMRI data have further demonstrated that the proposed fMRI registration method can statistically significantly improve functional consistency across subjects.
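The local functional connectivity pattern of a voxel is its vector of Pearson correlations with the voxels in a spatial neighborhood. One simple (hypothetical, not necessarily the paper's) way to make such a pattern invariant to rotations and shifts of the neighborhood is to sort the correlation values, since any spatial rearrangement is then a permutation that leaves the sorted vector unchanged:

```python
import numpy as np

def local_connectivity_pattern(ts, neighborhood_ts):
    """Pearson correlations of a voxel's time series with each voxel in its
    spatial neighborhood; sorting the values gives a representation that is
    invariant to permutations (hence rotations/shifts) of the neighborhood."""
    v = (ts - ts.mean()) / ts.std()
    N = neighborhood_ts - neighborhood_ts.mean(axis=1, keepdims=True)
    N /= N.std(axis=1, keepdims=True)
    corr = N @ v / len(ts)          # Pearson correlation with each neighbor
    return np.sort(corr)

rng = np.random.default_rng(0)
ts = rng.standard_normal(100)                 # center-voxel time series
neigh = rng.standard_normal((26, 100))        # 3x3x3 neighborhood minus center
p = local_connectivity_pattern(ts, neigh)
```

Registration then minimizes the distance between such patterns at corresponding voxels of the two images.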
Functional networks extracted from fMRI images using independent component analysis have been demonstrated to be informative for distinguishing brain states of cognitive functions and neurological diseases. In this paper, we propose a novel algorithm for discriminant analysis of functional networks encoded by spatial independent components. The functional networks of each individual are used as the basis of a linear subspace, referred to as a functional connectivity pattern, which facilitates a comprehensive characterization of the temporal signals of the fMRI data. The functional connectivity patterns of different individuals are analyzed on the Grassmann manifold by adopting a principal angle based subspace distance. In conjunction with a support vector machine classifier, a forward component selection technique is proposed to select independent components for constructing the most discriminative functional connectivity pattern. The discriminant analysis method has been applied to an fMRI based schizophrenia study with 31 schizophrenia patients and 31 healthy individuals. The experimental results demonstrate that the proposed method not only achieves promising classification performance for distinguishing schizophrenia patients from healthy controls, but also identifies discriminative functional networks that are informative for schizophrenia diagnosis.
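The principal angle based distance between two subjects' subspaces is directly computable with SciPy: the columns spanning each subject's functional connectivity pattern go in as matrices, and the principal angles come out. The sketch below uses synthetic components and one common projection-norm distance; the paper's exact distance choice may differ.

```python
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(0)
# Columns = spatial independent components spanning each subject's
# functional connectivity pattern (synthetic stand-ins).
A = rng.standard_normal((500, 5))
B = np.hstack([A[:, :3], rng.standard_normal((500, 2))])  # shares 3 components

theta = subspace_angles(A, B)          # principal angles between the subspaces
dist = np.linalg.norm(np.sin(theta))   # a common Grassmann subspace distance
```

Such pairwise distances feed a kernel or nearest-neighbour rule for the SVM-based discriminant analysis; identical subspaces give all-zero angles and hence distance zero.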
This paper presents a framework for brain classification based on multi-parametric medical images. The method takes advantage of multi-parametric imaging to provide a set of discriminative features for classifier construction, using a regional feature extraction method that accounts for joint correlations among different image parameters; in the experiments herein, MRI and PET images of the brain are used. Support vector machine classifiers are then trained on the most discriminative features selected from the feature set. To facilitate robust classification and optimal selection of the parameters involved in classification, in view of the well-known "curse of dimensionality", base classifiers are constructed in a bagging (bootstrap aggregating) framework to build an ensemble classifier, and the classification parameters of these base classifiers are optimized by maximizing the area under the ROC (receiver operating characteristic) curve estimated from their prediction performance on left-out samples of bootstrap sampling. The classification system is tested on a sex classification problem, where it yields classification rates of over 90% for unseen subjects. The proposed classification method is also compared with other commonly used classification algorithms, with favorable results. These results illustrate that methods built upon information jointly extracted from multi-parametric images have the potential to perform individual classification with high sensitivity and specificity.
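The bagging-plus-ROC scheme can be sketched with scikit-learn: SVM base classifiers are bagged over bootstrap samples, and candidate parameter settings are scored by the AUC of the out-of-bag predictions, i.e., each sample is predicted only by the base classifiers whose bootstrap left it out. Synthetic data stand in for the real features; the grid of C values is illustrative only.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = np.vstack([rng.standard_normal((50, 10)) + 0.8,   # synthetic class 1
               rng.standard_normal((50, 10)) - 0.8])  # synthetic class 0
y = np.array([1] * 50 + [0] * 50)

best_auc, best_C = -1.0, None
for C in [0.01, 0.1, 1.0, 10.0]:
    ens = BaggingClassifier(SVC(kernel="linear", C=C, probability=True),
                            n_estimators=25, oob_score=True,
                            random_state=0).fit(X, y)
    # AUC estimated from out-of-bag predictions: each sample is scored only
    # by base classifiers trained on bootstrap samples that left it out.
    auc = roc_auc_score(y, ens.oob_decision_function_[:, 1])
    if auc > best_auc:
        best_auc, best_C = auc, C
```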