Amyotrophic Lateral Sclerosis (ALS) is a neurological disease that causes death of the neurons controlling muscle movement. Degeneration of the tongue muscles leads to loss of speech and swallowing function, one of the disease's major impacts. In speech studies using magnetic resonance (MR) techniques, diffusion tensor imaging (DTI) captures internal tongue muscle fiber structure in three dimensions (3D) in a non-invasive manner, while tagged magnetic resonance imaging (tMRI) records tongue motion during speech. In this work, we aim to combine information obtained with both MR imaging techniques to compare functional characteristics of the tongue between normal and ALS subjects. We first extracted 3D tongue motion during speech from tMRI of fourteen normal subjects. The estimated motion sequences were then warped using diffeomorphic registration into the b0 spaces of the DTI data of two normal subjects and one ALS patient. We then constructed motion atlases by averaging all warped motion fields in each b0 space, and computed strain in the line of action along the muscle fiber directions provided by tractography. Strain aligned with the fiber directions provides a quantitative map of the potentially active regions of the tongue during speech. Comparison between normal and ALS subjects shows how the volume of compressing tongue tissue during speech changes in the presence of muscle degradation. The proposed framework provides, for the first time, a dynamic map of contracting fibers in ALS speech patterns, and has the potential to provide more insight into the detrimental effects of ALS on speech.
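The "strain in the line of action" step can be illustrated with a short sketch. This is not the authors' implementation; it simply projects a Green-Lagrange strain tensor, computed from an assumed per-voxel deformation gradient `F`, onto a tractography fiber direction. Negative values indicate shortening along the fiber, i.e. a potentially contracting region. All names are hypothetical:

```python
import numpy as np

def strain_in_line_of_action(F, fiber):
    """Project the Green-Lagrange strain tensor onto a muscle fiber direction.

    F     : (3, 3) deformation gradient at a tissue point
    fiber : (3,)   fiber direction from tractography (need not be unit length)
    Returns the scalar strain along the fiber; negative values indicate
    shortening (compression) along that direction.
    """
    f = np.asarray(fiber, dtype=float)
    f = f / np.linalg.norm(f)            # unit fiber direction
    E = 0.5 * (F.T @ F - np.eye(3))      # Green-Lagrange strain tensor
    return float(f @ E @ f)

# Uniaxial 10% shortening along x: a fiber aligned with x is compressed.
F = np.diag([0.9, 1.05, 1.06])
print(strain_in_line_of_action(F, [1.0, 0.0, 0.0]))  # -0.095
```

Applying this at every voxel of the motion atlas, using the local tractography direction, yields the dynamic map of contracting fibers described above.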
Representation of human tongue motion using three-dimensional vector fields over time can be used to better understand
tongue function during speech, swallowing, and other lingual behaviors. To characterize the inter-subject variability of the tongue’s shape and motion within a population carrying out one of these functions, it is desirable to build a statistical model of the four-dimensional (4D) tongue. In this paper, we propose a method to construct a spatio-temporal atlas of
tongue motion using magnetic resonance (MR) images acquired from fourteen healthy human subjects. First, cine MR
images revealing the anatomical features of the tongue are used to construct a 4D intensity image atlas. Second, tagged
MR images acquired to capture internal motion are used to compute a dense motion field at each time frame using a
phase-based motion tracking method. Third, motion fields from each subject are pulled back to the cine atlas space using
the deformation fields computed during the cine atlas construction. Finally, a spatio-temporal motion field atlas is
created to show a sequence of mean motion fields and their inter-subject variation. The quality of the atlas was evaluated
by deforming cine images in the atlas space. Comparison between deformed and original cine images showed high
correspondence. The proposed method provides, for the first time, a quantitative representation for observing the commonality and variability of the tongue motion field, and shows potential for evaluating motion-field-derived quantities such as strain and other tensors.
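The atlas construction step, once each subject's motion fields have been pulled back into the common space, reduces to a per-voxel average with an inter-subject variation measure. The following is a simplified sketch under that assumption, not the authors' pipeline; the diffeomorphic registration itself is assumed to have been done elsewhere, and all names are hypothetical:

```python
import numpy as np

def motion_field_atlas(warped_fields):
    """Average motion fields already warped into a common atlas space.

    warped_fields : (S, X, Y, Z, 3) array of S subjects' dense 3D motion
                    fields at one time frame, expressed in atlas coordinates.
    Returns the mean motion field (X, Y, Z, 3) and a per-voxel measure of
    inter-subject variation (std. dev. of displacement magnitude).
    """
    fields = np.asarray(warped_fields, dtype=float)
    mean_field = fields.mean(axis=0)
    magnitudes = np.linalg.norm(fields, axis=-1)   # (S, X, Y, Z)
    variation = magnitudes.std(axis=0)             # inter-subject spread
    return mean_field, variation

# Toy example: 14 subjects on a tiny 4x4x4 grid, all moving roughly along +x.
rng = np.random.default_rng(0)
fields = rng.normal(0.0, 0.1, size=(14, 4, 4, 4, 3)) + np.array([1.0, 0.0, 0.0])
mean_field, variation = motion_field_atlas(fields)
print(mean_field.shape, variation.shape)  # (4, 4, 4, 3) (4, 4, 4)
```

Repeating this at every time frame produces the sequence of mean motion fields and variation maps that make up the spatio-temporal atlas.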
The human tongue is composed of multiple internal muscles that work collaboratively during the production of speech. Assessment of muscle mechanics can help understand the creation of tongue motion, interpret clinical observations, and predict surgical outcomes. Although various methods have been proposed for computing the tongue’s motion, associating motion with muscle activity in an interdigitated fiber framework has not been studied. In this work, we aim to develop a method that reveals the activities of different tongue muscles in different time phases during speech. We use four-dimensional tagged magnetic resonance (MR) images and static high-resolution MR images to obtain tongue motion and muscle anatomy, respectively. We then compute strain tensors and local tissue compression along the muscle fiber directions in order to reveal their shortening pattern. This process relies on multiple image analysis methods, including super-resolution volume reconstruction from MR image slices, segmentation of internal muscles, tracking of the incompressible motion of tissue points using tagged images, propagation of muscle fiber directions over time, and calculation of strain in the line of action. We evaluated the method on a control subject and two post-glossectomy patients in a controlled speech task. The normal subject’s tongue muscle activity shows high correspondence with the production of speech at different time instants, while both patients’ muscle activities show different patterns from the control due to their resected tongues. This method shows potential for relating overall tongue motion to particular muscle activity, which may provide novel information for future clinical and scientific studies.
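The "propagation of muscle fiber directions over time" step can be sketched under the standard continuum-mechanics convention that a material fiber direction is pushed forward by the deformation gradient. This is an illustrative sketch, not the authors' implementation; the function and variable names are hypothetical:

```python
import numpy as np

def propagate_fiber(F, fiber0):
    """Push a reference-frame fiber direction forward with the motion.

    F      : (3, 3) deformation gradient mapping the reference (static MRI)
             configuration to the current time frame
    fiber0 : (3,) fiber direction segmented in the static high-resolution MRI
    Returns the unit fiber direction in the current frame and the stretch
    ratio along the fiber (values < 1 indicate local shortening).
    """
    f0 = np.asarray(fiber0, dtype=float)
    f0 = f0 / np.linalg.norm(f0)
    pushed = F @ f0                     # deformed (non-unit) fiber vector
    stretch = np.linalg.norm(pushed)    # fiber stretch ratio |F f0|
    return pushed / stretch, stretch

# Simple shear: a fiber along x keeps its direction, but a fiber along y tilts.
F = np.array([[1.0, 0.3, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
direction, stretch = propagate_fiber(F, [0.0, 1.0, 0.0])  # stretch = sqrt(1.09)
```

Evaluating the stretch (or the corresponding strain) along the propagated fiber at each time frame gives the per-muscle shortening pattern described above.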
Image labeling is an essential step for quantitative analysis of medical images. Many image labeling algorithms require
seed identification in order to initialize segmentation algorithms such as region growing, graph cuts, and the random
walker. Seeds are usually placed manually by human raters, which makes these algorithms semi-automatic and can be
prohibitive for very large datasets. In this paper, an automatic algorithm for placing seeds using multi-atlas registration
and statistical fusion is proposed. Atlases containing the centers of mass of a collection of neuroanatomical objects are
deformably registered in a training set to determine where these centers of mass map after the labels are transformed by
registration. The biases of these transformations are determined and incorporated in a continuous form of Simultaneous
Truth And Performance Level Estimation (STAPLE) fusion, thereby improving the estimates (on average) over a single
registration strategy that does not incorporate bias or fusion. We evaluate this technique using real 3D brain MR image
atlases and demonstrate its efficacy in correcting the data bias and reducing the fusion error.
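The fusion of registered seed candidates can be illustrated with a simplified sketch: bias-corrected, inverse-variance-weighted averaging of the transformed centers of mass. This is a deliberately reduced stand-in for the paper's continuous STAPLE fusion, not the actual algorithm, and all names are hypothetical:

```python
import numpy as np

def fuse_seed_locations(transformed_points, biases=None, variances=None):
    """Fuse candidate seed locations from multiple registered atlases.

    transformed_points : (R, 3) center-of-mass locations of one object after
                         each of R atlas-to-target registrations
    biases             : (R, 3) systematic registration offsets learned on a
                         training set (optional)
    variances          : (R,) per-atlas variances used as inverse-variance
                         fusion weights (optional; uniform if omitted)
    Returns the fused 3D seed location.
    """
    pts = np.asarray(transformed_points, dtype=float)
    if biases is not None:
        pts = pts - np.asarray(biases, dtype=float)   # remove learned bias
    if variances is None:
        weights = np.ones(len(pts))
    else:
        weights = 1.0 / np.asarray(variances, dtype=float)
    weights = weights / weights.sum()
    return weights @ pts                              # weighted mean seed

# Three registrations propose nearby seeds; the noisiest one is down-weighted.
pts = np.array([[10.2, 5.1, 3.0], [9.8, 4.9, 3.2], [10.4, 5.3, 2.9]])
seed = fuse_seed_locations(pts, variances=np.array([1.0, 1.0, 4.0]))
```

The fused point then initializes a segmentation algorithm such as region growing, graph cuts, or the random walker, removing the need for manual seeding.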
Image labeling is an essential task for evaluating and analyzing morphometric features in medical imaging data. Labels
can be obtained by either human interaction or automated segmentation algorithms. However, both approaches for
labeling suffer from inevitable error due to noise and artifact in the acquired data. The Simultaneous Truth And
Performance Level Estimation (STAPLE) algorithm was developed to combine multiple rater decisions and
simultaneously estimate unobserved true labels as well as each rater's level of performance (i.e., reliability). A
generalization of STAPLE for the case of continuous-valued labels has also been proposed. In this paper, we first show
that with the proposed Gaussian distribution assumption, this continuous STAPLE formulation yields equivalent
likelihoods for the bias parameter, meaning that the bias parameter (one of the key performance indices) is actually
indeterminate. We resolve this ambiguity by augmenting the STAPLE expectation maximization formulation to include
<i>a priori</i> probabilities on the performance level parameters, which enables simultaneous, meaningful estimation of both
the rater bias and variance performance measures. We evaluate and demonstrate the efficacy of this approach in
simulations and also through a human rater experiment involving the identification of the intersection points of the right
ventricle to the left ventricle in CINE cardiac data.
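The bias indeterminacy and its resolution by a prior can be sketched in a toy EM implementation of the Gaussian model: each rater observes the truth plus a rater-specific bias and noise. Without a prior, a constant can shift freely between the estimated truth and all biases; a zero-mean Gaussian prior on the biases pins it down. This is an illustrative MAP variant, not the authors' formulation, and all names are hypothetical:

```python
import numpy as np

def continuous_staple_map(D, bias_prior_var=1.0, n_iter=50):
    """MAP variant of continuous STAPLE with a Gaussian prior on rater bias.

    D : (R, N) continuous labels from R raters at N points, modeled as
        d_ri = t_i + b_r + noise with noise ~ N(0, s_r^2).
    Returns estimated truth t (N,), rater biases b (R,), variances s2 (R,).
    """
    D = np.asarray(D, dtype=float)
    R, N = D.shape
    b = np.zeros(R)
    s2 = np.ones(R)
    for _ in range(n_iter):
        # E-step: posterior mean/variance of the truth (flat prior on t)
        w = 1.0 / s2                                         # precision weights
        t = (w[:, None] * (D - b[:, None])).sum(0) / w.sum()
        t_var = 1.0 / w.sum()
        # M-step: MAP bias update (shrunk toward 0 by the prior), then variance
        resid = D - t[None, :]
        b = resid.sum(1) / s2 / (N / s2 + 1.0 / bias_prior_var)
        s2 = ((resid - b[:, None]) ** 2).mean(1) + t_var
    return t, b, s2

# Three simulated raters labeling the same 1D landmark coordinates,
# with biases 0.5, -0.3, 0.0 and noise std 0.1, 0.2, 0.4 respectively.
rng = np.random.default_rng(1)
truth = rng.uniform(0, 10, size=200)
D = truth[None, :] + np.array([[0.5], [-0.3], [0.0]]) \
    + rng.normal(0.0, [[0.1], [0.2], [0.4]], size=(3, 200))
t_hat, b_hat, s2_hat = continuous_staple_map(D)
```

Note that only bias differences are determined by the data; the prior fixes the remaining common offset, which is exactly the ambiguity discussed above.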