A learning-based approach integrating pixel-level statistical modeling and spiculation detection is presented for the segmentation of mammographic masses with ill-defined margins and spiculations. The algorithm involves a multi-phase pixel-level classification, using a comprehensive group of regional features, to generate a pixel-level mass-conditional probability map (PM). The mass candidate, along with background clutter, is then extracted from the PM by incorporating prior knowledge of mass shape and location. A multi-scale steerable ridge detection algorithm is employed to detect spiculations. Finally, all the object-level findings, including the mass candidate, detected spiculations, and clutter, along with the PM, are integrated by graph cuts to generate the
final segmentation mask. The method was tested on 54 masses (51 malignant and 3 benign), all with ill-defined
margins and irregular shape or spiculations. The ground truth delineations were provided by five experienced
radiologists. Area overlap ratios of 0.766 (±0.144) and 0.642 (±0.173) were obtained for segmenting the whole mass and only the margin portion, respectively. Williams indices of area- and contour-based measurements indicated that the segmentation results of the algorithm agreed well with the radiologists' delineations. Most importantly, the proposed approach is capable of including the mass margin and its extensions, which are considered key features for breast lesion analysis.
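As a concrete illustration, the area overlap ratio reported above can be computed as an intersection-over-union between the algorithm's mask and a reference delineation. This is a minimal sketch; the paper's exact overlap definition (e.g., how the five readers' delineations are combined) may differ:

```python
def area_overlap_ratio(seg, ref):
    """Intersection-over-union between two binary masks,
    each given as a set of (row, col) pixel coordinates."""
    inter = len(seg & ref)
    union = len(seg | ref)
    return inter / union if union else 0.0

# Toy example: two overlapping 3x3 squares sharing a 2x2 patch.
seg = {(r, c) for r in range(3) for c in range(3)}
ref = {(r, c) for r in range(1, 4) for c in range(1, 4)}
overlap = area_overlap_ratio(seg, ref)  # 4 shared pixels, 14 in the union
```

A ratio of 1.0 means perfect agreement with the reference mask; the margin-only score in the abstract restricts both masks to a band around the contour before computing the same ratio.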
In this study, we present a clinically guided technical method for content-based categorization of mammographic masses.
Our work is motivated by the continuing effort in content-based image annotation and retrieval to extract and model the
semantic content of images. Specifically, we classified the shape and margin of mammographic masses into different categories, which are designated by radiologists according to descriptors from the Breast Imaging Reporting and Data System (BI-RADS) Atlas. Experiments were conducted within subsets selected from datasets consisting of 346 masses.
In the experiments that categorize lesion shape, we obtained a precision of 70% with three classes and 87.4% with two
classes. In the experiments that categorize margin, we obtained precisions of 69.4% and 74.7% for the use of four and
three classes, respectively. With this study, we intend to demonstrate that this classification-based method is applicable to extracting the semantic characteristics of mass appearance, and thus has the potential to be used for automatic categorization and retrieval tasks in clinical applications.
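For reference, per-class precision figures of the kind quoted above can be computed as in this sketch. The BI-RADS-style category names are illustrative, and the per-class convention is an assumption, not taken from the paper:

```python
from collections import Counter

def precision_by_class(y_true, y_pred):
    """Precision for each class c: among the items predicted as c,
    the fraction whose true label is also c."""
    predicted = Counter(y_pred)
    correct = Counter(p for t, p in zip(y_true, y_pred) if t == p)
    return {c: correct[c] / n for c, n in predicted.items()}

# Toy shape labels (round/oval/irregular are BI-RADS-style descriptors).
y_true = ["round", "oval", "irregular", "round", "irregular", "oval"]
y_pred = ["round", "oval", "round", "round", "irregular", "irregular"]
prec = precision_by_class(y_true, y_pred)
```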
To accommodate the inter- and intra-fractional motion of internal organs in prostate cancer treatment, a large margin (5 mm to 25 mm) often has to be considered during radiation therapy planning. Normally, the inter-fractional
motion is more substantial than the intra-fractional counterpart. Therefore, the study of inter-fractional motion
pattern is of special interest for adaptive radiation therapy. Existing methods for organ motion analysis mainly
focus on the deviation of an organ's shape from its mean shape. The deviation information is helpful in choosing a
statistically proper margin, but is of limited use for plan adaptation. In this paper, we propose a new deformation
analysis method that can be directly used for plan adaptation. First, deformation estimation is accomplished by
a fast deformable registration method, which utilizes a contour based multi-grid strategy to register treatment
cone-beam CT (CBCT) images with planning CT images. Second, dominant deformation modes are extracted
by a novel deformation analysis approach. To be specific, a cooperative principal component analysis (PCA)
method is developed to analyze the deformation field in a coarse-to-fine strategy. The deformation modes are
initialized by applying PCA to the organs as a whole and subsequently refined by analyzing the individual organs.
The experimental results show that the organ motion can be well characterized by a few dominant deformation
modes. Based on the dominant modes, a corresponding set of dominant modal plans can be generated for further optimization. Ultimately, an adaptive plan for each treatment can be obtained on-line, while the margin is effectively reduced to minimize unnecessary radiation dose.
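The coarse-to-fine cooperative PCA itself is not reproduced here, but its core step, extracting a dominant deformation mode from a population of deformation fields, can be sketched with plain power iteration. This is a simplification on synthetic data, not the paper's CBCT registrations:

```python
import random

def pca_dominant_mode(fields, iters=300):
    """Return the mean field and the first principal deformation mode
    of a set of flattened deformation fields, via power iteration on
    the sample covariance matrix."""
    n, d = len(fields), len(fields[0])
    mean = [sum(f[j] for f in fields) / n for j in range(d)]
    x = [[f[j] - mean[j] for j in range(d)] for f in fields]
    cov = [[sum(row[i] * row[j] for row in x) / n for j in range(d)]
           for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):  # power iteration toward the top eigenvector
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(t * t for t in w) ** 0.5
        v = [t / norm for t in w]
    return mean, v

# Synthetic fractions: each "deformation field" is a random amount of one
# true mode plus small noise (a toy stand-in for registered CBCT fields).
random.seed(0)
true_mode = [1.0, 2.0, 0.0, 0.0, -1.0, 1.0]
norm = sum(t * t for t in true_mode) ** 0.5
true_mode = [t / norm for t in true_mode]
fields = []
for _ in range(40):
    coeff = random.gauss(0, 5)  # per-fraction mode amplitude
    fields.append([coeff * m + random.gauss(0, 0.01) for m in true_mode])
mean, mode = pca_dominant_mode(fields)
```

In the paper's setting the recovered modes parameterize the dominant organ deformations, so a small set of modal plans can cover most observed anatomy changes.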
The purpose of this study is to develop a Content-Based Image Retrieval (CBIR) system for mammographic computer-aided
diagnosis. We have investigated the potential of using shape, texture, and intensity features to categorize masses, which may enable sorting of similar image patterns in order to facilitate clinical viewing of mammographic masses.
Experiments were conducted within a database that contains 243 masses (122 benign and 121 malignant). The retrieval
performance of each individual feature was evaluated, and the best precision, 79.9%, was obtained using the curvature scale space descriptor (CSSD). By combining several selected shape features for retrieval, the precision improved to 81.4%. By combining the shape, texture, and intensity features together, the precision further improved to 82.3%.
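A minimal sketch of the retrieval step follows: database masses are ranked by distance in feature space, and precision is the fraction of retrieved items sharing the query's class. The two-dimensional feature vectors and Euclidean metric here are illustrative assumptions, not the paper's CSSD features:

```python
def retrieve(query_vec, database, k=3):
    """database: list of (feature_vector, label) pairs. Returns the
    k entries nearest to the query by Euclidean distance."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return sorted(database, key=lambda item: dist(query_vec, item[0]))[:k]

def retrieval_precision(query_vec, query_label, database, k=3):
    """Fraction of the top-k retrieved masses with the query's label."""
    hits = retrieve(query_vec, database, k)
    return sum(1 for _, label in hits if label == query_label) / k

# Toy database: 2-D feature vectors with benign/malignant labels.
db = [([0.1, 0.2], "benign"), ([0.15, 0.25], "benign"),
      ([0.9, 0.8], "malignant"), ([0.85, 0.75], "malignant"),
      ([0.5, 0.5], "benign")]
p = retrieval_precision([0.12, 0.22], "benign", db, k=3)
```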
Based on a high-performance, fast tunable phase retarder and a novel algorithm, an innovative polarization imaging solution is proposed. It allows very fast recording of polarization images, at the frame-rate limit of a CCD. The system contains no moving parts and can be adapted to most existing CCD cameras. The unique measurement procedure allows efficient, accurate polarization imaging. Computer-aided diagnosis software has also been developed for the proposed polarization imaging system.
In recent years there has been increasing interest in studying the propagation of polarized light in randomly scattering media. This paper presents a novel approach to cell and tissue imaging using full Stokes imaging, and to its improved diagnostics using artificial neural networks (ANNs). Phantom experiments were conducted using a prototyped Stokes polarization imaging device. Several types of phantoms, consisting of polystyrene latex spheres of various diameters, were prepared to simulate different conditions of the epidermal layer of skin. Several sets of four images, containing not only intensity but also polarization information, were taken for analysis. Wavelet transforms are first applied to the Stokes components for initial feature analysis and extraction. ANNs are then used to extract diagnostic features for improved classification and prediction. The experimental results show that classification performance using Stokes images is significantly improved over that using the intensity image alone.
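For orientation, the Stokes vector that such image sets encode can be written down directly. This sketch uses the textbook six-measurement scheme (linear analyzers at four angles plus two circular states) rather than the device's four-image retarder sequence, so the mapping is illustrative only:

```python
def stokes_vector(i0, i90, i45, i135, i_rcp, i_lcp):
    """Stokes parameters from six polarized intensity measurements:
    linear analyzers at 0/90/45/135 degrees plus right/left circular."""
    s0 = i0 + i90        # total intensity
    s1 = i0 - i90        # horizontal vs. vertical linear polarization
    s2 = i45 - i135      # +45 vs. -45 degree linear polarization
    s3 = i_rcp - i_lcp   # right vs. left circular polarization
    return s0, s1, s2, s3

# Fully horizontally polarized light: all intensity passes the 0-degree
# analyzer, and the 45/135 and circular channels split it evenly.
hv = stokes_vector(1.0, 0.0, 0.5, 0.5, 0.5, 0.5)
```

Applying this per pixel over registered intensity images yields the four Stokes component images that feed the wavelet and ANN analysis.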
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) has emerged as an effective tool to assess tumor vascular characteristics. DCE-MRI can be used to noninvasively characterize microvasculature, providing information about tumor microvessel structure and function (e.g., tumor blood volume, vascular permeability, tumor perfusion). However, pixels of DCE-MRI represent a composite of more than one distinct functional biomarker (e.g., microvessels with fast or slow perfusion) whose spatial distributions are often heterogeneous. Complementary to various existing methods (e.g., compartment modeling, factor analysis), this paper proposes a blind source separation method which allows for simultaneous computed imaging of multiple biomarkers from composite DCE-MRI sequences. The algorithm is based on a partially-independent component analysis whose parameters are estimated using a subset of informative pixels defining the independent portion of the observations. We demonstrate the principle of the approach on a simulated image data set, and then apply the method to tissue heterogeneity characterization of breast tumors, where the spatial distributions of tumor blood volume, vascular permeability, and tumor perfusion, as well as their time activity curves (TACs), are simultaneously estimated.
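The partially-independent component analysis of the paper is not reproduced here; as a stand-in, a minimal two-channel ICA sketch (whitening followed by a rotation search that maximizes non-Gaussianity) illustrates the blind source separation idea on synthetic signals:

```python
import math, random

def whiten(x1, x2):
    """PCA-whiten two signals: zero mean, identity covariance
    (2x2 case, with an analytic covariance eigendecomposition)."""
    n = len(x1)
    m1, m2 = sum(x1) / n, sum(x2) / n
    a = [v - m1 for v in x1]
    b = [v - m2 for v in x2]
    c11 = sum(v * v for v in a) / n
    c22 = sum(v * v for v in b) / n
    c12 = sum(p * q for p, q in zip(a, b)) / n
    tr, det = c11 + c22, c11 * c22 - c12 * c12
    l1 = tr / 2 + math.sqrt(tr * tr / 4 - det)
    l2 = tr / 2 - math.sqrt(tr * tr / 4 - det)
    e1 = (c12, l1 - c11) if abs(c12) > 1e-12 else (1.0, 0.0)
    h = math.hypot(*e1)
    e1 = (e1[0] / h, e1[1] / h)
    e2 = (-e1[1], e1[0])
    z1 = [(e1[0] * p + e1[1] * q) / math.sqrt(l1) for p, q in zip(a, b)]
    z2 = [(e2[0] * p + e2[1] * q) / math.sqrt(l2) for p, q in zip(a, b)]
    return z1, z2

def excess_kurtosis(z):
    n = len(z)
    return sum(v ** 4 for v in z) / n - 3.0  # assumes unit variance

def ica_2d(x1, x2, steps=180):
    """After whitening, only a rotation remains unknown; grid-search the
    angle that maximizes the total non-Gaussianity of the components."""
    z1, z2 = whiten(x1, x2)
    best_score, best = -1.0, None
    for k in range(steps):
        t = math.pi * k / (2 * steps)  # angles in [0, 90) suffice
        c, s = math.cos(t), math.sin(t)
        u1 = [c * p + s * q for p, q in zip(z1, z2)]
        u2 = [-s * p + c * q for p, q in zip(z1, z2)]
        score = abs(excess_kurtosis(u1)) + abs(excess_kurtosis(u2))
        if score > best_score:
            best_score, best = score, (u1, u2)
    return best

# Two independent synthetic "time activity" sources, linearly mixed.
random.seed(0)
n = 4000
s1 = [random.uniform(-1, 1) for _ in range(n)]                  # sub-Gaussian
s2 = [random.expovariate(1.0) * random.choice([-1, 1])          # super-Gaussian
      for _ in range(n)]
x1 = [0.7 * p + 0.3 * q for p, q in zip(s1, s2)]
x2 = [0.4 * p + 0.6 * q for p, q in zip(s1, s2)]
u1, u2 = ica_2d(x1, x2)
```

The recovered components match the hidden sources up to sign and permutation, which is the inherent ambiguity of blind source separation; the paper's method additionally handles partial dependence among the biomarker sources.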
It is clinically important to develop novel approaches to accurately assess early response to chemoprevention. We propose to quantitatively measure changes in breast density and breast vascularity in glandular tissue to assess early response to chemoprevention. In order to accurately extract glandular tissue from pre- and post-contrast magnetic resonance (MR) images, non-rigid registration is the key to aligning the MR images by recovering the local deformations. In this paper, a new registration method has been developed using finite-element deformable sheet-curve models to accurately register MR breast images for extraction of glandular tissue. Finite-element deformable sheet-curve models are coupled dynamic systems that physically model boundary deformation and image deformation. Specifically, deformable curves are used to obtain a reliable matching of the boundaries using physically constrained deformations. A deformable sheet with a thin-plate-spline energy functional is used to model complex local deformations between the MR breast images. Finite-element deformable sheet-curve models have been applied to register both digital phantoms and MR breast images. The experimental results have been compared to point-based methods such as the thin-plate-spline (TPS) approach, and demonstrate that our method offers a substantial improvement over point-based registration methods in both boundary alignment and local deformation recovery.
Non-rigid image registration is a prerequisite for many medical imaging applications such as change analysis in image-based diagnosis and therapy assessment. Nonlinear interpolation methods may be used to recover the deformation if the correspondence of the extracted feature points is available. However, it may be very difficult to establish such correspondence at an initial stage when confronted with large and complex deformation. In this paper, a mixture of principal axes registration (mPAR) method is proposed to tackle the correspondence problem using a neural computation approach. Its key feature is that it aligns two point sets without needing to establish explicit point correspondence. The mPAR aligns two point sets by minimizing the relative entropy between their probability distributions, resulting in a maximum likelihood estimate of the transformation mixture. The neural computation for the mPAR is developed using a committee machine to obtain a mixture of piece-wise rigid registrations. The complete registration process consists of two steps: (1) using the mPAR to establish an improved point correspondence and (2) using a multilayer perceptron (MLP) neural network to recover the nonlinear deformation. The mPAR method has been applied to register a contrast-enhanced magnetic resonance (MR) image sequence. The experimental results show that our method not only improves the point correspondence but also yields a desirable error-resilience property with respect to control point selection errors.
This paper presents a three-dimensional (3-D) tissue analysis method and its applications in partial volume correction and change analysis. The method uses a stochastic model-based approach and consists of two steps: (1) unsupervised tissue quantification and (2) 3-D segmentation. First, the MR image volume is modeled by the standard finite normal mixture (SFNM) distribution; it has been shown that the SFNM converges to the true distribution when the pixel images are asymptotically independent. Tissue quantification is then achieved through (1) model selection by the minimum description length (MDL) criterion; (2) parameter initialization by optimal histogram quantization; and (3) parameter estimation by a fast EM algorithm using the global 3-D histogram rather than the raw data, as is conventional. Finally, we develop a 3-D segmentation method using maximum likelihood (ML) classification and contextual Bayesian relaxation labeling (CBRL). The CBRL is developed to obtain a consistent labeling solution, based on a localized SFNM formulation that uses neighborhood contextual regularities. The method has been applied to partial volume correction for PET brain images and change analysis for MR breast images.
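The idea of running EM on the global histogram rather than on raw voxels can be sketched as follows. This is a 1-D, two-component toy with synthetic intensities, not the paper's MDL-selected 3-D model:

```python
import math

def em_gmm_histogram(centers, counts, k=2, iters=200):
    """EM for a finite normal mixture fitted directly to a histogram:
    `centers` are bin centers, `counts` are bin occupancies.
    Returns a list of (weight, mean, std) per component."""
    n = sum(counts)
    lo, hi = min(centers), max(centers)
    means = [lo + (i + 0.5) * (hi - lo) / k for i in range(k)]
    stds = [(hi - lo) / (2.0 * k)] * k
    weights = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibility of each component at each bin center.
        resp = []
        for c in centers:
            p = [w / (s * math.sqrt(2 * math.pi)) *
                 math.exp(-0.5 * ((c - m) / s) ** 2)
                 for w, m, s in zip(weights, means, stds)]
            tot = sum(p) or 1e-300
            resp.append([v / tot for v in p])
        # M-step: bin counts weight the sufficient statistics, so the
        # cost scales with the number of bins, not the number of voxels.
        for j in range(k):
            nj = sum(cnt * r[j] for cnt, r in zip(counts, resp))
            weights[j] = nj / n
            means[j] = sum(cnt * r[j] * c
                           for cnt, r, c in zip(counts, resp, centers)) / nj
            var = sum(cnt * r[j] * (c - means[j]) ** 2
                      for cnt, r, c in zip(counts, resp, centers)) / nj
            stds[j] = max(math.sqrt(var), 1e-6)
    return list(zip(weights, means, stds))

# Synthetic two-tissue intensity histogram: 40% N(50, 10) + 60% N(120, 15).
def _pdf(x, m, s):
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

centers = list(range(0, 200))
counts = [1000.0 * (0.4 * _pdf(c, 50, 10) + 0.6 * _pdf(c, 120, 15))
          for c in centers]
fit = sorted(em_gmm_histogram(centers, counts), key=lambda t: t[1])
```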
As a step toward understanding the complex spatial distribution patterns of prostate cancers, a 3D master model of the prostate, showing major anatomical structures and probability maps of tumor location, has been developed in a pilot study. A virtual environment supported by the 3D master model and in vivo imaging features will be used to evaluate, simulate, and optimize image-guided needle biopsy and radiation therapy, thus potentially improving the efficacy of prostate cancer diagnosis, staging, and treatment. A deformable graphics algorithm has been developed to reconstruct graphics models from 200 serially sectioned whole-mount radical prostatectomy specimens and to support computerized needle biopsy simulations. For the construction of a generic model, a principal-axes 3D registration technique has been developed. Simulated evaluations and real-data experiments have shown the satisfactory performance of the method in constructing an initial generic model with localized prostate cancer placement. For the construction of the statistical model, a blended model registration technique is advanced to perform a non-linear warping of each individual model to the generic model so that the prostate cancer probability distribution maps can be accurately positioned. The method uses a spine-surface model and a linear elastic model to dynamically deform both the surface and the volume where object re-slicing is required. For interactive visualization of the 3D master model, four modes of data display are developed: (1) transparent rendering of the generic model, (2) overlaid rendering of cancer distributions, (3) stereo rendering, and (4) true volumetric display; a model-to-image registration technique using synthetic image phantoms is under investigation. Preliminary results have shown that use of this master model allows a correct understanding of prostate cancer distribution patterns and rational optimization of prostate biopsy and radiation therapy strategies.
Advanced image analysis and graphics software is developed to reconstruct and visualize previously imaged prostate specimens to define tumor volume and distribution and the pathways of needle biopsies, thus allowing an improved understanding of prostate cancer behavior and of current diagnosis and staging methodology. In order to reconstruct an accurate surface model of the surgical prostate, contour interpolation and surface reconstruction are performed on extracted contours of the objects of interest. Contour interpolation increases the sample rate in the stacking direction in order to reconstruct sufficiently accurate surfaces of the prostate and its internal anatomical structures. An elastic contour model is developed that computes a force field between adjacent slices to deform the start contour gradually until it conforms to the target contour. A new finite-element deformable surface-spine model is then developed to reconstruct the computerized prostate model from the interpolated contours. A deformable spine of the prostate model is determined from its contours, and all the surface patches are contracted to the spine through expansion/compression forces radiating from the spine, while the spine itself is also confined to the surface. The surface refinement is governed by a second-order partial differential equation from Lagrangian mechanics, and the refining process completes when the energy of this dynamic deformable surface-spine model reaches its minimum. Interactive visualization is achieved using the state-of-the-art 3D graphics toolkit Open Inventor, with a graphical user interface to visualize the reconstructed 3D prostate model, including all internal anatomical structures and their relationships. Finally, an image-guided prostate needle biopsy simulation is implemented to validate current biopsy strategies for tumor detection and tumor volume estimation, in order to improve prostate needle biopsy techniques.
In this paper, a statistically significant master model of localized prostate cancer is developed from pathologically-proven surgical specimens to spatially guide specific points in the biopsy technique for a higher rate of prostate cancer detection and the best possible representation of tumor grade and extension. Based on 200 surgical specimens of the prostate, we have developed a surface reconstruction technique to interactively visualize the clinically significant objects of interest, such as the prostate capsule, urethra, seminal vesicles, ejaculatory ducts, and the different carcinomas, for each of these cases. In order to investigate the complex disease pattern, including tumor distribution, volume, and multicentricity, we created a statistically significant master model of localized prostate cancer by fusing these reconstructed computer models together, followed by a quantitative formulation of the 3D finite mixture distribution. Based on the reconstructed prostate capsule and internal structures, we have developed a technique to align all surgical specimens through elastic matching. By labeling the voxels of localized prostate cancer with '1' and the voxels of other internal structures with '0', we can generate a 3D binary image of the prostate that is simply a mutually exclusive random sampling of the underlying distribution of localized prostate cancer characteristics. In order to quantify the key parameters such as distribution, multicentricity, and volume, we used a finite generalized Gaussian mixture to model the histogram, and estimated the parameter values through information-theoretic criteria and a probabilistic self-organizing mixture. Utilizing minimally-immersive and stereoscopic interactive visualization, an augmented reality can be developed to allow the physician to virtually hold the master model in one hand and use the dominant hand to probe data values and perform a simulated needle biopsy.
An adaptive self-organizing vector quantization method is developed to determine the optimal locations of selective biopsies where maximum likelihood of cancer detection and the best possible representation of tumor grade and extension can theoretically be achieved, thus allowing a comprehensive analysis of pathological information. The preliminary results show that a statistical pattern of localized prostate cancer exists, and that a better understanding of disease patterns associated with tumor volume, distribution, and multicentricity of prostate carcinoma can be obtained from the computerized master model.
We present a predictive learning tree-structured vector quantization technique for medical image compression. A multi-layer perceptron (MLP) based vector predictor is employed to remove first- as well as higher-order correlations that exist among neighboring pixels. We use a learning tree-structured vector quantization (LTSVQ) scheme, based on the competitive learning (CL) algorithm, to encode the residual vectors. The LTSVQ algorithm is computationally very efficient and easy to implement, and provides performance comparable to that of the LBG (Linde-Buzo-Gray) algorithm. We use computerized image analysis (image segmentation) as well as mean square error (MSE) and signal-to-noise ratio (SNR) to evaluate the quality of the compressed images. We apply the neural-network-based predictive LTSVQ to mammographic and magnetic resonance (MR) images, and evaluate image quality at different compression ratios.
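LTSVQ builds a tree of such quantizers; the underlying competitive-learning update can be sketched in a flat form. This is a toy illustration of the CL codebook training step, not the paper's tree-structured or MLP-predictive pipeline:

```python
import random

def train_codebook(vectors, codebook_size, epochs=20, lr0=0.5):
    """Competitive learning: for each training vector, the nearest
    codeword (the 'winner') moves a step toward that vector."""
    rng = random.Random(1)
    codebook = [list(v) for v in rng.sample(vectors, codebook_size)]
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)  # decaying learning rate
        for v in vectors:
            win = min(range(codebook_size),
                      key=lambda j: sum((a - b) ** 2
                                        for a, b in zip(codebook[j], v)))
            codebook[win] = [c + lr * (a - c)
                             for c, a in zip(codebook[win], v)]
    return codebook

def quantize(v, codebook):
    """Map a vector to its nearest codeword (the encoding step)."""
    return min(codebook, key=lambda c: sum((a - b) ** 2
                                           for a, b in zip(c, v)))

# Toy 2-D "residual vectors" drawn from two clusters; training should
# place one codeword near each cluster center.
rng = random.Random(0)
data = ([[rng.gauss(0, 0.1), rng.gauss(0, 0.1)] for _ in range(100)] +
        [[rng.gauss(5, 0.1), rng.gauss(5, 0.1)] for _ in range(100)])
codebook = sorted(train_codebook(data, 2))
```

In the tree-structured variant, each node trains a small codebook like this on the vectors routed to it, which keeps encoding cost logarithmic in the total codebook size.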