SPECIAL SECTION ON MODEL-BASED MEDICAL IMAGE PROCESSING AND ANALYSIS Tomography
We previously introduced a new, effective Bayesian reconstruction method for transmission tomographic reconstruction that is useful in attenuation correction in single-photon-emission computed tomography (SPECT) and positron-emission tomography (PET). The Bayesian reconstruction method uses a novel object model (prior) in the form of a mixture of gamma distributions. The prior models the object as comprising voxels whose values (attenuation coefficients) cluster into a few classes. This model is particularly applicable to transmission tomography since the attenuation map is usually well clustered and the approximate values of the attenuation coefficients in each anatomical region are known. The reconstruction is implemented as a maximum a posteriori (MAP) estimate obtained by iterative maximization of an associated objective function. As with many complex model-based estimations, the objective is nonconcave, and different initial conditions lead to different reconstructions corresponding to different local maxima. To make the method more practical, it is important to avoid such dependence on initial conditions. We propose and test a deterministic annealing (DA) procedure for the optimization. Deterministic annealing is designed to seek approximate global maxima of the objective, and thus to make the reconstruction robust to initial conditions. We present Bayesian reconstructions with and without DA and demonstrate their independence from initial conditions when DA is used. In addition, we empirically show that DA reconstructions are stable with respect to small measurement changes.
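As a schematic of the DA idea only (the gamma-mixture objective and its gradient are not reproduced here), a temperature parameter smooths the nonconcave MAP objective and the maximizer found at each temperature warm-starts the next, cooler stage. The schedule, step size, and gradient-ascent inner loop below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def da_map_estimate(grad_objective, x0, temperatures=(8.0, 4.0, 2.0, 1.0),
                    step=1e-3, iters=200):
    """Deterministic-annealing wrapper: optimize a temperature-smoothed
    objective, warm-starting each stage from the previous maximizer.

    grad_objective(x, T) must return the gradient of the smoothed
    objective at temperature T (T = 1 recovers the true MAP objective).
    """
    x = x0.copy()
    for T in temperatures:              # cool gradually toward T = 1
        for _ in range(iters):          # simple gradient ascent at fixed T
            x += step * grad_objective(x, T)
            x = np.clip(x, 0.0, None)   # attenuation coefficients are nonnegative
    return x
```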
SPECIAL SECTION ON MODEL-BASED MEDICAL IMAGE PROCESSING AND ANALYSIS Tomography
We describe ordered subsets (OS) algorithms applied to regularized expectation-maximization (EM) algorithms for emission tomography. Our reconstruction algorithms are based on a maximum a posteriori approach, which allows us to incorporate a priori information in the form of a regularizer to stabilize the unstable EM algorithm. In this work, we use two-dimensional smoothing splines as regularizers. Our motivation for using such regularizers stems from the fact that, by relaxing the requirement of imposing significant spatial discontinuities and using instead quadratic smoothing splines, solutions are easier to compute and hyperparameter calculation becomes less of a problem. To optimize our objective function, we use the method of iterated conditional modes, which is useful for obtaining convenient closed-form solutions. In this case, step sizes or line-search algorithms necessary for gradient-based descent methods are also avoided. We finally accelerate the resulting algorithm using the OS principle and propose a principled way of scaling smoothing parameters to retain the strength of smoothing for different subset numbers. Our experimental results indicate that our new methods provide quantitatively robust results as well as a considerable acceleration.
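For context, a minimal unregularized ordered-subsets EM sketch is shown below; the smoothing-spline regularizer and ICM update of the paper are omitted, and the system matrix and subset partition are assumed inputs.

```python
import numpy as np

def os_em(A, y, subsets, n_iter=10, eps=1e-12):
    """Ordered-subsets EM for emission tomography (unregularized sketch).

    A       : (n_bins, n_voxels) system matrix
    y       : (n_bins,) measured counts
    subsets : list of index arrays partitioning the projection bins
    """
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        for s in subsets:                    # one sub-iteration per subset
            As = A[s]
            ratio = y[s] / (As @ x + eps)    # measured / expected counts
            x *= (As.T @ ratio) / (As.sum(axis=0) + eps)
    return x

# e.g., subsets = np.array_split(np.random.permutation(n_bins), 8)
```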
SPECIAL SECTION ON MODEL-BASED MEDICAL IMAGE PROCESSING AND ANALYSIS Tomography
We explore the feasibility of merging independent data sets to mitigate the volume anisotropy intrinsic to tomosynthetic reconstructions. Two independent sets of orthogonally oriented projection data are obtained, one from a hand phantom and one from frozen breast tissue. Both objects are enclosed within radiolucent containers holding multiple fiducial reference objects, which facilitate registration of the multiple projections produced by incrementally moving the x-ray source relative to the object about a single axis through a fixed series of angles. These data encompass maximum angular disparities of up to 90 deg for each projection series. The resulting data are projectively transformed and nonlinearly processed using tuned-aperture computed tomography (TACT) to yield a number of contiguous slices equal to the linear resolution of the sampled projections measured in pixels. The resulting slice data are then corrected for differential magnification, appropriately rotated, and linearly merged to yield a relatively complete, volumetrically isotropic representation of the phantom that can be visualized from any desired angle with negligible apparent tomosynthetic distortion. The resulting displays are evaluated subjectively and compared quantitatively with control images produced from optimum projection geometries. The results are consistent with the hypothesis that the volume anisotropy intrinsic to tomosynthetic reconstructions can be minimized by integrating contiguously sampled orthogonal projections.
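As a schematic of the final merging step only (the TACT processing itself is not reproduced), two isotropically resampled slice stacks reconstructed about orthogonal axes might be combined as follows; reducing the alignment to a simple axis permutation is a simplifying assumption standing in for the rotation and magnification corrections described above.

```python
import numpy as np

def merge_orthogonal_stacks(vol_a, vol_b):
    """Linearly merge two tomosynthetic volumes reconstructed about
    orthogonal axes. vol_b is reoriented into vol_a's frame by an axis
    swap (a stand-in for the full rotation/magnification correction),
    then the two volumes are averaged voxelwise."""
    vol_b_aligned = np.transpose(vol_b, (2, 1, 0))  # assumed orthogonal orientation
    assert vol_a.shape == vol_b_aligned.shape
    return 0.5 * (vol_a + vol_b_aligned)
```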
SPECIAL SECTION ON MODEL-BASED MEDICAL IMAGE PROCESSING AND ANALYSIS Probabilistic Models
We present a likelihood model for Bayesian nonrigid image registration that relates the distinct acquisition models of different MRI (magnetic resonance imaging) scanners. The model is derived from a Bayesian network that represents the imaging situation under consideration to construct the appropriate similarity measure for the given situation. The method is compared to the cross-correlation and mutual information measures in a set of registration experiments on different images and over different synthetically generated geometric and intensity distortions. The probability-based similarity measure yields, on average, more accurate and robust registrations than either the cross-correlation or mutual information measures.
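As a sketch of the general idea (not the paper's specific Bayesian-network construction), a likelihood-based similarity measure scores a candidate alignment by the log-probability of one image's intensities given the other's, under an assumed conditional intensity model relating the two scanners:

```python
import numpy as np

def log_likelihood_similarity(fixed, warped, cond_pdf):
    """Sum of log p(fixed | warped) over voxels, where cond_pdf is an
    assumed conditional intensity model cond_pdf(i_fixed, i_warped)
    relating the two scanners' acquisition characteristics."""
    p = cond_pdf(fixed.ravel(), warped.ravel())
    return np.sum(np.log(p + 1e-12))  # higher is a better match
```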
We present an algorithm for segmenting computed radiography (CR) images of extremities into bone and soft-tissue regions. The algorithm is region based: regions are constructed by a region-growing procedure driven by two different statistical tests. Following region growing, a tissue classification method labels each region as either bone or soft tissue. This binary classification is achieved by a voting procedure that clusters the regions in each neighborhood system into two classes. The voting procedure provides a crucial compromise between local and global analysis of the image, which is necessary because of the strong exposure variations seen on the imaging plate. Because some regions are large enough that exposure variations are visible within them, overlapping blocks must also be used during classification. After the tissue classification step, the resulting bone and soft-tissue regions are refined by fitting a second-order surface to each tissue and reevaluating the label of each region according to its distance from the surfaces. The performance of the algorithm is tested on a variety of extremity images using manually segmented images as the gold standard. The experiments show that our algorithm provides a bone boundary with an average area overlap of 90% relative to the gold standard.
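A minimal sketch of statistically driven region growing follows; the paper's two specific tests are not reproduced, and a running z-score test against the current region statistics is assumed instead.

```python
import numpy as np
from collections import deque

def grow_region(img, seed, z_max=2.5, sd_floor=5.0):
    """Grow a region from `seed`, admitting 4-neighbors whose intensity
    lies within z_max standard deviations of the region's running mean;
    sd_floor keeps the test from degenerating while the region is tiny."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    vals = [float(img[seed])]
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        mu = np.mean(vals)
        sd = max(np.std(vals), sd_floor)
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if abs(float(img[nr, nc]) - mu) / sd < z_max:
                    mask[nr, nc] = True
                    vals.append(float(img[nr, nc]))
                    queue.append((nr, nc))
    return mask
```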
SPECIAL SECTION ON MODEL-BASED MEDICAL IMAGE PROCESSING AND ANALYSIS Probabilistic Models
Modified implementations of simulated annealing (SA) for image segmentation are proposed and evaluated. The segmentation procedure is based on a Markov random field (MRF) model for describing regions within an image. SA offers an iterative approach for computing a set of labels with maximum a posteriori (MAP) probability; however, this approach is computationally expensive and lacks robustness in noisy environments. We propose a random cost function (RCF) for computing the posterior energy function in SA. The proposed modified SA (SA-RCF) method exhibits more robust segmentation performance than standard SA at the same computational cost. Alternatively, we propose a multiresolution (MR) approach based on the MRF model, which offers robust segmentation of noisy images with a significant reduction in computational cost. The computational cost and segmentation accuracy of each algorithm are examined using a set of simulated head computerized tomography (CT) phantoms.
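For context, a minimal standard-SA labeling step under a Potts MRF prior might look like the following; the paper's random-cost-function modification is not reproduced, and the Gaussian likelihood, energy terms, and cooling schedule are illustrative.

```python
import numpy as np

def sa_segment(img, means, beta=1.0, T0=4.0, cooling=0.95, sweeps=50):
    """Standard simulated annealing for MAP labeling under a Potts MRF.
    means: assumed class intensity means (Gaussian likelihood, unit var)."""
    rng = np.random.default_rng(0)
    means = np.asarray(means, dtype=float)
    labels = np.argmin((img[..., None] - means) ** 2, axis=-1)
    h, w = img.shape
    T = T0
    for _ in range(sweeps):
        for r in range(h):
            for c in range(w):
                cand = int(rng.integers(len(means)))
                cur = labels[r, c]
                # likelihood change (Gaussian, unit variance)
                dE = 0.5 * ((img[r, c] - means[cand]) ** 2
                            - (img[r, c] - means[cur]) ** 2)
                # Potts prior change over the 4-neighborhood
                for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                    if 0 <= nr < h and 0 <= nc < w:
                        dE += beta * (int(labels[nr, nc] != cand)
                                      - int(labels[nr, nc] != cur))
                if dE < 0 or rng.random() < np.exp(-dE / T):
                    labels[r, c] = cand
        T *= cooling  # geometric cooling schedule
    return labels
```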
SPECIAL SECTION ON MODEL-BASED MEDICAL IMAGE PROCESSING AND ANALYSIS Probabilistic Models
Recently, research in the field of content-based image retrieval has attracted a lot of attention. Nevertheless, most existing methods cannot be easily applied to medical image databases, as global image descriptions based on color, texture, or shape do not supply sufficient semantics for medical applications. The concept for content-based image retrieval in medical applications (IRMA) is therefore based on the separation of the following processing steps: categorization of the entire image; registration with respect to prototypes; extraction and query-dependent selection of local features; hierarchical blob representation including object identification; and finally, image retrieval. Within the first step of processing, images are classified according to image modality, body orientation, anatomic region, and biological system. The statistical classifier for the anatomic region is based on Gaussian kernel densities within a probabilistic framework for multiobject recognition. Special emphasis is placed on invariance, employing a probabilistic model of variability based on tangent distance and an image distortion model. The performance of the classifier is evaluated using a set of 1617 radiographs from daily routine, where the error rate of 8.0% in this six-class problem is an excellent result, taking into account the difficulty of the task. The computed posterior probabilities are furthermore used in the subsequent steps of the retrieval process.
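As a sketch of the kernel-density classification step only (the tangent distance and image distortion model are omitted, and plain Euclidean distance is assumed in their place):

```python
import numpy as np

def kde_classify(x, train_feats, train_labels, n_classes, sigma=1.0):
    """Gaussian kernel density classifier: score each class by the
    Parzen density of x under that class's training examples, then
    pick the class with the highest posterior (equal priors assumed)."""
    d2 = np.sum((train_feats - x) ** 2, axis=1)      # squared distances
    k = np.exp(-d2 / (2 * sigma ** 2))               # Gaussian kernel values
    scores = np.bincount(train_labels, weights=k, minlength=n_classes)
    posteriors = scores / (scores.sum() + 1e-12)
    return np.argmax(posteriors), posteriors
```

The posteriors, not just the decision, are returned here because the abstract notes they feed the subsequent retrieval steps.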
SPECIAL SECTION ON MODEL-BASED MEDICAL IMAGE PROCESSING AND ANALYSIS Active Models
A novel approach based on a multiscale edge-guided wavelet snake model is developed for delineation of pulmonary nodules in digital chest radiographs. The approach is applied to the differentiation of nodules from false positives reported by our computer-aided diagnosis (CAD) scheme for nodule detection. The wavelet snake is a deformable contour designed to identify the boundary of a round object; its shape is determined by a set of wavelet coefficients in a certain range of scales. Portions of the boundary of a nodule are first extracted by multiscale edge representation. The multiscale edges are then fitted by deforming the shape of the snake through changes in the wavelet coefficients using a gradient descent algorithm. The degree of overlap between the fitted snake and the multiscale edges is calculated as a measure for classifying nodules and false detections. A total of 242 regions of interest, consisting of 90 nodules and 152 false positives reported by our existing CAD scheme, are used to evaluate our method by means of receiver operating characteristic (ROC) analysis. These false positives are difficult to distinguish from nodules: they survive the various false-positive elimination processes already employed in our CAD scheme. Our method based on the multiscale edge-guided snake model yields an area under the ROC curve of 0.74, allowing elimination of 15% of the false positives at the cost of only one nodule. This result indicates that our method can be effective in classifying nodules and false positives, even when difficult false positives are included.
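A minimal sketch of the final classification feature, the fraction of the fitted snake contour lying on detected multiscale edges; the contour rasterization and pixel tolerance below are illustrative assumptions.

```python
import numpy as np

def snake_edge_overlap(snake_pts, edge_map, tol=1):
    """Fraction of snake contour points that fall within `tol` pixels of
    a multiscale edge; higher overlap suggests a true nodule boundary.
    snake_pts: (N, 2) array of (x, y) points; edge_map: boolean array."""
    h, w = edge_map.shape
    hits = 0
    for x, y in np.round(snake_pts).astype(int):
        r0, r1 = max(y - tol, 0), min(y + tol + 1, h)
        c0, c1 = max(x - tol, 0), min(x + tol + 1, w)
        hits += bool(edge_map[r0:r1, c0:c1].any())
    return hits / len(snake_pts)
```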
The signs and symptoms of tracheal stenosis can create confusion about the etiology of the problem. While bronchoscopy is the diagnostic method of choice for evaluating the extent and localization of the lesion, the use of x-ray computed axial tomography (CAT) images has also been considered. Recent work on airway segmentation in CAT images proposes the extensive use of automatic segmentation techniques based on 3-D region growing. This technique is computationally expensive, and alternative analysis procedures are therefore still under development. We present a segmentation method built on an active surface model based on cubic spline interpolation. The 3-D rendering of the upper-airway path segmented from neck and thorax CAT scans using the proposed method is validated with regard to its possible use as a diagnostic tool for the characterization of tracheal stenosis. Results on both synthetic and real CAT scan volumes indicate that the proposed procedure improves on the reference active-model methods.
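To illustrate the spline component alone (the surface evolution itself is not reproduced), a per-slice airway cross-section can be represented by a few control points and resampled with periodic cubic splines; this parameterization is an assumption, not the paper's exact construction.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def spline_contour(control_pts, n_samples=200):
    """Resample a closed airway cross-section from sparse control points
    using periodic cubic splines (one spline per coordinate)."""
    pts = np.vstack([control_pts, control_pts[:1]])   # close the curve
    t = np.linspace(0.0, 1.0, len(pts))
    sx = CubicSpline(t, pts[:, 0], bc_type='periodic')
    sy = CubicSpline(t, pts[:, 1], bc_type='periodic')
    ts = np.linspace(0.0, 1.0, n_samples, endpoint=False)
    return np.column_stack([sx(ts), sy(ts)])
```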
SPECIAL SECTION ON MODEL-BASED MEDICAL IMAGE PROCESSING AND ANALYSIS Active Models
A new method to automatically locate the optic disk and estimate its shape in color retinal images is proposed. Principal component analysis (PCA) is applied to candidate regions at various scales to locate the optic disk: the minimum distance between the original retinal image and its projection onto "disk spaces" indicates the center of the optic disk. The shape of the optic disk is then obtained by an active shape method in which an affine transformation maps the shape model from shape space to image space. Vessels present inside and around the optic disk are not eliminated beforehand; their effects are instead incorporated into the processing. The proposed algorithm takes advantage of a top-down strategy, which achieves more robust results, especially in the presence of large areas of bright lesions and when the edge of the optic disk is partly occluded by vessels.
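A minimal sketch of the PCA localization step: candidate windows are scored by their reconstruction distance from a trained "disk space," and the window minimizing this distance marks the disk center. The training data and window scanning are assumed machinery outside this snippet.

```python
import numpy as np

def disk_space_distance(patch, mean_disk, components):
    """Distance between a candidate patch and its projection onto the
    PCA 'disk space'. `components` rows are orthonormal eigendisks;
    `mean_disk` is the flattened mean training disk."""
    v = patch.ravel().astype(float) - mean_disk
    coeffs = components @ v                 # project into disk space
    recon = components.T @ coeffs           # back-project
    return np.linalg.norm(v - recon)        # residual = distance to subspace
```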
SPECIAL SECTION ON MODEL-BASED MEDICAL IMAGE PROCESSING AND ANALYSIS Constraints
We present a new method for evaluating the performance of nonrigid image registration algorithms by analyzing the invertibility and transitivity properties of the transformations they produce. The invertibility and transitivity of transformations computed using a unidirectional and a consistent linear-elastic registration algorithm are evaluated. Invertibility is evaluated by comparing the composition of the transformations from image A to B and from B to A with the identity mapping. Transitivity is evaluated by measuring the difference between the identity mapping and the composition of the transformations from A to B, B to C, and C to A. Transformations are generated by matching three computer-generated phantoms, three computed tomography (CT) datasets of infant heads, and 23 magnetic resonance imaging (MRI) datasets of adult brains. In all cases, the inverse consistency constraint (ICC) algorithm outperforms the unidirectional algorithm, producing transformations with less inverse consistency error and less transitivity error. For the MRI brain data, the ICC algorithm reduced the maximum inverse consistency error by a factor of 205, the average transitivity error by 50%, and the maximum transitivity error by 37%, on average, compared to the unidirectional algorithm.
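A sketch of the inverse consistency error for 2-D displacement fields follows: the forward and backward fields are composed by interpolation and the deviation from the identity is measured. Using scipy's map_coordinates for the warp is an implementation choice here, not the paper's.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def inverse_consistency_error(fwd, bwd):
    """Error of h(x) = fwd(x) + bwd(x + fwd(x)): deviation of the
    composed forward/backward displacement fields from the identity.
    fwd, bwd: arrays of shape (2, H, W) holding (row, col) displacements."""
    _, H, W = fwd.shape
    rows, cols = np.mgrid[0:H, 0:W].astype(float)
    r2, c2 = rows + fwd[0], cols + fwd[1]     # forward-warped positions
    # sample the backward displacement at the forward-warped positions
    b0 = map_coordinates(bwd[0], [r2, c2], order=1, mode='nearest')
    b1 = map_coordinates(bwd[1], [r2, c2], order=1, mode='nearest')
    err = np.sqrt((fwd[0] + b0) ** 2 + (fwd[1] + b1) ** 2)
    return err.mean(), err.max()
```

The transitivity error is analogous, with three fields composed (A to B, B to C, C to A) instead of two.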
SPECIAL SECTION ON MODEL-BASED MEDICAL IMAGE PROCESSING AND ANALYSIS Features
The visualization of left ventricle (LV) motion in gated single-photon-emission computerized tomography (SPECT) studies is complicated by the fact that 3-D density images cannot be directly presented on common display devices. A number of techniques, most of them concerned with visualization, have been developed to aid in the classification of the images. However, interpretation of LV images by strictly visual techniques has been shown to be subject to errors and inconsistencies. For this reason, diagnostic assistance can be improved only through the development of automatic or semiautomatic methods to analyze and quantify LV parameters. We propose an automatic method to estimate the myocardial kinetic energy directly from gated SPECT sequences, based on the optical flow method refined with a multiresolution technique. Specifically, the method quantifies LV motion by a series of 3-D velocity vector fields computed for each voxel over the sequence of images. The resulting 3-D velocity vector field is used to estimate the kinetic energy, which may be an indicator of cardiac condition. The proposed procedure was applied to a group of volunteers, and the cardiac condition of each subject was studied through the relation between the maximum and minimum values of kinetic energy observed during the cardiac cycle.
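A minimal sketch of the energy estimate from a per-voxel velocity field; the uniform density weighting is an assumption (in practice, the activity image could serve as a mass surrogate).

```python
import numpy as np

def kinetic_energy(velocity, mass=None):
    """Total kinetic energy 0.5 * sum(m * |v|^2) over a 3-D velocity
    field of shape (3, X, Y, Z); `mass` optionally weights each voxel."""
    speed2 = np.sum(velocity ** 2, axis=0)    # |v|^2 per voxel
    if mass is None:
        mass = np.ones_like(speed2)           # assumed uniform density
    return 0.5 * np.sum(mass * speed2)

# Per subject, compare max and min of kinetic_energy over the gated cycle.
```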
SPECIAL SECTION ON MODEL-BASED MEDICAL IMAGE PROCESSING AND ANALYSIS Features
Segmentation of anatomical regions of the brain is one of the fundamental problems in medical image analysis. It is traditionally solved by iso-surfacing or through the use of active contours/deformable models on gray-scale magnetic resonance imaging (MRI) data. We develop a technique that uses the anisotropic diffusion properties of brain tissue, available from diffusion tensor (DT)-MRI, to segment brain structures. We develop a computational pipeline that runs from raw diffusion tensor data through the computation of invariant anisotropy measures to the construction of geometric models of brain structures. This provides an environment for user-controlled 3-D segmentation of DT-MRI datasets. We use a level set approach to remove noise from the data and to produce smooth geometric models. We apply our technique to DT-MRI data of a human subject and build models of the isotropic and strongly anisotropic regions of the brain. Once the geometric models have been constructed, they can be combined to study spatial relationships and analyzed quantitatively to produce the volume and surface area of the segmented regions.
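One standard invariant anisotropy measure such a pipeline can compute is fractional anisotropy (FA) from the tensor eigenvalues, shown here per voxel; the paper's specific choice of measures may differ.

```python
import numpy as np

def fractional_anisotropy(tensor):
    """FA of a single 3x3 diffusion tensor:
    sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||."""
    lam = np.linalg.eigvalsh(tensor)          # eigenvalues, ascending
    num = np.sqrt(np.sum((lam - lam.mean()) ** 2))
    den = np.sqrt(np.sum(lam ** 2)) + 1e-12
    return np.sqrt(1.5) * num / den           # 0 = isotropic, ->1 = anisotropic
```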
We present a method of automatic graph construction for describing the geometric structure of nerve cells from 3-D confocal scans. The method traces the branch center points in the branch axial direction, using the locations of difficult regions inside the neuronal branches as hints. The axes were obtained in previous work by computing pairwise vector products of intersecting gradients associated with across-scales-validated boundary edge points of the neuronal branches. The axis anchor points are the branch center points, estimated as the "center of mass" of all intersecting gradient end points. The difficult regions are the axis anchor points having a high directional variance among the vector products contributing to the associated axis. The presented algorithm, which uses all the information obtained from preprocessing, is robust to variable contrast, has little sensitivity to boundary irregularities, is adaptive to variability of branch geometry, and produces a sparse, topology-preserving graph of the neuron under investigation. A subsequent surface reconstruction based on this graph (Schmitt et al., 2001), accompanied by labeling of the graph with geometric measurements, would be feasible.
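A sketch of the anchor-point estimate described above (the center of mass of intersecting gradient end points), together with a directional-variance flag for difficult regions; the variance statistic and threshold are illustrative assumptions.

```python
import numpy as np

def anchor_point(endpoints, directions, var_thresh=0.5):
    """Estimate a branch center point as the mean ('center of mass') of
    intersecting gradient end points, and flag it as a difficult region
    when the directions contributing to the axis vary strongly.
    endpoints, directions: (N, 3) arrays."""
    center = np.mean(endpoints, axis=0)
    d = directions / (np.linalg.norm(directions, axis=1, keepdims=True) + 1e-12)
    # directional variance: 0 for perfectly aligned vectors, ->1 when spread
    dir_var = 1.0 - np.linalg.norm(d.mean(axis=0))
    return center, dir_var > var_thresh
```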
Techniques based on thresholding of wavelet coefficients are gaining popularity for denoising data. The idea is to transform the data into the wavelet basis, where the "large" coefficients are mainly signal and the "smaller" ones mainly represent noise; by suitably modifying these coefficients, the noise can be removed from the data. We evaluate several 2-D denoising procedures using test images corrupted with additive Gaussian noise. We consider global, level-dependent, and subband-dependent implementations of these techniques. Our results, using the mean squared error as a measure of denoising quality, show that the SureShrink and BayesShrink methods consistently outperform the other wavelet-based techniques. In contrast, we found that a combination of simple spatial filters led to images that were grainier but had smoother edges, even though the error was smaller than for the wavelet-based methods.
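A minimal sketch of global wavelet denoising with the universal (VisuShrink-style) threshold, using PyWavelets; the decomposition level and the noise estimate from the median absolute deviation of the finest diagonal subband are conventional choices, not the paper's exact settings.

```python
import numpy as np
import pywt

def wavelet_denoise(img, wavelet='db4', level=3):
    """Soft-threshold all detail coefficients with the universal
    threshold sigma * sqrt(2 log n), with sigma estimated from the
    finest diagonal subband (MAD / 0.6745)."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(img.size))
    denoised = [coeffs[0]]  # keep approximation coefficients untouched
    for details in coeffs[1:]:
        denoised.append(tuple(pywt.threshold(d, thresh, mode='soft')
                              for d in details))
    return pywt.waverec2(denoised, wavelet)
```

Level-dependent and subband-dependent variants differ only in computing a separate threshold per level or per (level, orientation) pair instead of one global value.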
In image databases, variations in imaging conditions and preprocessing may result in similar originals that exhibit a low measure of similarity when color information is used in standard image retrieval methods. We examine the performance of various color-based retrieval strategies to see whether, and to what degree, the effectiveness of retrieval improves with Retinex-based preprocessing, regardless of the strategy adopted. The results of experiments performed on four different databases are reported and discussed.
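As a sketch of Retinex-style preprocessing (a single-scale variant is shown; the databases and retrieval strategies of the experiments are not reproduced):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(channel, sigma=80.0):
    """Single-scale Retinex on one color channel: log of the image
    minus log of a Gaussian-smoothed illumination estimate, reducing
    the imaging-condition variations that hurt color-based retrieval."""
    channel = channel.astype(float) + 1.0          # avoid log(0)
    illumination = gaussian_filter(channel, sigma)
    return np.log(channel) - np.log(illumination)
```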
We propose a new method to compress error-diffused bilevel images with resolution scalability. The method combines inverse halftoning and rehalftoning. For the inverse halftoning, we combine each 2×2 block of dots into a single pixel of a resolution-reduced image, where each pixel takes one of the multilevel values 0, 1, 2, 3, or 4. After the inverse halftoning, the resolution-reduced multilevel image is halftoned using an error diffusion algorithm. The resolution of an error-diffused bilevel image can thus be reduced by repeating the inverse halftoning and rehalftoning processes. After reducing the image size, we encode the error-diffused bilevel image progressively, from the lowest resolution image to the highest. To encode higher resolution images, we use the information in the previously coded lower resolution image. Although the compression ratios of the proposed algorithm are similar to those of progressive JBIG (Joint Bi-level Image Experts Group) coding, the image quality of the resolution-reduced image from the proposed algorithm is much better than that from progressive JBIG.
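The inverse-halftoning step reduces to counting dots in each 2×2 block; a sketch, assuming a binary array whose dimensions are even:

```python
import numpy as np

def inverse_halftone_2x2(bilevel):
    """Combine each 2x2 block of a bilevel image (values 0/1) into one
    pixel of a half-resolution image with levels 0..4 (the dot count)."""
    h, w = bilevel.shape
    assert h % 2 == 0 and w % 2 == 0, "even dimensions assumed"
    blocks = bilevel.reshape(h // 2, 2, w // 2, 2)
    return blocks.sum(axis=(1, 3))   # per-block dot count in {0, ..., 4}
```

Rehalftoning then error-diffuses this five-level image back to a bilevel image at the reduced resolution, and the pair of steps is repeated to build the resolution pyramid.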
After a review of the circle fitting issue, we recall a relatively unknown method derived from a classical geometric result. We propose an improvement of this technique by reweighting the data, iterating the procedure, and choosing at every step as the new inversion point the one diametrically opposite to the previous inversion point.
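For orientation only, the snippet below sketches a generic iteratively reweighted algebraic (Kåsa) circle fit; it is a stand-in baseline illustrating the reweighting idea, not the inversion-based method the abstract describes.

```python
import numpy as np

def reweighted_circle_fit(pts, n_iter=5):
    """Iteratively reweighted algebraic (Kasa) circle fit. Solves
    x^2 + y^2 + a*x + b*y + c = 0 in weighted least squares,
    downweighting points far from the current circle estimate."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x ** 2 + y ** 2)
    w = np.ones_like(x)
    for _ in range(n_iter):
        Aw = A * w[:, None]
        a, b, c = np.linalg.lstsq(Aw, w * rhs, rcond=None)[0]
        cx, cy = -a / 2, -b / 2
        r = np.sqrt(max(cx ** 2 + cy ** 2 - c, 0.0))
        resid = np.abs(np.hypot(x - cx, y - cy) - r)
        w = 1.0 / (1.0 + resid)       # reweight: distant points count less
    return cx, cy, r
```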
The processing of bank checks is one application that continues to rely heavily on the movement of paper. Checks are currently read by human eyes and physically transported to the bank of the payer, involving significant time and cost. Since paper checks constitute a popular mechanism for noncash payments, and the volume of checks continues to be high, there is a significant interest in the banking industry for new approaches that can read paper checks automatically. We propose a new approach to read the numerical amount field on the check; this field is also called the courtesy amount field. In the case of check processing, the segmentation of unconstrained strings into individual digits is a challenging task because one must accommodate special cases involving connected or overlapping digits, broken digits, and digits physically connected to a piece of stroke that belongs to a neighboring digit. The described system involves three stages: the segmentation of the string into a series of individual characters, the normalization of each isolated character, and the recognition of each character based on a neural network classifier.
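A sketch of the normalization stage only: crop a segmented character to its bounding box and rescale it to a fixed grid for the neural network input. The 16×16 target size and bilinear interpolation are assumptions.

```python
import numpy as np
from scipy.ndimage import zoom

def normalize_character(char_img, out_size=16):
    """Crop a segmented character to its bounding box and rescale it to
    an out_size x out_size grid suitable as classifier input."""
    rows = np.any(char_img > 0, axis=1)
    cols = np.any(char_img > 0, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    crop = char_img[r0:r1 + 1, c0:c1 + 1].astype(float)
    factors = (out_size / crop.shape[0], out_size / crop.shape[1])
    return zoom(crop, factors, order=1)   # bilinear rescale
```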
This PDF file contains the book review of "The MPEG-4 Book," by Fernando Pereira and Touradj Ebrahimi, for JEI Vol. 12, Issue 01.