Diffraction-enhanced imaging (DEI) is an emerging x-ray imaging method that simultaneously yields x-ray attenuation and refraction images and holds great promise for soft-tissue imaging. DEI has mainly been studied using synchrotron sources, but efforts have been made to transition the technology to more practical implementations using conventional x-ray sources. The main technical challenge of this transition lies in the relatively low x-ray flux obtained from conventional sources, leading to photon-limited data contaminated by Poisson noise. We address several issues that must be understood in order to design and optimize DEI systems with respect to noise performance. Specifically, we: (a) develop equations describing the noise properties of DEI images, (b) derive the conditions under which the DEI algorithm is statistically optimal, (c) characterize the imaging performance that can be obtained as measured by task-based metrics, and (d) consider image-processing steps that may be employed to mitigate noise effects.
Retrieving a set of known lesions similar to the one being evaluated might be of value for assisting radiologists to distinguish between benign and malignant clustered microcalcifications (MCs) in mammograms. In this work, we investigate how perceptually similar cases with clustered MCs may relate to one another in terms of their underlying characteristics (from disease condition to image features). We first conduct an observer study to collect similarity scores from a group of readers (five radiologists and five non-radiologists) on a set of 2,000 image pairs, which were selected from 222 cases based on their image features. We then explore the potential relationship among the different cases as revealed by their similarity ratings. We apply the multi-dimensional scaling (MDS) technique to embed all the cases in a 2-D plot, in which perceptually similar cases are placed in close vicinity of one another based on their level of similarity. Our results show that cases having different characteristics in their clustered MCs are accordingly placed in different regions of the plot. Moreover, cases of the same pathology tend to be clustered together locally, and neighboring cases (which are more similar) also tend to be similar in their clustered MCs (e.g., cluster size and shape). These results indicate that subjective similarity ratings from the readers are well correlated with the image features of the underlying MCs, and that perceptually similar cases could be of diagnostic value for discriminating between malignant and benign cases.
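The MDS embedding step described above can be illustrated with classical (Torgerson) multidimensional scaling, which places items in a low-dimensional plot given only their pairwise dissimilarities. This is a minimal sketch: the study above may well have used a different (e.g., nonmetric) MDS variant on the subjective ratings, and the function name and toy data here are illustrative.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical (Torgerson) multidimensional scaling.

    D : (n, n) symmetric matrix of pairwise dissimilarities.
    Returns an (n, dim) embedding whose pairwise Euclidean distances
    approximate the entries of D.
    """
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:dim]         # keep the largest eigenvalues
    L = np.sqrt(np.maximum(vals[order], 0.0))
    return vecs[:, order] * L

# Toy demo: a 2-D point configuration is recovered (up to rotation/reflection)
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 2))
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
Y = classical_mds(D, dim=2)
D_hat = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
print(np.allclose(D, D_hat, atol=1e-6))
```

For genuinely Euclidean input the reconstruction is exact up to numerical precision; for subjective similarity ratings the embedding is only an approximation, which is why stress-minimizing variants are often preferred.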
In this work, we conducted an imaging study to make a direct, quantitative comparison of image features measured by
film and full-field digital mammography (FFDM). We acquired images of cadaver breast specimens containing
simulated microcalcifications using both a GE digital mammography system and a screen-film system. To quantify the
image features, we calculated and compared a set of 12 texture features derived from spatial gray-level dependence
matrices. Our results demonstrate a high degree of agreement between film and FFDM, with the correlation
coefficient between the two modalities' feature vectors (formed by the 12 texture features) being 0.9569; in addition, a paired
sign test reveals no significant difference between film and FFDM features. These results indicate that texture features
may be interchangeable between film and FFDM for CAD algorithms.
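A minimal sketch of how features of this kind are computed from a spatial gray-level dependence (co-occurrence) matrix. The 12 features used in the study are not enumerated in the abstract; the three Haralick-style features shown here, and the quantization and offset choices, are illustrative assumptions.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Spatial gray-level dependence (co-occurrence) matrix for one offset."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)  # quantize
    P = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[q[y, x], q[y + dy, x + dx]] += 1
    return P / P.sum()                    # normalize to joint probabilities

def texture_features(P):
    """Three of the classic co-occurrence texture features."""
    i, j = np.indices(P.shape)
    return {
        "energy": (P ** 2).sum(),
        "contrast": ((i - j) ** 2 * P).sum(),
        "homogeneity": (P / (1.0 + np.abs(i - j))).sum(),
    }

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64))
feats = texture_features(glcm(img, dx=1, dy=0))
print(feats)
```

In practice such features are computed for several offsets and directions and then averaged or concatenated into a feature vector, as in the 12-feature comparison above.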
In this paper, we present a numerical observer for image quality assessment, aiming to predict human observer accuracy
in a cardiac perfusion defect detection task for single-photon emission computed tomography (SPECT). In medical
imaging, image quality should be assessed by evaluating the human observer accuracy for a specific diagnostic task.
This approach is known as task-based assessment. Such evaluations are important for optimizing and testing imaging
devices and algorithms. Unfortunately, human observer studies with expert readers are costly and time-consuming. To
address this problem, numerical observers have been developed as a surrogate for human readers to predict human
diagnostic performance. The channelized Hotelling observer (CHO) with internal noise model has been found to predict
human performance well in some situations, but does not always generalize well to unseen data. We have argued in the
past that finding a model to predict human observers could be viewed as a machine learning problem. Following this
approach, in this paper we propose a channelized relevance vector machine (CRVM) to predict human diagnostic scores
in a detection task. We have previously used channelized support vector machines (CSVM) to predict human scores and
have shown that this approach offers better and more robust predictions than the classical CHO method. The comparison
of the proposed CRVM with our previously introduced CSVM method suggests that CRVM can achieve similar
generalization accuracy, while dramatically reducing model complexity and computation time.
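To make the channelized-observer idea concrete, the following sketch implements a basic channelized Hotelling observer (CHO): images are projected onto a small set of channels and a Hotelling template is computed on the channel outputs. The difference-of-Gaussians channel set and the toy signal are common illustrative choices, not the channels or data of the study above, and no internal-noise model is included.

```python
import numpy as np

def dog_channels(size=32, n_channels=4, sigma0=1.5, alpha=1.67):
    """Difference-of-Gaussians radial channels, a common CHO channel set."""
    y, x = np.indices((size, size)) - size // 2
    r2 = x ** 2 + y ** 2
    chans = []
    for k in range(n_channels):
        s1, s2 = sigma0 * alpha ** k, sigma0 * alpha ** (k + 1)
        g = np.exp(-r2 / (2 * s2 ** 2)) - np.exp(-r2 / (2 * s1 ** 2))
        chans.append(g.ravel() / np.linalg.norm(g))
    return np.array(chans)                     # (n_channels, size*size)

def cho_detectability(signal_imgs, noise_imgs, U):
    """Hotelling detectability index d' computed on channelized data."""
    vs, vn = signal_imgs @ U.T, noise_imgs @ U.T
    S = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))
    dmean = vs.mean(0) - vn.mean(0)
    return float(np.sqrt(dmean @ np.linalg.solve(S, dmean)))

# Toy demo: a Gaussian blob signal in white noise
rng = np.random.default_rng(2)
size = 32
y, x = np.indices((size, size)) - size // 2
blob = 2.0 * np.exp(-(x ** 2 + y ** 2) / (2 * 3.0 ** 2))
noise_imgs = rng.normal(size=(200, size * size))
signal_imgs = rng.normal(size=(200, size * size)) + blob.ravel()
U = dog_channels(size)
print(cho_detectability(signal_imgs, noise_imgs, U))
```

The CRVM and CSVM observers discussed above replace the linear Hotelling readout of the channel outputs with a learned (and in the CRVM case, sparse Bayesian) regression model trained on human scores.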
In medical imaging, image quality is commonly assessed by measuring the performance of a human observer performing
a specific diagnostic task. However, in practice, studies involving human observers are time-consuming and difficult to
implement. Therefore, numerical observers have been developed, aiming to predict human diagnostic performance to
facilitate image quality assessment. In this paper, we present a numerical observer for assessment of cardiac motion in
cardiac-gated SPECT images. Cardiac-gated SPECT is a nuclear medicine modality used routinely in the evaluation of
coronary artery disease. Numerical observers have been developed for image quality assessment via analysis of
detectability of myocardial perfusion defects (e.g., the channelized Hotelling observer), but no numerical observer for
cardiac motion assessment has been reported. In this work, we present a method to design a numerical observer aiming
to predict human performance in detection of cardiac motion defects. Cardiac motion is estimated from reconstructed
gated images using a deformable mesh model. Motion features are then extracted from the estimated motion field and
used to train a support vector machine regression model predicting human scores (human observers' confidence in the
presence of the defect). Results show that the proposed method could accurately predict human detection performance
and achieve good generalization properties when tested on data with different levels of post-reconstruction filtering.
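The regression step, predicting a continuous human confidence score from extracted motion features, can be sketched as follows. To keep the example dependency-free, Gaussian-kernel ridge regression is used here as a simple stand-in for the support vector regression of the study; the feature dimensions and the synthetic "human score" are invented for illustration.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """Gaussian (RBF) kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KernelRidge:
    """Gaussian-kernel ridge regression: a simple stand-in for SVM
    regression when predicting continuous human confidence scores."""
    def __init__(self, lam=1e-2, gamma=0.5):
        self.lam, self.gamma = lam, gamma
    def fit(self, X, y):
        K = rbf_kernel(X, X, self.gamma)
        self.X = X
        self.alpha = np.linalg.solve(K + self.lam * np.eye(len(X)), y)
        return self
    def predict(self, X):
        return rbf_kernel(X, self.X, self.gamma) @ self.alpha

# Toy demo: summary motion features -> synthetic surrogate "human score"
rng = np.random.default_rng(3)
X = rng.normal(size=(80, 3))
y = np.tanh(X[:, 0]) + 0.1 * rng.normal(size=80)
model = KernelRidge().fit(X[:60], y[:60])
err = np.mean((model.predict(X[60:]) - y[60:]) ** 2)
print(err)
```

Generalization across post-reconstruction filter settings, as tested above, corresponds to evaluating such a model on feature distributions it was not trained on.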
We have developed a realistic three-dimensional breast lesion phantom that can be computationally embedded in
physically-acquired background images of normal breast tissue. In order to develop new imaging techniques aimed at
the detection and diagnosis of breast lesions, a large number of lesions with varying physical characteristics must be
tested, especially if physical characteristics must be correlated with observed image features. The new tool presented
here, which incorporates three-dimensional tumor features, is potentially useful for testing imaging techniques such as
CT, tomosynthesis, and phase-sensitive X-ray imaging, as these require three-dimensional tissue models. The simulated
lesions improve significantly upon current methods, which lack the complexity and physical attributes of real tumors, by
incorporating a stochastic Gaussian random sphere model to simulate the central tumor mass and calcifications, and an
iterative fractal branching algorithm to model the complex spicula present in many tumors. Results show that user-defined
lesions with realistic features can be computationally embedded in mammographic background images and that
a wide range of physical properties can be modeled.
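The stochastic central-mass model can be sketched as a randomly perturbed sphere: the log-radius is modulated by a random low-order harmonic series in the spherical angles, keeping the surface closed and the radius positive. This is only a loose illustration of the "Gaussian random sphere" idea; the mode parameterization, decay law, and parameter values here are assumptions, not the paper's exact formulation.

```python
import numpy as np

def gaussian_random_sphere(n_theta=40, n_phi=80, r0=1.0, sigma=0.15,
                           n_modes=6, seed=0):
    """Random star-shaped surface: log-radius perturbed by a random
    low-order harmonic series in (theta, phi)."""
    rng = np.random.default_rng(seed)
    theta = np.linspace(0, np.pi, n_theta)[:, None]
    phi = np.linspace(0, 2 * np.pi, n_phi)[None, :]
    s = np.zeros((n_theta, n_phi))
    for l in range(1, n_modes + 1):
        a = rng.normal(size=2) / l ** 2          # amplitudes decay with order
        b = rng.normal(size=2) / l ** 2
        s += (a[0] * np.cos(l * theta) + a[1] * np.sin(l * theta)) \
           * (b[0] * np.cos(l * phi) + b[1] * np.sin(l * phi))
    r = r0 * np.exp(sigma * s)                   # log-normal radius keeps r > 0
    x = r * np.sin(theta) * np.cos(phi)
    y = r * np.sin(theta) * np.sin(phi)
    z = r * np.cos(theta)
    return x, y, z, r

x, y, z, r = gaussian_random_sphere()
print(r.min() > 0, r.shape)
```

Increasing `sigma` or the number of modes yields lumpier masses; spiculation would be added on top of such a surface by a separate branching process, as described above.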
In this work, we present a four-dimensional reconstruction technique for cardiac gated SPECT images using a content-adaptive deformable mesh model. Cardiac gated SPECT images are affected by a high level of noise.
Noise reduction methods usually do not account for cardiac motion and therefore introduce motion blur, an artifact
that can decrease diagnostic accuracy. Additionally, image reconstruction methods typically rely on uniform
sampling and Cartesian gridding for image representation. The proposed method uses a mesh representation
of the images in order to exploit the benefits of content-adaptive nonuniform sampling. The mesh model allows
for accurate representation of important regions while significantly compressing the data. The content-adaptive
deformable mesh model is generated by combining nodes generated on the full torso using pre-reconstructed emission
and attenuation images with nodes accurately sampled on the left ventricle. Ventricular nodes are further
displaced according to cardiac motion using our previously introduced motion estimation technique. The resulting
mesh structure is then used to perform iterative image reconstruction using a mesh-based maximum-likelihood
expectation-maximization algorithm. Finally, motion-compensated post-reconstruction temporal filtering is applied
in the mesh domain using the deformable mesh model. Reconstructed images as well as quantitative
evaluation show that the proposed method offers improved image quality while reducing the data size.
In this paper, we present a numerical observer for assessment of cardiac motion in nuclear medicine. Numerical
observers are used in medical imaging as a surrogate for human observers to automatically measure the diagnostic
quality of medical images. The most commonly used quality measurement is the detection performance in a detection
task. In this work, we present a new numerical observer aiming to measure image quality for the task of cardiac motion-defect
detection in cardiac SPECT imaging. The proposed observer utilizes a linear discriminant on features extracted
from cardiac motion, characterized by a deformable mesh model of the left ventricle and myocardial brightening.
Simulations using synthetic data indicate that the proposed method can effectively capture the cardiac motion and
provide an accurate prediction of the human observer performance.
We present a post-reconstruction motion-compensated spatio-temporal filtering method for noise reduction in cardiac
gated SPECT images. SPECT imaging suffers from low photon counts due to radioactive dose limitations, resulting in a
high noise level in the reconstructed images. This is especially true in gated cardiac SPECT, where the total number of
counts is divided into a number of gates (time frames). Classical spatio-temporal filtering approaches, used in gated
cardiac SPECT for noise reduction, do not accurately account for myocardium motion and brightening and therefore
perform sub-optimally. The proposed post-reconstruction method consists of two steps: motion and brightening
estimation and spatio-temporal motion-compensated filtering. In the first step we utilize a left ventricle model and a
deformable mesh structure. The second step, which consists of motion-compensated spatio-temporal filtering, makes use
of estimated myocardial motion to enable accurate smoothing. Additionally, the algorithm preserves myocardial
brightening, a result of partial volume effect which is widely used as a diagnostic feature. The proposed method is
evaluated quantitatively to assess noise reduction and the influence on estimated ejection fraction.
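The motion-compensated filtering idea can be illustrated in one dimension: each gate is averaged with its neighbors only after the estimated motion has been undone, so the myocardium stays aligned while noise averages down. Integer translations stand in here for the dense deformable-mesh motion field of the method above, and the profile and shifts are invented for the demo.

```python
import numpy as np

def motion_compensated_temporal_filter(frames, shifts):
    """Average each gate with all gates after aligning them to it.

    frames : (n_gates, n_pixels) 1-D image sequence.
    shifts : per-gate integer displacement of the moving feature.
    """
    n = len(frames)
    out = np.empty_like(frames, dtype=float)
    for t in range(n):
        aligned = [np.roll(frames[k], shifts[t] - shifts[k]) for k in range(n)]
        out[t] = np.mean(aligned, axis=0)        # temporal average, motion removed
    return out

# Toy demo: a bright feature moving along a 1-D profile, plus noise
rng = np.random.default_rng(5)
profile = np.zeros(64)
profile[20:28] = 10.0
shifts = [0, 2, 4, 6]
frames = np.stack([np.roll(profile, s) + rng.normal(0, 1, 64) for s in shifts])
filtered = motion_compensated_temporal_filter(frames, shifts)
print(frames.std(), filtered.std())
```

Without the alignment step the same temporal average would smear the moving feature, which is exactly the motion blur the method above is designed to avoid.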
It has been proposed that the sensitivity of breast lesion detection can be improved with phase-contrast mammographic
imaging. The recently introduced clinical system by Konica-Minolta, for example, reportedly yields enhanced lesion
detectability. We hypothesize that the use of an optimized x-ray spectrum will result in even better performance. To test
this hypothesis, we have performed a study of several clinical spectra from Mo and W sources over a broad spectral
range. In the study, we have incorporated established dose measurements from a simple breast phantom used in the
digital mammography literature, which has been updated to include breast density properties in addition to
conventional attenuation information. Established phase-contrast imaging simulation techniques, which employed a
Fresnel propagator, were used to generate edge-enhanced radiographs for analysis. In addition, detector sensitivity and
tube loading parameters were incorporated into the analysis. The resulting mammography images were analyzed via
measurement of object edge-enhanced contrast.
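A Fresnel-propagator simulation of the kind mentioned above can be sketched with the paraxial transfer function evaluated via FFT. The wavelength, pixel size, propagation distance, and phase-edge object below are illustrative values, not the study's parameters, and polychromatic spectra and detector blur are omitted.

```python
import numpy as np

def fresnel_propagate(field, wavelength, dx, z):
    """Propagate a complex wave field a distance z using the Fresnel
    (paraxial) transfer function, evaluated in the Fourier domain."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * wavelength * z * (FX ** 2 + FY ** 2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Toy demo: a weakly phase-shifting edge produces edge enhancement
wavelength = 6e-11          # roughly 20 keV x rays
dx, n, z = 1e-6, 256, 0.5   # 1 um pixels, 0.5 m propagation
phase = np.zeros((n, n))
phase[:, n // 2:] = 0.1     # pure phase step (no absorption)
field = np.exp(1j * phase)
I = np.abs(fresnel_propagate(field, wavelength, dx, z)) ** 2
print(I.max() > 1.0)        # intensity fringes appear at the edge
```

A pure phase object has uniform intensity at the exit plane; the fringes that appear after propagation are the edge-enhanced contrast quantified in the analysis above.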
Accurate models that describe the propagation of partially coherent wave fields and their interaction with refractive index inhomogeneities within a sample are required to optimally design X-ray phase-contrast imaging systems. Several methods have been proposed for the direct propagation of the second-order statistical properties of a wave field. One method, which has been demonstrated for x-ray microscopy, employs a single eikonal for propagation, approximating the phase by an average over the temporal Fourier components of the field. We have revisited this method by use of a
coherent-mode model from classical coherence theory. Our analysis produces a variant of the transport-of-intensity equation for partially coherent wave fields.
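For reference, the fully coherent transport-of-intensity equation, of which the analysis above yields a partially coherent variant, can be written as

```latex
\frac{\partial I(\mathbf{r}_\perp, z)}{\partial z}
  = -\frac{\lambda}{2\pi}\,
    \nabla_\perp \cdot \bigl[\, I(\mathbf{r}_\perp, z)\, \nabla_\perp \phi(\mathbf{r}_\perp, z) \,\bigr],
```

where $I$ is the intensity, $\phi$ the phase, $\lambda$ the wavelength, and $\nabla_\perp$ the transverse gradient. In the coherent-mode treatment the single phase $\phi$ is replaced by a suitably averaged phase over the modes of the field; the exact form of that replacement is the subject of the analysis above, so this equation should be read only as the coherent limiting case.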
KEYWORDS: Signal to noise ratio, Principal component analysis, Independent component analysis, Computer simulations, Monte Carlo methods, Data processing, Reconstruction algorithms, Neuroimaging, Functional magnetic resonance imaging, Brain
We propose a new method for analyzing fMRI (functional magnetic resonance imaging) data based on locally
linear embeddings (LLE). The LLE method is useful for analyzing data when there is a local structure
intrinsic to the measurements, allowing each measurement to be reconstructed from its neighboring points only.
We develop a method to extract the underlying temporal signal in fMRI experiments based on LLE.
Simulations show that improved results can be obtained under certain conditions when compared to
traditional methods such as principal component analysis (PCA) and independent component analysis (ICA).
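The LLE algorithm itself (Roweis and Saul) can be written compactly in plain NumPy: reconstruction weights are solved locally from each point's neighborhood, and the embedding comes from the bottom eigenvectors of the resulting quadratic form. This is a generic sketch, not the fMRI-specific pipeline of the study; the toy data are noisy samples along a 1-D curve in 3-D.

```python
import numpy as np

def lle(X, n_neighbors=8, dim=2, reg=1e-3):
    """Locally linear embedding in plain NumPy."""
    n = X.shape[0]
    D = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(D[i])[1:n_neighbors + 1]   # skip the point itself
        Z = X[nbrs] - X[i]                           # neighborhood coordinates
        G = Z @ Z.T
        G += reg * np.trace(G) * np.eye(len(nbrs))   # regularize the Gram matrix
        w = np.linalg.solve(G, np.ones(len(nbrs)))
        W[i, nbrs] = w / w.sum()                     # weights sum to one
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, 1:dim + 1]                        # drop the constant eigenvector

# Toy demo: noisy samples along a 1-D curve embedded in 3-D
rng = np.random.default_rng(6)
t = np.sort(rng.uniform(0, 3, 200))
X = np.c_[np.cos(t), np.sin(t), t] + 0.01 * rng.normal(size=(200, 3))
Y = lle(X, dim=1)
print(Y.shape)
```

In the fMRI setting above, the rows of `X` would be voxel time courses (or time points in voxel space), and the recovered coordinates would serve as estimates of the underlying temporal signal.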
KEYWORDS: Signal to noise ratio, Data modeling, Sensors, Signal attenuation, Image restoration, Tomography, Monte Carlo methods, Collimators, Single photon emission computed tomography, Expectation maximization algorithms
In this paper, we present a new methodology for calculation of a 2D projection operator for emission tomography
using a content-adaptive mesh model (CAMM). A CAMM is an efficient image representation based on adaptive
sampling and linear interpolation, wherein non-uniform image samples are placed most densely in regions having fine
detail. We have studied CAMM in recent years and shown that a CAMM is an efficient tool for data representation and
tomographic reconstruction. In addition, it can provide a unified framework for tomographic reconstruction of organs
(e.g., the heart) that undergo non-rigid deformation. In this work we develop a projection operator model suitable for a
CAMM representation such that it accounts for several major degradation factors in data acquisition, namely object
attenuation and depth-dependent blur in detector-collimator response. The projection operator is calculated using a ray-tracing
algorithm. We tested the developed projection operator by using Monte Carlo simulation for single photon
emission tomography (SPECT). The methodology presented here can also be extended to transmission tomography.
Diffraction enhanced imaging (DEI) is an analyzer-based X-ray phase-contrast imaging method that measures
the absorption and refractive properties of an object. A well-known limitation of DEI is that it does not account
for ultra-small-angle X-ray scattering (USAXS), which is commonly produced by biological tissue. In this work,
an extended DEI (E-DEI) imaging method is described that attempts to circumvent this limitation. The E-DEI
method concurrently reconstructs three images that depict an object's projected absorption, refraction,
and USAXS properties, and can be viewed as an implementation of the multiple-image radiography (MIR)
paradigm. Planar and computed tomography (CT) implementations of E-DEI and an existing MIR method are
compared by use of computer-simulation studies that employ statistical models to describe USAXS effects.
Conventional mammography is one of the most widely used diagnostic imaging techniques, but it has serious and well-known shortcomings, which are driving the development of innovative alternatives. Our group has been developing an x-ray imaging approach called multiple-image radiography (MIR), which shows promise as a potential alternative to conventional x-ray imaging (radiography). Like computed tomography (CT), MIR is a computed imaging technique, in which the images are not directly observed, but rather computed algorithmically. Whereas conventional radiography produces just one image depicting absorption effects, MIR simultaneously produces three images, showing separately the effects of absorption, refraction, and ultra-small-angle x-ray scattering. The latter two effects are caused by refractive-index variations in the object, which yield fine image details not seen in standard radiographs. MIR has the added benefits of dramatically lessening radiation dose, virtually eliminating scatter degradation, and lessening the importance of compressing the breast during imaging. In this paper we review progress to date on the MIR technique, focusing on the basic physics and signal-processing issues involved in this new imaging method.
The human user is an often ignored component of the imaging chain. In medical diagnostic tasks, the human observer plays the role of the decision-maker, forming opinions based on visual assessment of images. In content-based image retrieval, the human user is the ultimate judge of the relevance of images recalled from a database. We argue that data collected from human observers should be used in conjunction with machine-learning algorithms to model and optimize
performance in tasks that involve humans. In essence, we treat the human observer as a nonlinear system to be identified. In this paper, we review our work in two applications of this general idea. In the first, a learning machine is trained to predict the accuracy of human observers in a lesion detection task for purposes of assessing image quality. In the second, a learning machine is trained to predict human users' perception of the similarity of two images for purposes
of content-based image retrieval from a database. In both examples, it is shown that a nonlinear learning machine can accurately identify the nonlinear human system that maps images into numerical values, such as detection performance or image similarity.
Herein we present a quantitative noise analysis of diffraction enhanced imaging (DEI), an x-ray imaging method that produces absorption and refraction images, with inherent immunity to wide-angle scatter. DEI can be used for planar imaging or computed tomography. DEI produces excellent images, but requires an x-ray source of very high power; therefore, it has principally been confined to synchrotron studies. Clinical systems currently under development using conventional x-ray sources will be photon-limited. Therefore, it is important that the noise properties of DEI be understood. We derive mathematical expressions for the noise statistics of DEI images, and show that the original formulation of DEI, given by Chapman et al., is the maximum-likelihood solution of the image-estimation problem for the case of Poisson noise. However, we find that the standard DEI solution is only unbiased under particular conditions, which must be obeyed if good results are to be achieved. We also present the results of applying various noise-reduction filters, which we found to be very effective in reducing noise variance while introducing little bias.
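The closed-form DEI estimation step can be sketched as follows: two intensity measurements on opposite slopes of the analyzer rocking curve are inverted, via the linearized rocking-curve model, for the apparent absorption image and the refraction angle. The Gaussian rocking-curve model, the analyzer positions, and the count levels below are assumptions made for the demo (real rocking curves are measured), and a Poisson Monte Carlo illustrates the noise in the estimates.

```python
import numpy as np

sigma = 1.0                                   # rocking-curve width (arbitrary units)

def R(th):
    """Gaussian rocking-curve model (an illustrative assumption)."""
    return np.exp(-th ** 2 / (2 * sigma ** 2))

def dR(th):
    """Derivative of the rocking curve."""
    return -th / sigma ** 2 * R(th)

def dei(I_lo, I_hi, th_lo=-sigma / 2, th_hi=sigma / 2):
    """Closed-form DEI estimates of apparent intensity and refraction angle
    from two analyzer positions, using the linearized model
    I = I_R * (R(theta) + R'(theta) * dtheta)."""
    den = I_lo * dR(th_hi) - I_hi * dR(th_lo)
    I_R = den / (R(th_lo) * dR(th_hi) - R(th_hi) * dR(th_lo))
    dth = (I_hi * R(th_lo) - I_lo * R(th_hi)) / den
    return I_R, dth

# Noise-free check: the linearized model is inverted exactly
I_true, dth_true = 1000.0, 0.05
th_lo, th_hi = -sigma / 2, sigma / 2
I_lo = I_true * (R(th_lo) + dR(th_lo) * dth_true)
I_hi = I_true * (R(th_hi) + dR(th_hi) * dth_true)
I_hat, dth_hat = dei(I_lo, I_hi)
print(I_hat, dth_hat)

# Poisson Monte Carlo: spread of the refraction-angle estimate
rng = np.random.default_rng(7)
I_hat_n, dth_hat_n = dei(rng.poisson(I_lo, 5000).astype(float),
                         rng.poisson(I_hi, 5000).astype(float))
print(dth_hat_n.std())
```

The Monte Carlo spread of `dth_hat_n` is the kind of quantity the noise expressions derived above predict analytically, as a function of count level and analyzer positions.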
In conventional computed tomography (CT) a single volumetric image representing the linear attenuation coefficient of an object is produced. For weakly absorbing tissues, the attenuation of the X-ray beam may not be the best description of disease-related information. In this work we present a new volumetric imaging method, called multiple-image computed tomography (MICT), that can concurrently produce several images from a set of measurements made with a single X-ray beam. MICT produces three volumetric images that represent the attenuation, refraction, and ultra-small-angle scattering properties of an object. The MICT method is implemented to reconstruct images of a physical phantom and a biological object from measurement data produced by a synchrotron light source. An iterative reconstruction method is employed for reconstruction of MICT images from experimental data sets that contain elevated Poisson noise levels that are representative of future benchtop implementations of MICT. We also demonstrate that images produced by the DEI-CT method (the predecessor of MICT) can contain significant artifacts due to ultra-small-angle scattering effects while the corresponding MICT images do not.
We compare two different methods for obtaining radiologists' subjective impression of similarity, for application in distinguishing benign from malignant lesions. Thirty pairs of mammographic clustered calcifications were used in this study. These 30 pairs were rated on a 5-point scale as to their similarity, where 1 was nearly identical and 5 was not at all similar. After this, all possible combinations of pairs of pairs (n=435) were shown to the reader, and the reader selected which pair was more similar. This experiment was repeated by the observers with at least a week between reading sessions. Using analysis of variance, intra-class correlation coefficients (ICC) were calculated for both the absolute scoring method and the paired comparison method. In addition, for the paired comparison method, the coefficient of consistency within each reader was calculated. The average coefficient of consistency for the 4 readers was 0.88 (range 0.49-0.97). These results were statistically significantly different from guessing at p << 0.0001. The ICC for intra-reader agreement was 0.51 (95% CI 0.37-0.66) for the absolute method and 0.82 (95% CI 0.73-0.91) for the paired comparison method. This difference was statistically significant (p=0.001). For inter-reader agreement, the ICC was 0.39 (95% CI 0.21-0.57) for the absolute method and 0.37 (95% CI 0.18-0.56) for the paired comparison method. We conclude that humans are able to judge similarity of clustered calcifications in a meaningful way. Further, radiologists had greater intra-reader agreement when using the paired comparison method than when using an absolute rating scale. Differences in the criteria used by different observers to judge similarity, and differences in interpreting which calcifications comprise the cluster, can lead to low ICC values for inter-reader agreement for both methods.
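The within-reader coefficient of consistency for paired comparisons is typically Kendall's zeta, computed from the number of circular triads (i preferred to j, j to k, but k to i) in a reader's choices. The sketch below implements that count; whether the study used exactly this statistic is an assumption, and the preference matrices are toy data.

```python
import numpy as np
from itertools import combinations

def consistency(pref):
    """Kendall's coefficient of consistency for one reader's paired
    comparisons. pref[i, j] = 1 if item i was preferred over item j."""
    n = pref.shape[0]
    # count circular triads (both cyclic orientations of each triple)
    d = sum(1 for i, j, k in combinations(range(n), 3)
            if (pref[i, j] and pref[j, k] and pref[k, i]) or
               (pref[j, i] and pref[k, j] and pref[i, k]))
    d_max = (n ** 3 - n) / 24 if n % 2 else (n ** 3 - 4 * n) / 24
    return 1.0 - d / d_max

# A perfectly transitive reader scores 1.0; circular choices lower the score
n = 7
pref = np.zeros((n, n), dtype=int)
for i, j in combinations(range(n), 2):
    pref[i, j] = 1                       # item i preferred whenever i < j
print(consistency(pref))
```

A score near the chance level indicates the reader's pairwise choices were close to random, which is what the p << 0.0001 comparison above rules out.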
Proc. SPIE. 3978, Medical Imaging 2000: Physiology and Function from Multidimensional Images
KEYWORDS: Signal to noise ratio, Principal component analysis, Statistical analysis, Data modeling, Interference (communication), Medical imaging, Functional imaging, Analytical research, Positron emission tomography, Factor analysis
Factor analysis of medical image sequences (FAMIS), which concerns the simultaneous identification of homogeneous regions (factor images) and of the characteristic temporal variations (factors) inside these regions from a temporal sequence of images by statistical analysis, is one of the major challenges in medical imaging. In this research, we contribute to this important area by proposing a two-step approach. First, we study the use of the noise-adjusted principal component (NAPC) analysis developed by Lee et al. for identifying the characteristic temporal variations in dynamic scans acquired by PET and MRI. NAPC allows us to effectively reject data noise and substantially reduce data dimension based on signal-to-noise ratio considerations. Subsequently, a simple spatial analysis based on the criteria of minimum spatial overlap and non-negativity of the factor images is applied for extraction of the factors and factor images. In our simulation study, our preliminary results indicate that the proposed approach can accurately identify the factor images. However, the factors are not completely separated.
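The NAPC step amounts to whitening the data by the noise covariance before applying ordinary PCA, so that components are ordered by signal-to-noise ratio rather than raw variance. A minimal sketch, with an invented two-factor dynamic data set and a known diagonal noise covariance standing in for an estimated one:

```python
import numpy as np

def napc(data, noise_cov, n_components):
    """Noise-adjusted principal components: whiten by the noise covariance,
    then apply ordinary PCA in the whitened space."""
    L = np.linalg.cholesky(noise_cov)
    Wh = np.linalg.solve(L, data.T).T            # noise-whitened observations
    Wh = Wh - Wh.mean(0)
    _, _, Vt = np.linalg.svd(Wh, full_matrices=False)
    return Wh @ Vt[:n_components].T              # leading NAPC scores

# Toy demo: two temporal factors mixed into 10 channels, channel-dependent noise
rng = np.random.default_rng(8)
t = np.linspace(0, 1, 300)
factors = np.c_[np.exp(-3 * t), 1 - np.exp(-5 * t)]   # (300, 2)
A = rng.random((2, 10))                               # mixing into channels
noise_cov = np.diag(rng.uniform(0.5, 2.0, 10))
noise = rng.normal(size=(300, 10)) * np.sqrt(np.diag(noise_cov))
data = factors @ A + noise
scores = napc(data, noise_cov, n_components=2)
print(scores.shape)
```

As in the approach above, the retained NAPC components span the factor subspace but are not themselves the physiological factors; a subsequent rotation under non-negativity and minimum-overlap constraints is needed to extract them.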
Information theory indicates that coding efficiency can be improved by utilizing high-order entropy coding (HOEC). However, serious implementation difficulties limit the practical value of HOEC for grayscale image compression. In this paper we present a new approach, called binary-decomposed high-order entropy coding, that significantly reduces the complexity of the implementation and increases the accuracy in estimating the statistical model. In this approach a grayscale image is first decomposed into a group of binary sub-images. When HOEC is applied to these sub-images instead of the original image, the subsequent coding is made simpler and more accurate statistically. We apply this coding technique in lossless compression of medical images and imaging data, and demonstrate that the performance advantage of this approach is significant.
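The binary decomposition itself can be sketched as a bit-plane split, which is lossless and reduces each sub-image's alphabet to two symbols. For brevity the sketch measures only first-order entropy per plane; the high-order coding described above would instead condition each binary symbol on a spatial context of neighboring bits.

```python
import numpy as np

def bit_planes(img):
    """Decompose an 8-bit grayscale image into 8 binary sub-images."""
    return [(img >> b) & 1 for b in range(8)]

def entropy(x):
    """First-order entropy in bits per symbol."""
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(9)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
planes = bit_planes(img)
recon = sum(p.astype(np.uint8) << b for b, p in enumerate(planes))
print(np.array_equal(recon, img))         # decomposition is lossless
print([round(entropy(p), 3) for p in planes])
```

On real medical images the high-order planes are far from random, so context modeling on the binary sub-images captures the statistical structure that direct high-order modeling of 256-level symbols makes impractical.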
Positron emission tomography (PET) is a medical imaging modality which produces valuable functional information, but is limited by the poor image quality it provides. Considerable attention has been paid to the problem of reconstructing images in a way that produces better image resolution and noise properties. In dynamic imaging applications PET data are particularly noisy, thus preventing successful recovery of spatial resolution by signal-processing methods. In this paper we show that smoothing of image data using a low-order approximation along the time axis can greatly enhance restoration performance.
Positron emission tomography (PET), as a biomedical imaging modality, is unique in its ability to provide quantitative information regarding biological function in a living subject. Unfortunately, its use has been hampered by the poor spatial resolution of the images produced, resulting primarily from the relatively large detectors used to acquire the tomographic measurements. In this paper, we show that by applying signal recovery to the data obtained by moving the detection system during the course of the measurement process, dramatic improvement in image quality can be obtained when detector size is, indeed, the factor limiting spatial resolution. The method of projections onto convex sets is used to recover (deblur) the sinogram, from which the image is reconstructed by conventional filtered backprojection. By making use of filtered backprojection in the reconstruction step, the computational burden commonly associated with iterative signal recovery is avoided; the proposed method adds only a few seconds to the total processing time. Simulation results demonstrate that the method is robust to misspecification of the point spread functions of the detection system as well as to the high levels of quantum noise inherent in PET.
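The recovery step can be illustrated with a simple alternating scheme in one dimension: a Landweber-type data-consistency update (standing in here for the projection onto the set of signals consistent with the blurred measurements) alternated with projection onto the nonnegative signals. The PSF, signal, and parameters below are invented for the demo and do not model a real PET detection system.

```python
import numpy as np

def pocs_deblur(g, psf, n_iter=200, relax=1.0):
    """Deblur a 1-D signal by alternating a Landweber-type data-consistency
    step with projection onto the convex set of nonnegative signals."""
    f = g.copy()
    for _ in range(n_iter):
        r = g - np.convolve(f, psf, mode="same")               # data residual
        f = f + relax * np.convolve(r, psf[::-1], mode="same")  # adjoint step
        f = np.maximum(f, 0.0)                                  # nonnegativity
    return f

# Toy demo: two nearby peaks blurred by a detector-like Gaussian PSF
f_true = np.zeros(64)
f_true[[28, 34]] = 1.0
psf = np.exp(-0.5 * (np.arange(-7, 8) / 2.0) ** 2)
psf /= psf.sum()
g = np.convolve(f_true, psf, mode="same")
f_hat = pocs_deblur(g, psf)
err_blur = np.abs(g - f_true).sum()
err_hat = np.abs(f_hat - f_true).sum()
print(err_hat < err_blur)
```

As in the method above, the deblurring operates on the measured (sinogram-like) data rather than on the reconstructed image, so a fast analytic reconstruction can follow without further iteration.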