The purpose is to develop and validate an automated method for detecting image unsharpness caused by patient motion blur in digital mammograms. Such a tool would enable immediate retaking of blurred images, which has the potential to reduce the number of recalled examinations and to ensure that sharp, high-quality mammograms are presented for reading. To meet this goal, an automated method was developed based on interpretation of the normalized image Wiener spectrum. A preliminary algorithm was developed using 25 cases acquired on a single vendor system, read by two expert readers who identified the presence, location, and severity of blur. A predictive blur severity score was established using multivariate modeling, which had an adjusted coefficient of determination of R² = 0.63 ± 0.02 for linear regression against the average reader-scored blur severity. A heatmap of the relative blur magnitude showed good correspondence with reader sketches of blur location, with a Spearman rank correlation of 0.70 between the algorithm-estimated blur area fraction and the maximum of the blur area fraction categories of the two readers. Given these promising results, the algorithm-estimated blur severity score and heatmap are proposed as aids to observer interpretation. The use of this automated blur analysis approach, ideally with feedback during an examination, could reduce repeat appointments for technical reasons, saving time and cost, reducing patient anxiety, and improving image quality for accurate diagnosis.
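As a concrete illustration of the kind of measurement involved, the following is a minimal sketch of estimating a normalized Wiener spectrum for a square image patch; the detrending and normalization choices, and the `pixel_pitch_mm` parameter, are illustrative assumptions rather than the published algorithm.

```python
# Minimal sketch: normalized Wiener spectrum (noise power spectrum) of an
# image patch. Detrending/normalization choices here are assumptions.
import numpy as np

def normalized_wiener_spectrum(patch, pixel_pitch_mm=0.1):
    """Estimate the 2D normalized Wiener spectrum of a square patch."""
    patch = patch.astype(np.float64)
    mean = patch.mean()
    detrended = patch - mean                      # remove the DC component
    nps = np.abs(np.fft.fftshift(np.fft.fft2(detrended))) ** 2
    # Scale by pixel area / number of pixels, then divide by the mean
    # signal squared so the spectrum is unitless.
    n_pixels = patch.shape[0] * patch.shape[1]
    nps *= (pixel_pitch_mm ** 2) / n_pixels
    return nps / (mean ** 2)

# Motion blur suppresses high spatial frequencies, so a sharp patch retains
# more power at high radial frequency than a blurred one.
```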
Deformable image registration, a key component of motion correction in medical imaging, needs to be efficient and to provide plausible spatial transformations that reliably approximate biological aspects of complex human organ motion. Standard approaches, such as Demons registration, mostly use Gaussian regularization of organ motion, which, though computationally efficient, rules out application to intrinsically more complex organ motions, such as sliding interfaces. We propose supervoxel-based regularization of motion, which provides an integrated discontinuity-preserving prior for motions such as sliding. More precisely, we replace Gaussian smoothing with fast, structure-preserving guided filtering to provide efficient, locally adaptive regularization of the estimated displacement field, as sketched below. We illustrate the approach by applying it to estimate sliding motions at lung and liver interfaces on challenging four-dimensional computed tomography (CT) and dynamic contrast-enhanced magnetic resonance imaging datasets. The results show that guided-filter-based regularization improves the accuracy of lung and liver motion correction compared with Gaussian smoothing. Furthermore, our framework achieves state-of-the-art results on a publicly available CT liver dataset.
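For intuition, here is a minimal sketch of guided-filter regularization of a 2D displacement field, standing in for the Gaussian smoothing step of a Demons-style loop; the radius and eps values, and the normalization of the guidance image, are illustrative assumptions.

```python
# Minimal sketch: edge-aware regularization of a displacement field with
# He et al.'s guided filter, guided by the fixed image so that the field
# may remain discontinuous across organ boundaries (e.g. sliding).
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=7, eps=1e-2):
    """Smooth `src` while preserving the edge structure of `guide`."""
    size = 2 * radius + 1
    mean_g = uniform_filter(guide, size)
    mean_s = uniform_filter(src, size)
    corr_gs = uniform_filter(guide * src, size)
    var_g = uniform_filter(guide * guide, size) - mean_g ** 2
    a = (corr_gs - mean_g * mean_s) / (var_g + eps)   # local linear gain
    b = mean_s - a * mean_g                           # local linear offset
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def regularize_displacement(u, v, fixed_image):
    """Replace Gaussian smoothing of the (u, v) field with guided filtering."""
    g = (fixed_image - fixed_image.mean()) / (fixed_image.std() + 1e-8)
    return guided_filter(g, u), guided_filter(g, v)
```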
Computer Aided Diagnostic (CAD) systems are already of proven value in healthcare, especially for surgical planning; nevertheless, much remains to be done. Gliomas are the most common brain tumours (70%) in adults, with a survival time of just 2-3 months if detected at WHO grade III or higher. Such tumours are extremely variable, necessitating multi-modal Magnetic Resonance Images (MRI). The use of Gadolinium-based contrast agents is only relevant at later stages of the disease, where it highlights the enhancing rim of the tumour. Currently, there is no single accepted method that can be used as a reference. There are three main challenges with such images: to decide whether a tumour is present and, if so, to localize it; to construct a mask that separates healthy and diseased tissue; and to differentiate between the tumour core and the surrounding oedema. This paper presents two contributions. First, we develop tumour seed selection based on multiscale, multi-modal texture feature vectors. Second, we develop a method based on a local phase congruency feature map to drive level-set segmentation. The segmentations achieved with our method are more accurate than those of previously presented methods, particularly for challenging low-grade tumours.
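To make the second contribution concrete, the following is a minimal sketch of level-set evolution driven by an edge-stopping feature map g (standing in for the phase congruency map, assumed precomputed and near zero at tumour boundaries); the explicit update scheme and parameter values are illustrative, not the paper's formulation.

```python
# Minimal sketch: level-set front propagation slowed by a feature map g.
# phi is initialized as a signed distance function around a seed point
# (e.g. from the texture-based seed selection step, assumed given).
import numpy as np

def evolve_level_set(phi, g, n_iter=200, dt=0.2, balloon=1.0):
    """Grow the zero level set of phi; motion stalls where g is small."""
    for _ in range(n_iter):
        gy, gx = np.gradient(phi)
        mag = np.sqrt(gx ** 2 + gy ** 2) + 1e-8
        # Mean-curvature term keeps the evolving contour smooth.
        kappa = (np.gradient(gx / mag, axis=1) +
                 np.gradient(gy / mag, axis=0))
        phi = phi + dt * g * (balloon + kappa) * mag
    return phi   # segmentation mask is phi > 0
```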
KEYWORDS: Reconstruction algorithms, Positron emission tomography, Tissues, Plasma, Expectation maximization algorithms, Data modeling, Signal attenuation, Signal to noise ratio, 3D modeling, Statistical modeling
Conventional dynamic PET studies estimate pharmacokinetic parameters using a two-step procedure: first reconstructing the spatial activity volume for each temporal frame independently, then applying a pharmacokinetic model to the resulting spatio-temporal activity distribution. This indirect procedure leads to low SNR because only a subset of the temporal data is used when reconstructing each image. Our work concentrates on estimating parameters directly from the (raw or pre-corrected) dPET temporal projections. We present here a one-step direct pharmacokinetic algorithm based on the Ordered Subset (OS) Weighted Least Squares (WLS) iterative estimation algorithm. We explicitly incorporate a priori temporal information by modelling the Time Activity Curves (TACs) as a sum of exponentials convolved with an Input Function. Our OS-WLS-PK algorithm is appropriate both for 3D projection data that have been Fourier-rebinned into 2D slices and for data that have been pre-corrected for attenuation, randoms, and scatter. The main benefit of spectral analysis applied to dynamic PET reconstruction is that no particular pharmacokinetic model needs to be specified a priori; only the input function needs to be sampled at scan time. We test our algorithm on highly realistic SORTEO-generated data and show that it leads to more accurate parameter estimates than conventional graphical methods.
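As a sketch of the temporal model, the following illustrates how a TAC can be represented as a sum of exponential basis functions convolved with an input function, with non-negative coefficients recovered by least squares; the basis rates, sampling grid, and stand-in analytic input function are assumptions for illustration.

```python
# Minimal sketch: spectral-analysis-style TAC model. Each basis function is
# a decaying exponential convolved with the (measured) input function.
import numpy as np
from scipy.optimize import nnls

t = np.arange(0, 60, 0.5)                    # frame mid-times in minutes (assumed)
dt = t[1] - t[0]
input_fn = t * np.exp(-0.3 * t)              # stand-in for a sampled blood input

rates = np.logspace(-3, 0, 30)               # 1/min, spanning slow-to-fast kinetics
basis = np.stack([
    np.convolve(np.exp(-r * t), input_fn)[: len(t)] * dt for r in rates
], axis=1)

def fit_tac(tac):
    """Return non-negative spectral coefficients for one voxel's TAC."""
    coeffs, _ = nnls(basis, tac)
    return coeffs

# Example: synthesize a TAC from two known components and recover them.
true_tac = 0.8 * basis[:, 5] + 0.2 * basis[:, 20]
coeffs = fit_tac(true_tac + 0.01 * np.random.randn(len(t)))
```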
In this study, we examine the performance of the simultaneous algebraic reconstruction technique (SART) for digital breast tomosynthesis under variations in key imaging parameters, such as the number of iterations, number of projections, angular range, initial guess, and radiation dose. We use a real breast CT volume as a ground-truth digital phantom from which to simulate x-ray projections under the various selected conditions. The reconstructed image quality is measured using task-based metrics, namely signal CNR and the AUC of a Channelised Hotelling Observer with Laguerre-Gauss basis functions. The task at hand is a signal-known-exactly (SKE) task, where the objective is to detect a simulated mass inserted into the breast CT volume.
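For reference, one SART pass can be sketched as follows with a generic linear projector; the dense-matrix formulation and relaxation factor are illustrative assumptions (practical DBT reconstructors compute projections on the fly).

```python
# Minimal sketch of one SART update. A maps voxels to detector bins,
# b holds the measured projections, x is the current volume estimate.
import numpy as np

def sart_update(x, A, b, relax=0.5):
    """One SART pass: backproject the row- and column-normalized residual."""
    row_sums = A.sum(axis=1) + 1e-12      # per-ray normalization
    col_sums = A.sum(axis=0) + 1e-12      # per-voxel normalization
    residual = (b - A @ x) / row_sums
    return x + relax * (A.T @ residual) / col_sums

# Typical use: iterate over projection subsets for a few passes, starting
# from a zero or uniform initial guess (one of the parameters varied above).
```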
KEYWORDS: Digital breast tomosynthesis, Breast, Detection and tracking algorithms, Sensors, Mammography, X-rays, Reconstruction algorithms, 3D image reconstruction, Image sensors, 3D image processing
We present a novel method for the detection and 3D reconstruction of microcalcifications in digital breast tomosynthesis (DBT) image sets. From a list of microcalcification candidate regions (that is, real microcalcification points or noise points) found in each DBT projection, our method: (1) finds the set of corresponding points of a microcalcification in all the other projections; (2) locates its 3D position in the breast; (3) highlights noise points; and (4) identifies the failure of microcalcification detection in one or more projections, in which case the method predicts the image locations of the microcalcification in the projections in which it is missed.
From the geometry of the DBT acquisition system, an "epipolar curve" is derived for the 2D positions of a microcalcification in the projections generated at different angular positions. Each epipolar curve represents a single microcalcification point in the breast. By examining the n projections of m microcalcifications in DBT, one ideally expects m epipolar curves, each comprising n points. Since each microcalcification point is at a different 3D position, each epipolar curve will be at a different position in the same 2D coordinate system. By plotting all the microcalcification candidates in the same 2D plane simultaneously, one can easily extract a representation of the number of microcalcification points in the breast (the number of epipolar curves) and their 3D positions, the noise points detected (isolated points not forming any epipolar curve), and microcalcification points missed in some projections (epipolar curves with fewer than n points).
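The projection geometry underlying the epipolar curves can be sketched as follows for a simplified system in which the x-ray source moves on a circular arc above a stationary detector; the distances, angular range, and coordinate conventions are illustrative assumptions.

```python
# Minimal sketch: trace the "epipolar curve" of one 3D point in a
# simplified DBT geometry (detector in the z = 0 plane, source orbiting
# in the x-z plane at radius sid millimetres).
import numpy as np

def project_point(p, theta, sid=650.0):
    """Project 3D point p = (x, y, z) [mm] onto the detector at angle theta."""
    sx, sy, sz = sid * np.sin(theta), 0.0, sid * np.cos(theta)
    t = sz / (sz - p[2])            # ray parameter where the ray hits z = 0
    u = sx + t * (p[0] - sx)        # detector column [mm]
    v = sy + t * (p[1] - sy)        # detector row [mm]
    return u, v

angles = np.deg2rad(np.linspace(-25, 25, 15))       # e.g. 15 projections
curve = np.array([project_point(np.array([5.0, 20.0, 30.0]), a)
                  for a in angles])
# Plotting all candidates' (u, v) tracks in one plane separates true
# microcalcifications (full curves) from noise (isolated points).
```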
Image-based medical diagnosis typically relies on the (poorly reproducible) subjective classification of textures in order to differentiate between diseased and healthy tissue. Clinicians claim that significant benefits would arise from quantitative measures to inform clinical decision making. The first step in generating such measures is to extract local image descriptors, from noise-corrupted and often spatially and temporally coarse-resolution medical signals, that are invariant to illumination, translation, scale, and rotation of the features. The Dual-Tree Complex Wavelet Transform (DT-CWT) provides a wavelet multiresolution analysis (WMRA) tool, e.g. in 2D, with good properties, but has limited rotational selectivity. It also requires computationally intensive steering due to the inherently 1D operations performed. The monogenic signal, which is defined in n ≥ 2 dimensions via the Riesz transform, gives excellent orientation information without the need for steering (a minimal computation is sketched after the list below). Recent work has suggested the Monogenic Riesz-Laplace wavelet transform as a possible tool for integrating these two concepts into a coherent mathematical framework. We have found that the proposed construction suffers from a lack of rotational invariance and is not optimal for retrieving local image descriptors. In this paper we show:
1. Local frequency and local phase from the monogenic signal are not equivalent, especially in the phase congruency model of a "feature", and so they are not interchangeable for medical image applications.
2. The accuracy of local phase computation may be improved by estimating the denoising parameters while maximizing a new measure of "featureness".
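For illustration, the monogenic signal of a 2D image can be computed with an FFT-based Riesz transform along the following lines; the band-pass filtering that would normally precede this step is omitted, and the sign convention is one common choice.

```python
# Minimal sketch: monogenic signal via the Riesz transform, giving
# steering-free local amplitude, phase, and orientation of a 2D image.
import numpy as np

def monogenic_signal(img):
    """Return local amplitude, phase, and orientation of a (band-passed) image."""
    rows, cols = img.shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    freq = np.sqrt(fx ** 2 + fy ** 2)
    freq[0, 0] = 1.0                        # avoid division by zero at DC
    F = np.fft.fft2(img)
    # Riesz transform: multiply by (-i fx / |f|, -i fy / |f|) in frequency.
    rx = np.real(np.fft.ifft2(F * (-1j * fx / freq)))
    ry = np.real(np.fft.ifft2(F * (-1j * fy / freq)))
    amplitude = np.sqrt(img ** 2 + rx ** 2 + ry ** 2)
    phase = np.arctan2(np.sqrt(rx ** 2 + ry ** 2), img)   # local phase
    orientation = np.arctan2(ry, rx)                      # local orientation
    return amplitude, phase, orientation
```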
This paper summarises the work we have been doing on joint projects with GE Healthcare on colorectal and liver cancer, and with Siemens Molecular Imaging on dynamic PET. First, we recall the salient facts about cancer and oncological image analysis. Then we introduce some of the work that we have done on analysing clinical MR images of colorectal and liver cancer, specifically the detection of lymph nodes and segmentation of the circumferential resection margin. In the second part of the paper, we shift attention to the complementary aspect of molecular image analysis, illustrating our approach with recent work on tumour acidosis, tumour hypoxia, and multiply drug-resistant tumours.
Simulated data are an important tool for evaluating reconstruction and image-processing algorithms in the frequent absence of ground-truth in vivo data from living subjects. This is especially true for dynamic PET studies, in which the counting statistics of the volume can vary widely over the time course of the acquisition. Realistic simulated data-sets that model anatomy and physiology, and make explicit the spatial and temporal image acquisition characteristics, facilitate experimentation with a wide range of the conditions anticipated in practice, including those that can severely challenge algorithm performance and reliability. As a first example, we have developed a realistic dynamic FDG-PET data-set using the PET-SORTEO Monte Carlo simulation code and the MNI digital brain phantom. The phantom is a three-dimensional data-set that defines the spatial distribution of different tissues. Time activity curves were calculated using an impulse response function specified by generally accepted rate constants, convolved with an input function obtained by blood sampling, and assigned to grey- and white-matter tissue regions. We created a dynamic PET study using PET-SORTEO configured to simulate an ECAT Exact HR+. The resulting sinograms were reconstructed with all corrections, using variations of FBP and OSEM. Having constructed the dynamic PET data-sets, we used them to evaluate the performance of intensity-based registration as part of a tool for quantifying hyper-/hypo-perfusion, with particular application to the analysis of brain dementia scans, and to study the stability of kinetic parameter estimation.
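As a sketch of the TAC construction step, the following builds a grey-matter FDG curve by convolving the impulse response of an irreversible two-tissue compartment model with an input function; the rate constants and the analytic stand-in input are illustrative values of plausible magnitude, not those used to build the data-set.

```python
# Minimal sketch: tissue time-activity curve from a two-tissue
# (irreversible, k4 = 0) compartment model impulse response.
import numpy as np

t = np.arange(0, 60, 0.1)                      # minutes
dt = t[1] - t[0]
K1, k2, k3 = 0.10, 0.13, 0.06                  # 1/min (assumed values)

# Impulse response of the irreversible two-tissue compartment model.
h = K1 * (k3 / (k2 + k3) + (k2 / (k2 + k3)) * np.exp(-(k2 + k3) * t))

input_fn = 10 * t * np.exp(-t)                 # stand-in blood input curve
tac = np.convolve(h, input_fn)[: len(t)] * dt  # tissue activity over time
```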
Ultrasound B-scan images often exhibit intensity inhomogeneities caused by non-uniform beam attenuation within the body. These cause major problems for image analysis, by both manual and computer-aided techniques, particularly for the computation of quantitative measurements. We present a statistical model that exploits knowledge of tissue properties and intensity inhomogeneities in ultrasound for simultaneous contrast enhancement and image segmentation. The underlying model was originally proposed for correction of the B1 bias field distortion and segmentation of magnetic resonance (MR) images. A physics-based model of intensity inhomogeneities in ultrasound images shows that the bias field correction method is well suited to ultrasound B-scan images. The tissue class labels and the intensity correction field are estimated using the maximum a posteriori (MAP) principle in an iterative, multi-resolution manner. The algorithm has been applied to breast and cardiac ultrasound images. The results demonstrate that it can successfully remove intensity inhomogeneities caused by varying attenuation, as well as uninteresting intensity changes of background tissues. With the removal of intensity inhomogeneities, significant improvements are achieved in tissue contrast and segmentation results.
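A minimal sketch of the alternating estimation idea, assuming a log-compressed B-scan, fixed class means, and heavy Gaussian smoothing as a stand-in for the paper's MAP formulation:

```python
# Minimal sketch: alternate tissue classification and smooth gain-field
# estimation on a log-compressed B-scan. Class means and smoothing scale
# are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_and_segment(log_img, class_means, n_iter=10, sigma=30.0):
    bias = np.zeros_like(log_img)
    means = np.asarray(class_means)
    for _ in range(n_iter):
        corrected = log_img - bias
        # Hard assignment to the nearest class mean (stand-in for MAP labeling).
        labels = np.abs(corrected[..., None] - means).argmin(axis=-1)
        # Residual between observation and class prediction, smoothed heavily
        # so only the slowly varying inhomogeneity survives.
        residual = log_img - means[labels]
        bias = gaussian_filter(residual, sigma)
    return log_img - bias, labels
```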
The finite mixture (FM) model is the most commonly used model for statistical segmentation of brain MR images because of its simple mathematical form and the piecewise-constant nature of ideal brain MR images. However, being a histogram-based model, the FM model has an intrinsic limitation: no spatial information is taken into account. This restricts the FM model to well-defined images with low noise levels. In this paper, we propose a novel hidden Markov random field (HMRF) model: a stochastic process generated by a Markov random field whose state sequence cannot be observed directly but can be inferred through a field of observations. Mathematically, it can be shown that the FM model is a degenerate version of the HMRF model. The advantage of the HMRF model derives from the way in which spatial information is encoded through the mutual influence of neighboring sites. To fit the HMRF model, an expectation-maximization (EM) algorithm is used. We show that by incorporating both the HMRF model and the EM algorithm into an HMRF-EM framework, accurate and robust segmentation can be achieved, as demonstrated by comparison experiments with FM model-based segmentation.
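A minimal sketch of one HMRF-style labeling sweep, combining Gaussian class likelihoods with a Potts smoothness prior via ICM; the energy weighting beta and the toroidal boundary handling are illustrative simplifications of the full HMRF-EM scheme.

```python
# Minimal sketch: ICM sweep for an HMRF-style segmentation. Each pixel
# takes the class minimizing Gaussian data energy minus a neighbourhood
# agreement bonus (Potts prior over 4-neighbours).
import numpy as np

def icm_sweep(img, labels, means, variances, beta=1.0):
    K = len(means)
    # Count same-class neighbours via shifted label maps
    # (np.roll wraps at the borders; acceptable for a sketch).
    agree = np.zeros((K,) + img.shape)
    for k in range(K):
        m = (labels == k).astype(float)
        agree[k] = (np.roll(m, 1, 0) + np.roll(m, -1, 0) +
                    np.roll(m, 1, 1) + np.roll(m, -1, 1))
    energy = np.stack([
        0.5 * np.log(variances[k]) +
        (img - means[k]) ** 2 / (2 * variances[k]) -
        beta * agree[k]
        for k in range(K)
    ])
    return energy.argmin(axis=0)

# EM step (sketch): re-estimate means/variances from the current labels,
# then repeat the sweep until the label map stabilizes.
```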
Outdoor navigation poses a challenge, since uneven road surfaces, sunshine, and cluttered backgrounds cause problems for odometry, laser, and vision sensors. To improve position accuracy, single-strip retroreflective beacons have been used in the localization process. However, matching observed beacons during the motion of the robot is a problem, since all beacons are identical; strongly reflective objects outdoors may also cause false readings. In this paper, we describe how we use the extended Kalman filter algorithm to integrate data scanned from a laser scanner rotating at 2 Hz with readings from other sensors. Results obtained from outdoor navigation are presented.
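A minimal sketch of the EKF measurement update for one matched range-bearing beacon observation; the state layout and the noise covariance R are illustrative assumptions.

```python
# Minimal sketch: EKF update with a range-bearing observation of a known
# beacon at (bx, by) from robot pose state = [x, y, theta].
import numpy as np

def ekf_beacon_update(state, P, z, beacon, R):
    """z = [range, bearing] to a beacon already matched to the map."""
    x, y, theta = state
    dx, dy = beacon[0] - x, beacon[1] - y
    q = dx ** 2 + dy ** 2
    r = np.sqrt(q)
    z_pred = np.array([r, np.arctan2(dy, dx) - theta])
    # Jacobian of the measurement model with respect to the state.
    H = np.array([[-dx / r, -dy / r,  0.0],
                  [ dy / q, -dx / q, -1.0]])
    innov = z - z_pred
    innov[1] = (innov[1] + np.pi) % (2 * np.pi) - np.pi   # wrap bearing
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return state + K @ innov, (np.eye(3) - K @ H) @ P
```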
The wavelet transform is increasingly popular for mathematical scale-space analysis in various aspects of signal processing. The squared power and full-wave rectification of the wavelet transform coefficients are the features most frequently used for further processing. However, we show in this paper that, in general, these features are coupled with the local phase component, which depends not only on the analyzed signal but also on the analyzing wavelet at each scale. This dependency causes two problems: 'spurious' spatial variations of features at each scale, and the difficulty of associating features meaningfully across scales. To overcome these problems, we present a decoupled local energy and local phase representation of a real-valued wavelet transform, obtained by applying the Hilbert transform at each scale. We show that although local energy is equivalent to the power of the wavelet transform coefficients in terms of energy conservation, they differ in scale-space. The local energy representation not only provides a phase-independent local feature at each scale, but also facilitates the analysis of similarity in scale-space. Applications of this decoupled representation to signal segmentation and the analysis of fractal signals are presented. Examples are given throughout, using both real infra-red line-scan signals and simulated fractional Brownian motion data.
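A minimal sketch of the decoupling idea, assuming a discrete wavelet decomposition from PyWavelets and the analytic signal computed per scale; the wavelet choice and level count are illustrative.

```python
# Minimal sketch: decouple local energy and local phase at each wavelet
# scale by taking the analytic signal of the real-valued coefficients.
import numpy as np
import pywt
from scipy.signal import hilbert

def energy_phase_per_scale(signal, wavelet="db4", levels=4):
    """Return (energy, phase) pairs for each detail scale of a 1D signal."""
    coeffs = pywt.wavedec(signal, wavelet, level=levels)
    out = []
    for detail in coeffs[1:]:              # skip the approximation band
        analytic = hilbert(detail)         # Hilbert transform per scale
        energy = np.abs(analytic) ** 2     # phase-independent local energy
        phase = np.angle(analytic)         # decoupled local phase
        out.append((energy, phase))
    return out
```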
This paper introduces a unified approach to trajectory planning and tracking for an industrial mobile robot subject to non-holonomic constraints. We show (1) how a smooth trajectory is generated that takes into account the constraints from the dynamic environment and the robot kinematics; and (2) how a general predictive controller works to provide optimal tracking capability for nonlinear systems. The tracking performance of the proposed guidance system is analyzed by simulation.
The design of a robot head, Neuto, for active computer vision tasks is described. The head/eye platform uses a common elevation configuration and has four degrees-of-freedom. All joints are driven by dc servo motors coupled with incremental optical encoders and minimum backlash gear-boxes. Details of the mechanical design, head controller design, architecture of the active vision system, and the performance of the head are presented.