A quantitative description of the deterministic properties of a CT system is necessary when evaluating image
quality. The most common such metric is the modulation transfer function (MTF), usually calculated from a
line spread function (LSF) or point spread function (PSF). Currently, there exist many test objects used to
measure the LSF or PSF. In this paper we report a comparison of these measures using a thin-foil slit test
object, a Teflon cube edge test object, and a novel "negative" cube test object. Images were acquired using a
custom-built bench-top flat-panel-based cone-beam CT scanner and a cylindrical water-filled PMMA phantom
with the test objects embedded in the middle. From the 3-dimensional reconstructed volumes, we estimated the
LSF either directly or from the edge spread function. From these, the modulation transfer function
can be estimated, and the frequency-dependent image transfer of each object can be reported.
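The edge-to-MTF chain summarized above (LSF obtained from the edge spread function, then Fourier transformed) can be sketched as follows. This is an illustrative implementation, not the authors' exact processing; in particular, the Hanning window applied to the tails is an assumed, though common, practical choice.

```python
import numpy as np

def mtf_from_esf(esf, pixel_pitch_mm):
    """Estimate the MTF from an oversampled edge spread function (ESF).

    esf: 1D array of edge profile values across a (monotonic) edge.
    pixel_pitch_mm: effective sample spacing in mm.
    Returns (frequencies in cycles/mm, MTF normalized to 1 at zero frequency).
    """
    # Differentiate the ESF to obtain the line spread function (LSF)
    lsf = np.gradient(esf, pixel_pitch_mm)
    # Window the LSF to suppress noise in the tails (assumed choice)
    lsf = lsf * np.hanning(lsf.size)
    # The MTF is the normalized magnitude of the Fourier transform of the LSF
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch_mm)
    return freqs, mtf
```

For slit or pinhole data the differentiation step is skipped, since the measurement already approximates the LSF or PSF directly.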
KEYWORDS: Modulation transfer functions, Signal to noise ratio, Computed tomography, Image quality standards, Scanners, Medical imaging, Spatial frequencies, Imaging systems, Point spread functions, Physics
The current IEC standard method for characterizing noise in CT scanners is based on the pixel standard deviation of the CT image of a water-equivalent uniform phantom. However, the standard deviation does not account for correlations in the noise, potentially generating misleading results about image quality. In this paper we
investigate a method for estimating the Fourier-based noise power spectrum (NPS) for the characterization of noise in CT, for CT scanners with linear, non-adaptive reconstruction algorithms. The IEC currently evaluates the deterministic properties of CT scanners with the Fourier-based modulation transfer function (MTF). By accounting for the spatial correlations in both the stochastic and deterministic description of an imaging system, the system signal-to-noise ratio (SNR) can be determined more accurately. Here we investigate a method for estimating the MTF and the NPS of a CT scanner in the axial plane. Furthermore, we present examples of the Fourier SNR calculated from the MTF and the NPS in order to demonstrate that it gives more reasonable results than the pixel SNR. The MTF was estimated following methods available in the current literature. For the characterization of noise we used a standard water phantom, while for the point spread function (PSF) we used a tungsten wire phantom in air. Images were taken at four different source current settings and reconstructed with four different filters. We showed that the pixel SNR ranks the reconstruction filters differently from the Fourier SNR.
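The two metrics being contrasted can be sketched as follows. This is illustrative only: the common radial frequency grid and the prewhitening form of the Fourier SNR are assumptions, not the paper's exact estimator.

```python
import numpy as np

def pixel_snr(roi):
    """Conventional pixel SNR: mean over standard deviation in a uniform ROI."""
    return roi.mean() / roi.std()

def fourier_snr(signal_spectrum, mtf, nps, df):
    """Fourier-domain SNR for a known signal (prewhitening form, assumed).

    signal_spectrum: |S(f)| of the noise-free object on a radial frequency grid.
    mtf: system MTF, nps: noise power spectrum, on the same grid.
    df: frequency bin width.
    """
    # Signal is transferred through the MTF; noise is weighted by the NPS
    integrand = (signal_spectrum * mtf) ** 2 / nps
    return np.sqrt(np.sum(integrand) * df)
```

Because the NPS weights each frequency band by its actual noise power, two reconstruction filters that yield the same pixel standard deviation can yield different Fourier SNRs, which is the ranking discrepancy the abstract reports.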
Studies suggest that dose to the breast leads to a higher lifetime attributable cancer incidence risk from a chest CT scan
for women compared to men. Numerous methods have been proposed for reducing dose to the breast during CT
scanning, including bismuth shielding, tube current modulation, partial-angular scanning, and reduced kVp. These
methods differ in how they alter the spectrum and fluence across projection angle. This study used Monte Carlo CT
simulations of a voxelized female phantom to investigate the energy (dose) deposition in the breast as a function of both
photon energy and projection angle. The resulting dose deposition matrix was then used to investigate several questions
regarding dose reduction to the breast: (1) Which photon energies deposit the most dose in the breast? (2) How does
increased filtration compare to tube current reduction in reducing breast dose? (3) Do reduced-kVp scans reduce dose
to the breast, and if so, by what mechanism? The results demonstrate that while high-energy photons deposit more dose per
emitted photon, the low-energy photons deposit more dose to the breast for a 120 kVp acquisition. The results also
demonstrate that decreasing the tube current for the AP views to match the fluence exiting a shield deposits nearly the
same dose to the breast as when using a shield (within ~1%). Finally, results suggest that the dose reduction observed
during lower kVp scans is caused by reduced photon fluence rather than the elimination of high-energy photons from the
beam. Overall, understanding the mechanisms of dose deposition in the breast as a function of photon energy and
projection angle enables comparisons of dose reduction methods and facilitates further development of optimized dose
reduction schemes.
KEYWORDS: Signal to noise ratio, Computed tomography, Imaging systems, Medical imaging, Scanners, Physics, Current controlled current source, Interference (communication), Computing systems, Polymethylmethacrylate
In order to compare different imaging systems, it is necessary to obtain detailed information about the system noise, its deterministic properties and task-specific signal-to-noise ratio (SNR). The current standard method for characterizing noise in CT scanners is based on the pixel standard deviation of the image of a water-equivalent
uniform phantom. The Fourier-based noise power spectrum (NPS) improves on the limitations of the pixel standard deviation by accounting for noise correlations. However, it has been shown that the Fourier methods used to describe the system performance result in systematic errors, as they make limiting assumptions such as shift invariance and wide-sense stationarity, which are not satisfied by real CT systems. For a more general characterization of the imaging system noise, a covariance matrix eigenanalysis can be performed. In this paper we present the experimental methodology for the evaluation of the noise of computed tomography systems. We used a bench-top flat-panel-based cone-beam CT scanner and a cylindrical water-filled PMMA phantom. For the 3-dimensional reconstructed volume, we calculated the covariance matrix, its eigenvectors and eigenvalues for the xy-plane as well as for the yz-plane, and compared the results with the NPS. Furthermore, we analyzed the location-specific noise in the images. The evaluation of the noise is a first step toward determining the task-specific SNR.
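A minimal sketch of the covariance eigenanalysis described above, assuming noise-only realizations are available (e.g., repeated scans with the mean reconstruction subtracted). This is not the authors' exact pipeline, only the core computation:

```python
import numpy as np

def noise_covariance_eigs(realizations):
    """Eigenanalysis of the noise covariance matrix from repeated noise images.

    realizations: array of shape (n_images, n_pixels); each row is one
    flattened noise-only ROI (e.g., a mean-subtracted slice).
    Returns (eigenvalues, eigenvectors) sorted by descending eigenvalue.
    """
    # Sample covariance across realizations (rows are observations)
    k = np.cov(realizations, rowvar=False)
    # K is symmetric, so eigh returns real eigenvalues / orthonormal eigenvectors
    vals, vecs = np.linalg.eigh(k)
    order = np.argsort(vals)[::-1]
    return vals[order], vecs[:, order]
```

Unlike the NPS, this decomposition does not assume stationarity: the eigenvectors need not be sinusoids, so location-dependent noise structure survives in the analysis.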
Accurately characterizing the radiation dose delivered in multi-slice CT (MSCT) is becoming a concern as the maximum beam width
of some modern CT scanners tends to become wider than the 100 mm charge-collection length of the pencil
ionization chamber generally used in CT dosimetry. We investigate two alternative approaches for better
characterization of CT dose than the conventional evaluation of CTDI100. First, we simulate dose profiles
and energy deposition in phantoms longer than the typically used 14-15 cm long right-circular cylinders.
Second, we explore the accuracy and practicality of
applying mathematical convolution to a scatter kernel in order to generate dose profiles. A basic requirement for
any newly designed phantom is that it be able to capture approximately the same dose as would an infinitely long
cylinder, but yet be of a size and weight that a person could easily carry and position. Using the PENELOPE
Monte Carlo package, we simulated dose profiles in cylindrical polymethyl methacrylate (PMMA) phantoms of
10, 16, 20, 24 and 32 cm diameter and 15, 30 and 300 cm length. Beam widths were varied from 1 cm to 60
cm. The phantom lengths necessary for the dose integrals to capture the values associated with the scatter tails as well as
with the primary radiation of the profile were then calculated as the full width at five percent of the maximum dose.
The resulting lengths suggest that to accommodate wide beam widths, phantoms longer than those currently
used are necessary. The results also suggest that using a longer phantom is a relatively more accurate approach,
while using mathematical convolution is simpler and more practical to implement than using the long phantoms
designed according to direct Monte Carlo simulations.
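The convolution approach and the full-width-at-5%-of-maximum metric can be sketched as follows, in an illustrative 1D model with an assumed normalized scatter kernel (not the paper's Monte Carlo-derived kernels):

```python
import numpy as np

def dose_profile_by_convolution(primary, kernel):
    """Approximate a longitudinal dose profile as primary convolved with scatter.

    primary: 1D primary-beam fluence profile along z (arbitrary units).
    kernel: normalized point-dose-spread kernel along z (assumed to sum to 1).
    """
    return np.convolve(primary, kernel, mode="same")

def full_width_at_fraction(z, profile, fraction=0.05):
    """Full width of the profile at a given fraction of its maximum.

    fraction=0.05 gives the full width at 5% of maximum used above to size
    phantoms so the dose integral captures the scatter tails.
    """
    threshold = fraction * profile.max()
    above = np.where(profile >= threshold)[0]
    return z[above[-1]] - z[above[0]]
```

A broad kernel widens the 5%-of-maximum extent well beyond the collimated beam width, which is why wide-beam scanners push the required phantom length past the familiar 14-15 cm cylinders.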
The simulation of imaging systems using Monte Carlo x-ray transport codes is a computationally intensive
task. Typically, many days of computation are required to simulate a radiographic projection image and, as
a consequence, the simulation of the hundreds of projections needed to perform a tomographic reconstruction
may require an unaffordable amount of computing time. To speed up x-ray transport simulations, a MC code
that can be executed in a graphics processing unit (GPU) was developed using the CUDATM programming
model, an extension to the C language for the execution of general-purpose computations on NVIDIA's GPUs.
The code implements the accurate photon interaction models from PENELOPE and takes full advantage of the
GPU massively parallel architecture by simulating hundreds of particle tracks simultaneously. In this work we
describe a new version of this code adapted to the simulation of computed tomography (CT) scans, which allows parallel execution on multiple GPUs. An example simulation of a cardiac CT using a detailed voxelized anthropomorphic phantom is presented. A comparison of the simulation computational performance on one or multiple GPUs and on a CPU (Central Processing Unit), and a benchmark against a standard PENELOPE code, are provided. This study shows that low-cost GPU clusters are a good alternative to CPU clusters for Monte Carlo simulation of x-ray transport.
KEYWORDS: Signal to noise ratio, Image processing, Imaging systems, Modulation transfer functions, Image acquisition, Sensors, Digital mammography, Breast, Mammography, Solids
Pixel Signal-to-Noise Ratio (SNR) is a commonly used clinical metric for evaluating mammography. However,
as we show in this paper, the pixel SNR can produce misleading estimates of system detectability when image processing
is used. We developed a simple, reliable and clinically applicable methodology to evaluate mammographic
imaging systems using a task SNR that accounts for the imaging system performance in the presence of the
patient. We used the Hotelling observer method in spatial frequency domain to calculate the task SNR of small
disk test objects embedded in the breast tissue-equivalent series (BRTES) phantom for a GE Senographe DS Full-Field Digital Mammography (FFDM) system. The results were compared to the corresponding pixel SNR calculation. We calculated the Hotelling observer SNR by estimating the generalized modulation transfer function (GMTF), generalized normalized noise power spectrum (GNNPS) and generalized noise equivalent quanta (GNEQ) in the presence of the breast phantom. The task SNR we calculated increased with the square root of the exposure, as expected. Furthermore, we showed that the method is stable under image processing. The task SNR is a more reliable method for evaluating the performance of imaging systems, especially under realistic clinical conditions where patient-equivalent phantoms or image processing are used.
KEYWORDS: Signal to noise ratio, Sensors, Imaging systems, Modulation transfer functions, Interference (communication), Quantum electronics, Signal detection, X-ray detectors, Digital x-ray imaging, Arteries
For task-specific evaluation of imaging systems it is necessary to obtain detailed descriptions of their noise and
deterministic properties. In the past we have developed an experimental and theoretical methodology to estimate
the deterministic detector response of a digital x-ray imaging system, also known as the H matrix. In this paper
we have developed the experimental methodology for the evaluation of the quantum and electronic noise of
digital radiographic detectors using the covariance matrix K. Using the H matrix we calculated the transfer
of a simulated coronary artery constriction through an imaging system's detector, and with the covariance
matrix we calculated the detectability (or Signal-to-Noise Ratio) and the detection probability. The eigenvalues
and eigenvectors of the covariance matrix were presented and the electronic and quantum noise were analyzed.
We found that the exposure at which the electronic noise equals the quantum noise at 90 kVp was 0.2 μR. We
compared the ideal Hotelling observer with the Fourier definition of the SNR for a toroidal stenosis on a cylindrical
vessel. Because of the shift-invariance and cyclo-stationarity assumptions, the Fourier SNR overestimates the
performance of imaging systems. This methodology can be used for task-specific evaluation and optimization of
a digital x-ray imaging system.
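The spatial-domain ideal linear (Hotelling) observer detectability built from the covariance matrix K can be sketched as follows; the hypothesis means are assumed to have already been passed through the system's deterministic response (the H matrix):

```python
import numpy as np

def hotelling_snr(signal_present, signal_absent, covariance):
    """Spatial-domain ideal linear (Hotelling) observer SNR.

    signal_present / signal_absent: flattened mean images under the two
    hypotheses; covariance: noise covariance matrix K (pixels x pixels).
    SNR^2 = ds^T K^{-1} ds for the mean difference signal ds.
    """
    ds = signal_present - signal_absent
    # Solve K w = ds rather than inverting K explicitly (better conditioned)
    w = np.linalg.solve(covariance, ds)
    return float(np.sqrt(ds @ w))
```

Because K carries the full covariance, this expression makes none of the shift-invariance or cyclo-stationarity assumptions of the Fourier SNR, which is why the Fourier form can overestimate system performance.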
KEYWORDS: Imaging systems, X-rays, X-ray imaging, 3D image processing, Signal to noise ratio, Breast imaging, Mammography, Signal detection, Breast, Optical spheres
For the last few years, development and optimization of three-dimensional (3D) x-ray breast imaging systems,
such as breast tomosynthesis and computed tomography, has drawn much attention from the medical imaging
community, in both academia and industry. However, the trade-offs between patient safety and the efficacy of the
devices have yet to be investigated with use of objective performance metrics. Moreover, as the 3D imaging
systems give depth information that was not available in planar mammography, standard mammography quality
assurance and control (QA/QC) phantoms used for measuring system performance are not appropriate since they
do not account for background variability and clinically relevant tasks. Therefore, it is critical to develop QA/QC
methods that incorporate background variability with use of a task-based statistical assessment methodology.1
In this work, we develop a physical phantom that simulates variable backgrounds using spheres of different
sizes and densities, and present an evaluation method based on statistical decision theory,2 in particular, with
use of the ideal linear observer, for evaluating planar and 3D x-ray breast imaging systems. We demonstrate
our method for a mammography system and compare the variable phantom case to that of a phantom of the
same dimensions filled with water. Preliminary results show that measuring the system's detection performance
without consideration of background variability may lead to misrepresentation of system performance.
KEYWORDS: Signal to noise ratio, Sensors, Imaging systems, Mammography, Signal detection, Polymethylmethacrylate, X-rays, X-ray detectors, Interference (communication), Laser range finders
A common method for evaluating projection mammography is Contrast-Detail (CD) curves derived from the CD
phantom for Mammography (CDMAM). The CD curves are derived either by human observers, or by automated
readings. Both methods have drawbacks that limit their reliability. The human-based reading is significantly
affected by reader variability, reduced precision and bias. On the other hand, the automated methods suffer from
limited statistics. The purpose of this paper is to develop a simple and reliable methodology for the evaluation
of mammographic imaging systems using the Signal Known Exactly/Background Known Exactly (SKE/BKE)
detection task for signals relevant to mammography. In this paper, we used the spatial definition of the ideal,
linear (Hotelling) observer to calculate the task-specific SNR for mammography and discussed the results. The
noise covariance matrix as well as the detector response H matrix of the imaging system were estimated and
used to calculate the SKE/BKE SNR for the simulated discs of the CDMAM. The SNR as a function of exposure,
disc diameter and disc thickness was calculated.
We developed an efficient, depth- and energy-dependent Monte Carlo model for columnar CsI detectors. The
optical photon, electron/positron Monte Carlo package MANTIS developed by our group, was used to generate
optical photon response and collection efficiency as a function of the x-ray/electron interaction depth for a
realistic scintillator geometry. The detector geometry we used for the simulations was reported in the past and
is based on a 500 μm thick columnar CsI scintillator. The resulting depth-dependent optical photon responses
were fit to a parametrized Gaussian mixture model. The model parameters were the depth-dependent radial
shift of the response peak, the depth dependent widths of the Gaussians, and the depth-dependent magnitude
of the Gaussians in the mixture. The depth-dependent optical spread has a maximum spatial shift of 53 μm.
The optical collection efficiency at the photo-diode layer followed a power law varying from 90% for interactions
at the scintillator exit surface to 20% for interactions at the detector entrance. The responses were subsequently
incorporated into penMesh, a PENELOPE based Monte Carlo x-ray, electron/positron transport simulation
package for generating clinically realistic images of triangular mesh phantoms. The resulting detector responses
from this empirical model were compared against the full x-ray/electron/optical photon simulation using the
package MANTIS, showing good agreement. The simulation speed, using the optical transport model in penMesh,
increases by two orders of magnitude compared to MANTIS.
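A sketch of how such a depth-dependent Gaussian-mixture response might be parametrized and evaluated. The interpolation over a normalized depth grid is an illustrative assumption, not the published fit:

```python
import numpy as np

def columnar_response(r, depth, shifts, widths, weights):
    """Hypothetical depth-dependent Gaussian-mixture optical response.

    r: radial positions (same units as shifts/widths, e.g., um).
    depth: normalized interaction depth in [0, 1].
    shifts, widths, weights: arrays of shape (n_components, n_depths) giving
    each Gaussian's center shift, width and magnitude on a uniform depth grid
    (illustrative parametrization).
    """
    grid = np.linspace(0.0, 1.0, shifts.shape[1])
    resp = np.zeros_like(r, dtype=float)
    for k in range(shifts.shape[0]):
        # Interpolate each component's parameters to the requested depth
        mu = np.interp(depth, grid, shifts[k])
        sigma = np.interp(depth, grid, widths[k])
        a = np.interp(depth, grid, weights[k])
        resp += a * np.exp(-0.5 * ((r - mu) / sigma) ** 2)
    return resp
```

Evaluating such a closed-form model per x-ray interaction, instead of tracking every optical photon, is what yields the reported two-orders-of-magnitude speedup over the full simulation.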
We have constructed a fourth-generation anthropomorphic phantom which, in addition to the realistic description of the
human anatomy, includes a coronary artery disease model. A watertight version of the NURBS-based Cardiac-Torso
(NCAT) phantom was generated by converting the individual NURBS surfaces of each organ into closed, manifold and
non-self-intersecting tessellated surfaces. The resulting 330 surfaces of the phantom organs and tissues now comprise
~5×10⁶ triangles whose size depends on the individual organ surface normals. A database of the elemental composition of each organ was generated, and material properties such as density and scattering cross-sections were defined using
PENELOPE. A 300 μm resolution model of a heart with 55 coronary vessel segments was constructed by fitting smooth
triangular meshes to a high-resolution cardiac CT scan that we segmented, and was subsequently registered inside the torso
model. A coronary artery disease model that uses hemodynamic properties such as blood viscosity and resistivity was used
to randomly place plaque within the artery tree. To generate x-ray images of the aforementioned phantom, our group has
developed an efficient Monte Carlo radiation transport code based on the subroutine package PENELOPE, which employs
an octree spatial data-structure that stores and traverses the phantom triangles. X-ray angiography images were generated
under realistic imaging conditions (90 kVp, 10° W-anode spectrum with 3 mm Al filtration, ~5×10¹¹ x-ray source photons, and 10% per volume iodine contrast in the coronaries). The images will be used in an optimization algorithm to select the
optimal technique parameters for a variety of imaging tasks.
KEYWORDS: Particles, Monte Carlo methods, Computer simulations, Sensors, Angiography, X-ray imaging, Imaging systems, Data modeling, Medical imaging, Electrons
X-ray imaging system optimization increases the benefit-to-cost ratio by reducing the radiation dose to the patient while maximizing image quality. We present a new simulation tool for the generation of realistic medical x-ray images for assessment and optimization of complete imaging systems.
The Monte Carlo code simulates radiation transport physics using the subroutine package PENELOPE, which accurately simulates the transport of electrons and photons within the typical medical imaging energy range. The new code implements a novel object-oriented geometry package that allows simulations with homogeneous objects of arbitrary shapes described by triangle meshes. The flexibility of this code, which uses the industry standard PLY input-file format, allows the use of detailed anatomical models developed using computer-aided design tools applied to segmented CT and MRI data. The use of triangle meshes greatly simplifies the ray-tracing algorithm without reducing the generality of the code, since most surface models can be tessellated into triangles while retaining their geometric details. Our algorithm incorporates an octree spatial data structure to sort the triangles and accelerate the simulation, reaching execution speeds comparable to the original quadric geometry model of PENELOPE. Coronary angiograms were simulated using a tessellated version of the NURBS-based Cardiac-Torso (NCAT) phantom. The phantom models 330 objects, comprising a total of 5 million triangles. The dose received by each organ and the contribution of the different scattering processes to the final image were studied in detail.
Currently, the most accurate measurement of the detector point response can be performed with the pinhole method. The small size of the pinhole, however, severely reduces the x-ray intensity output, requiring long exposures that can potentially reduce the x-ray tube life cycle. Even though deriving the 1D Line Response Function (LRF) of the detector using the edge method is much more efficient, the measurement process introduces a convolution with a line, in addition to the common pixel sampling, effectively broadening the LRF. We propose a practical method to recover the detector point response function by removing the effects of the line and the pixel from a set of Edge Response Function (ERF) measurements. We use the imaging equation to study the effects of the edge, line and pixel measurements, and derive an analytical formula for the recovered detector point response function based on a Gaussian mixture model. The method allows for limited recovery of asymmetries in the detector response function. We verify the method with pinhole and edge measurements of a digital flat panel detector. Monte Carlo simulations are also performed, using the MANTIS x-ray and optical photon and electron transport simulation package, for comparison. We show that the standard LRF underestimates the detector response when compared with the recovered response. Our simulation results suggest that both methods for estimating the detector response have limitations in that they cannot completely capture rotational asymmetries or other morphological details smaller than the detector pixel size.
Typical methods to measure the resolution properties of x-ray detectors use slit or edge devices. However,
complete models of imaging systems for system optimization require knowledge of the point-response function
of the detector. In this paper, we report on the experimental methods developed for the validation of the
point-response function of an indirect columnar CsI:Tl detector predicted by Monte Carlo using MANTIS. We
describe simulation results that replicate experimental resolution measurements using edge and pinhole devices.
The experimental setup consists of a high-resolution CCD camera with a 1-to-1 fiber optic faceplate that allows
measurements for different scintillation screens. The results of these experiments and simulations constitute
a resource for the development and validation of the columnar models of phosphor screens proposed as part
of previous work with MANTIS. We compare experimental high-resolution pinhole responses of two different
CsI(Tl) screens to predictions from MANTIS. The simulated response matches the measurements reasonably well
at normal and off-normal x-ray incidence angles when a realistic pinhole is used in the simulation geometry. Our
results will be combined with results on Swank factors determined from Monte Carlo pulse-height spectra to
provide a comprehensive validation of the phosphor models, therefore allowing their use for in silico system
optimization.
We developed an algorithm based on a rule-based threshold framework to segment the coronary arteries from
angiographic computed tomography (CTA) data. Computerized segmentation of the coronary arteries is a
challenging procedure due to the presence of diverse anatomical structures surrounding the heart on cardiac
CTA data. The proposed algorithm incorporates various levels of image processing and organ information
including region, connectivity and morphology operations. It consists of three successive stages. The first stage
involves the extraction of the three-dimensional scaffold of the heart envelope. This stage is semiautomatic,
requiring a reader to review the CTA scans and manually select points along the heart envelope in slices. These
points are further processed using a surface spline-fitting technique to automatically generate the heart envelope.
The second stage consists of segmenting the left heart chambers and coronary arteries using grayscale threshold,
size and connectivity criteria. This is followed by applying morphology operations to further detach the left and
right coronary arteries from the aorta. In the final stage, the 3D vessel tree is reconstructed and labeled using
an Isolated Connected Threshold technique. The algorithm was developed and tested on a patient coronary
artery CTA that was graciously shared by the Department of Radiology of the Massachusetts General Hospital.
The test showed that our method consistently segmented the vessels above 79% of the maximum gray-level and
automatically extracted 55 of the 58 coronary segments that can be seen on the CTA scan by a reader. These
results are an encouraging step toward our objective of generating high resolution models of the male and female
heart that will be subsequently used as phantoms for medical imaging system optimization studies.
Cardiovascular disease is considered the leading cause of death in the US, accounting for 38% of all deaths. There are gender differences in the size of coronary arteries and in the character and location of atherosclerotic lesions that affect the detection of coronary artery disease with the medical imaging modalities currently used (e.g. angiography, computed tomography). These differences also affect the safety and effectiveness of image-guided interventions using therapeutic devices. For the optimization of the medical imaging modalities used for this specific task we require the generation of clinically-realistic, gender-specific images of healthy and pathological coronary angiograms. For this purpose we have created a gender-specific statistical model of a pathological coronary artery tree. Starting from "healthy" heart-phantoms created from high resolution CT scans of cadaver hearts of both genders, the model uses prevalence data obtained from clinical studies of patients with significant (>50% stenosis) coronary artery disease (CAD). The model determines the plaque deposit locations and character (length, percent stenosis) for each case, based on a flow model. These data are then used to generate artificially diseased artery trees, embedded in a gender-specific torso model. Using an x-ray and optical photon Monte-Carlo simulation program, we then generate simulated angiograms exhibiting realistic disease patterns. The severity of each angiogram is determined from a set of rules that combines the geometrically increasing severity of lesions, the cumulative effects of multiple obstructions, the significance of their locations, the modifying influence of the collaterals, and the size and quality of the distal vessels. The simulated angiograms will consequently be read by model and human observers. 
The probability of detection derived in combination with the severity score will be used as a figure of merit for the patient- and gender-specific optimization of the imaging modality under investigation.
In previous work, we and others have validated detector models
for computational simulations of imaging system performance by
matching, to within a small difference, specific aspects of the
detector's performance, e.g., the modulation transfer function or
the light output. In this work, instead, we selected three parameters
that, together, represent a more complete description of the imaging
properties of the phosphor screen to be modeled. The three
performance parameters are the information or Swank factor
determined from pulse-height spectra, the light output (either in
absolute or relative scale), and the point-response function. Using
this general methodology, we created screen models that
exhibit good agreement with recent experimental measurements
available in the literature, over a wide x-ray energy range (18-75
keV), and for front- and back-screen configurations. The models are
being used in conjunction with MANTIS, a Monte Carlo code for
simulating imaging systems that tracks x rays, electrons, and
optical photons in the same geometric model, with x-ray and electron
physics models from the PENELOPE package, and optical physics
models from DETECT-II. This study allows us to incorporate
realistic detector models into a detailed and complete Monte Carlo
simulation of the entire imaging system, including the object and
its absorbed dose map, and the properties of the imaging
acquisition.
Digital clinical imaging systems designed for radiography or cone-beam computed-tomography are highly shift-variant.
The x-ray cone angle of such systems varies between 0° and 15°, resulting in large variations of the focal spot projection
across the image field. Additionally, the variable x-ray beam incidence across the detector field creates a location-dependent
asymmetric detector response function. In this paper we propose a practical method for the measurement of
the angle-of-incidence-dependent, two-dimensional presampled detector response function. We also present a method for
the measurement of the source radiance at the center of the detector, and provide a geometric transformation for
reprojecting it to any given location in object space. The measurement procedure involves standard, readily available tools
such as a focal-spot/pinhole camera, and an edge. Using the measured data and a model based on smooth functions
derived from Monte Carlo simulations we obtain the location-dependent detector response function. In this paper we
ignore scatter, therefore the resulting location dependent system response is a function of the focal spot and detector
response. The system matrix, a representation of the full deterministic point response of the system for all positions in
object space, can then be calculated. The eigenvalues and eigenvectors of the system matrix are generated and
interpreted.
Standard objective parameters such as MTF, NPS, NEQ and DQE do not reflect complete system performance, because they do not account for geometric unsharpness due to finite focal spot size and scatter due to the patient. The inclusion of these factors led to the generalization of the objective quantities, termed GMTF, GNNPS, GNEQ and GDQE defined at the object plane. In this study, a commercial x-ray image intensifier (II) is evaluated under this generalized approach and compared with a high-resolution, ROI microangiographic system previously developed and evaluated by our group. The study was performed using clinically relevant spectra and simulated conditions for neurovascular angiography specific for each system. A head-equivalent phantom was used, and images were acquired from 60 to 100 kVp. A source to image distance of 100 cm (75 cm for the microangiographic system) and a focal spot of 0.6 mm were used. Effects of varying the irradiation field-size, the air-gaps, and the magnifications (1.1 to 1.3) were compared. A detailed comparison of all of the generalized parameters is presented for the two systems. The detector MTF for the microangiographic system is in general better than that for the II system. For the total x-ray imaging system, the GMTF and GDQE for the II are better at low spatial frequencies, whereas the microangiographic system performs substantially better at higher spatial frequencies. This generalized approach can be used to more realistically evaluate and compare total system performance leading to improved system designs tailored to the imaging task.
Under certain assumptions the detectability of the ideal observer can be defined as the integral of the system Noise Equivalent Quanta multiplied by the squared object spatial frequency distribution. Using the detector Noise Equivalent Quanta (NEQ_D) for the calculation of detectability inadequately describes the performance of an x-ray imaging system because it does not take into account the effects of patient scatter and geometric unsharpness. As a result, the ideal detectability index is overestimated, and hence the efficiency of the human observer in detecting objects is underestimated. We define a Generalized NEQ (GNEQ) for an x-ray system referenced at the object plane that incorporates the scatter fraction, the spatial distributions of scatter and focal spot, the detector MTF (MTF_D), and the detector Normalized Noise Power Spectrum (NNPS_D). This GNEQ was used in the definition of the ideal detectability for the evaluation of the human observer efficiency during a two-Alternative Forced-Choice (2-AFC) experiment, and was compared with the case where only the NEQ_D was used in the detectability calculations. The 2-AFC experiment involved the detection of images of polyethylene tubes (diameters between 100–300 μm) filled with iodine contrast (concentrations between 0–120 mg/cm³) placed onto a uniform head-equivalent phantom positioned near the surface of a microangiographic detector (43 μm pixel size). The resulting efficiency of the human observer without accounting for the effects of scatter and geometric unsharpness was 30%. When these effects were considered, the efficiency increased to 70%. The ideal observer with the GNEQ provides a simple method for optimizing a complete imaging system.
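Under the stated assumptions (SKE/BKE task, stationarity), the detectability integral described above takes the familiar form below; the notation is assumed for illustration, with ΔS the object's spatial-frequency distribution:

```latex
\mathrm{GNEQ}(u,v) \;=\; \frac{\mathrm{GMTF}^{2}(u,v)}{\mathrm{GNNPS}(u,v)},
\qquad
{d'}^{2} \;=\; \iint \mathrm{GNEQ}(u,v)\,\bigl|\Delta S(u,v)\bigr|^{2}\,du\,dv .
```

Replacing GNEQ with the detector-only NEQ_D in the integral removes the scatter and focal-spot terms from the numerator, which is why the detector-only detectability is overestimated and the inferred human-observer efficiency drops.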
We study the properties of a new microangiographic system, consisting of a region-of-interest (ROI) microangiographic detector, x-ray source, and patient. The study was performed under conditions intended for clinical procedures such as neurological diagnostic angiograms as well as treatments of intracranial aneurysms and vessel stenoses. The study was performed in two steps. First, a uniform head-equivalent phantom was used as a “filter,” allowing us to study the properties of the detector alone under clinically relevant x-ray spectra. We report the detector MTF, NPS, NEQ, and DQE for beam energies ranging from 60 to 100 kVp and for different detector entrance exposures. In the second step, the phantom was placed adjacent to the detector, allowing scatter to enter the detector, and new measurements were obtained for the same beam energies and detector entrance exposures. Different radiation field sizes were studied, and the effects of different amounts of scatter were investigated. The spatial distribution of scatter was studied using the edge-spread method, and a generalized system MTF was obtained by combining the scatter MTF, weighted by the scatter fraction, with the detector MTF and the focal-spot unsharpness due to magnification. The NPS combined with the generalized MTF gave the generalized system NEQ and DQE. The generalized NEQ and the ideal object detectability were used to calculate the dose-area product to the patient for a 75% object detection probability. This was used as a system optimization method.
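The combination of scatter, focal-spot, and detector MTFs described above can be sketched under the common model in which the focal-spot MTF is evaluated at the geometrically scaled frequency (m−1)f/m, the detector MTF at f/m, and the scatter MTF is weighted by the scatter fraction ρ. All component curves and parameters below are hypothetical placeholders, not the measured values from this study:

```python
import numpy as np

def generalized_mtf(f, mtf_detector, mtf_focal, mtf_scatter, rho, m):
    """Sketch of a generalized system MTF referenced at the object plane.

    Assumes the weighted-combination model:
        GMTF(f) = MTF_F((m-1)f/m) * MTF_D(f/m) * [(1-rho) + rho*MTF_s(f)]
    where m is the geometric magnification and rho the scatter fraction.
    """
    return (mtf_focal((m - 1.0) * f / m)
            * mtf_detector(f / m)
            * ((1.0 - rho) + rho * mtf_scatter(f)))

# Hypothetical Gaussian component MTFs (illustrative parameters only).
mtf_d = lambda f: np.exp(-(f / 4.0) ** 2)   # detector
mtf_f = lambda f: np.exp(-(f / 6.0) ** 2)   # focal spot
mtf_s = lambda f: np.exp(-(f / 0.2) ** 2)   # scatter: sharp low-frequency drop

f = np.linspace(0.0, 5.0, 101)              # cycles/mm
gmtf = generalized_mtf(f, mtf_d, mtf_f, mtf_s, rho=0.3, m=1.2)
```

Because the scatter term decays quickly, the GMTF drops to roughly (1−ρ) at low-to-mid frequencies, reproducing the characteristic low-frequency contrast loss that scatter causes.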
New neuro-interventional devices such as stents require high-spatial-resolution image guidance to enable accurate localization both along the vessel axis and in a preferred rotational orientation around the axis. A new high-resolution angiographic detector has been designed with capability for micro-angiography at rates exceeding the 5 fps of our current detector and, additionally, with noise low enough and gain high enough for fluoroscopy. Although the performance requirements are demanding and the detector must fit within practical clinical space constraints, image guidance is only needed within an approximately 5 cm region of interest at the site of the intervention. To achieve the design goals, the new detector is being assembled from available components, which include a CsI(Tl) phosphor module coupled to a fiber-optic taper assembly, with a two-stage light image intensifier and a mirror between the output of the fiber taper and the input to a conventional high-performance optical CCD camera. The resulting acquisition modes include 50-micron effective pixels at up to 30 fps with the capability to adjust sensitivity for both fluoroscopy and angiography. Estimates of signal at the various stages of detection are made with quantum accounting diagrams (QAD).
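The quantum accounting diagram mentioned in the last sentence amounts to propagating the mean number of quanta through each gain or loss stage of the detection chain; the stage with the fewest quanta is the "quantum sink" that limits the system DQE. A minimal sketch, with hypothetical gain values (not measured figures for this detector):

```python
def quantum_accounting(q0, stages):
    """Propagate the mean number of quanta through a chain of gain/loss
    stages. stages is a list of (name, gain) pairs; gain < 1 is a loss.
    Returns (name, mean quanta after that stage) for every point in the chain.
    """
    chain = [("incident x rays", q0)]
    for name, gain in stages:
        chain.append((name, chain[-1][1] * gain))
    return chain

# Hypothetical CsI / fiber-taper / light-II / CCD chain (illustrative only).
chain = quantum_accounting(1000.0, [
    ("CsI absorption", 0.8),
    ("light photons per absorbed x ray", 1500.0),
    ("fiber-optic taper transmission", 0.05),
    ("light image intensifier gain", 50.0),
    ("CCD quantum efficiency", 0.4),
])
quantum_sink = min(chain, key=lambda stage: stage[1])
```

With these illustrative numbers the minimum falls at the x-ray absorption stage, the desired situation: the secondary stages then add little quantum noise beyond the unavoidable absorption statistics.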