Quantification of SPECT (Single Photon Emission Computed Tomography) images can be more accurate if the region of interest (ROI) is segmented correctly. Segmenting the ROI from SPECT images is challenging due to the poor image resolution. SPECT is used to study kidney function, where the challenge is to accurately locate the kidneys and the bladder for analysis. This paper presents an
automated method for generating seed point locations for both kidneys using the anatomical locations of the kidneys and the bladder. The motivation for this work is the premise that the anatomical location of the bladder relative to the kidneys does not vary much across patients. A model is generated from manual segmentations of
the bladder and both kidneys on 10 patient datasets (including sum and max images). Centroids are estimated for the manually segmented bladder and kidneys. The comparatively easier bladder segmentation is performed first, and the bladder centroid coordinates are then fed into the model to generate seed points for the kidneys. The percentage errors in the organ centroid coordinates, from ground truth to the values estimated by our approach, are acceptable: approximately 1%, 6% and 2% in the X coordinates and approximately 2%, 5% and 8% in the Y coordinates of the bladder, left kidney and right kidney, respectively. Using a regression model and the location of the bladder, ROI generation for the kidneys is facilitated. Model-based seed point estimation will enhance the robustness of kidney ROI estimation for noisy cases.
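The model-based seed point idea above can be sketched as a regression from bladder centroid coordinates to kidney centroids. The sketch below uses synthetic training data and a plain least-squares fit purely for illustration; the offsets, noise levels, and helper names are assumptions, not the paper's data or implementation.

```python
import numpy as np

# Synthetic stand-in for the 10 training datasets: kidney centroids
# sit at a roughly fixed offset from the bladder centroid (illustrative
# offsets, not anatomical measurements).
rng = np.random.default_rng(0)
bladder = rng.uniform(40, 60, size=(10, 2))                  # (x, y) bladder centroids
left_kidney = bladder + [15, -30] + rng.normal(0, 1, (10, 2))
right_kidney = bladder + [-15, -30] + rng.normal(0, 1, (10, 2))

# Least-squares fit of [x_b, y_b, 1] -> kidney centroid for each side.
A = np.hstack([bladder, np.ones((10, 1))])
W_left, *_ = np.linalg.lstsq(A, left_kidney, rcond=None)
W_right, *_ = np.linalg.lstsq(A, right_kidney, rcond=None)

def kidney_seeds(bladder_centroid):
    """Predict left/right kidney seed points from a bladder centroid."""
    v = np.array([*bladder_centroid, 1.0])
    return v @ W_left, v @ W_right

# A new case: segment the bladder, take its centroid, predict seeds.
left, right = kidney_seeds((50.0, 55.0))
print(left, right)
```

In practice the regression would be trained on the manually segmented centroids from the sum and max images, but the fit-then-predict structure is the same.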
Low-pass filters affect the quality of clinical SPECT images through smoothing. Appropriate filter and parameter selection yields optimum smoothing, which in turn leads to better quantification and hence correct diagnosis and accurate interpretation by the physician. This study evaluates low-pass filters across SPECT reconstruction algorithms. The filters are evaluated by estimating the azimuth and elevation angles of the reconstructed cardiac volume. The low-pass filters studied are Butterworth, Gaussian, Hamming, Hanning and Parzen. Experiments are conducted using three reconstruction algorithms, FBP (filtered back
projection), MLEM (maximum likelihood expectation maximization) and OSEM (ordered subsets
expectation maximization), on four gated cardiac patient projections (two patients with stress and rest
projections). Each filter is applied with varying cutoff and order for each reconstruction algorithm (only the Butterworth filter is used for MLEM and OSEM). The azimuth and elevation angles are calculated from the reconstructed volume, and the variation observed in the angles with varying filter parameters is reported.
Our results demonstrate that the behavior of the Hamming, Hanning and Parzen filters (used with FBP) with varying cutoff is similar across all the datasets. The Butterworth filter (cutoff > 0.4) behaves similarly for all the datasets with all the algorithms, whereas with OSEM at a cutoff < 0.4 it fails to generate the cardiac orientation due to oversmoothing, and it gives an unstable response with FBP and MLEM. This study of the effect of low-pass filter cutoff and order on cardiac orientation, using three different reconstruction algorithms, provides useful insight into the optimal selection of filter parameters.
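The cutoff regimes discussed above can be made concrete with the standard Butterworth magnitude response. The sketch below is illustrative only: the cutoff and order values are examples, not the settings used in the study, and frequencies are in cycles/pixel with Nyquist at 0.5.

```python
import numpy as np

def butterworth(freqs, cutoff, order):
    """Standard Butterworth low-pass magnitude response:
    H(f) = 1 / sqrt(1 + (f / cutoff)^(2 * order))."""
    return 1.0 / np.sqrt(1.0 + (freqs / cutoff) ** (2 * order))

freqs = np.linspace(0.0, 0.5, 6)
# A low cutoff suppresses mid and high frequencies aggressively
# (the oversmoothing regime noted for OSEM at cutoff < 0.4);
# a higher cutoff preserves more detail.
for cutoff in (0.2, 0.4):
    print(cutoff, np.round(butterworth(freqs, cutoff, order=5), 3))
```

The response always equals 1/sqrt(2) at f = cutoff, and a higher order steepens the roll-off around that point.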
We present a method for the design and use of a digital mouse phantom for small animal optical imaging. We map the boundary of a mouse model from magnetic resonance imaging (MRI) data through image processing algorithms and discretize the geometry with a finite element (FE) descriptor. We use a validated FE implementation of the three-dimensional (3-D) diffusion equation to model transport of near infrared (NIR) light in the phantom, with a mesh resolution optimized for representative tissue optical properties on a computing system with 8-GB RAM. Our simulations demonstrate that a section of the mouse near the light source is adequate for optical system design, and that the variation of light intensity on the boundary is well within typical noise levels for up to 20% variation in the optical properties and in the nodes used to model the boundary of the phantom. We illustrate the use of the phantom in setting goals for specific binding of targeted exogenous fluorescent contrast agents based on anatomical location, by simulating a nearly tenfold change in the detectability of a 2-mm-deep target depending on its placement. The methodology described is sufficiently general that it may be extended to generate digital phantoms for designing clinical optical imaging systems.
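For context, the 3-D diffusion model referred to above is, in its standard steady-state form, the diffusion approximation to the radiative transfer equation. The notation below follows common usage in diffuse optics and is not necessarily the paper's own:

```latex
% Steady-state diffusion approximation for the photon fluence \Phi(\mathbf{r}):
%   D = 1 / \bigl(3(\mu_a + \mu_s')\bigr)  -- diffusion coefficient
%   \mu_a  -- absorption coefficient,  \mu_s' -- reduced scattering coefficient
%   S(\mathbf{r}) -- isotropic source term
-\nabla \cdot \bigl( D(\mathbf{r}) \, \nabla \Phi(\mathbf{r}) \bigr)
  + \mu_a(\mathbf{r}) \, \Phi(\mathbf{r}) = S(\mathbf{r})
```

The FE implementation solves this equation on the mesh derived from the MRI-based geometry, with tissue-dependent values of the optical coefficients.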
We propose Compressed Connected Components (CxCxC), a new fast algorithm for labeling connected components in binary images that makes use of compression. We break the given 3D image into non-overlapping 2x2x2 cubes of voxels (2x2 squares of pixels for 2D) and encode these binary values as the bits of a single decimal integer.
We perform connected component labeling on the resulting compressed data set, using a recursive labeling approach with smart masks on the encoded decimal values. The output is finally decompressed back to the original size by decimal-to-binary conversion of the cubes, retrieving the connected components in a lossless fashion. We demonstrate the efficacy of such encoding and labeling on large data sets (up to 1392 x 1040 for 2D and 512 x 512 x 336 for 3D). CxCxC achieves a speed gain of 4x for 2D and 12x for 3D, with memory savings of 75% for 2D and 88% for 3D, over the conventional connected components algorithm (recursive growing of component labels).
We also compare our method with those of VTK and ITK and find that we outperform both, with speed gains of 3x and 6x respectively for 3D. These features make CxCxC highly suitable for medical imaging and multimedia applications, where the size of the data sets and the number of connected components can be very large.
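The compression step described above can be sketched as packing each non-overlapping 2x2x2 cube of binary voxels into the bits of one integer, with a lossless inverse. This is a minimal illustration of the encoding only (not the labeling with smart masks); the bit ordering and function names are arbitrary choices, not the paper's.

```python
import numpy as np

def compress_2x2x2(volume):
    """Pack a binary volume (all dims divisible by 2) into one
    uint8 code per 2x2x2 cube: each voxel becomes one bit."""
    z, y, x = volume.shape
    blocks = volume.reshape(z // 2, 2, y // 2, 2, x // 2, 2)
    # Reorder axes so each cube's 8 voxels are contiguous.
    blocks = blocks.transpose(0, 2, 4, 1, 3, 5).reshape(-1, 8)
    weights = 1 << np.arange(8)              # bit weight per voxel position
    return (blocks * weights).sum(axis=1).astype(np.uint8)

def decompress_2x2x2(codes, shape):
    """Lossless inverse: unpack the integer codes back to binary voxels."""
    z, y, x = shape
    bits = (codes[:, None] >> np.arange(8)) & 1
    blocks = bits.reshape(z // 2, y // 2, x // 2, 2, 2, 2)
    return blocks.transpose(0, 3, 1, 4, 2, 5).reshape(z, y, x)

vol = (np.random.default_rng(1).random((4, 4, 4)) > 0.5).astype(np.uint8)
codes = compress_2x2x2(vol)                  # 64 voxels -> 8 integers
assert np.array_equal(decompress_2x2x2(codes, vol.shape), vol)
```

The eightfold reduction in element count (64 voxels to 8 codes here) is the source of the memory savings, since labeling then operates on the compressed codes rather than on individual voxels.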