By the time most retinal diseases are diagnosed, irreversible cellular loss has already occurred at the macroscopic level. Earlier detection of subtle structural changes at the single-photoreceptor level is now possible using the adaptive optics scanning light ophthalmoscope (AOSLO). This work aims to develop a fully automatic segmentation framework to extract cell boundaries from non-confocal split-detection AOSLO images of the cone photoreceptor mosaic in the living human eye. Significant challenges include anisotropy, heterogeneous cell regions arising from shading effects, and low contrast between cells and background. To overcome these challenges, we propose the use of: 1) a multi-scale Hessian response to detect heterogeneous cell regions, 2) convex hulls to create boundary templates, and 3) circularly constrained geodesic active contours to refine cell boundaries. We acquired images from three healthy subjects at eccentric retinal regions and manually contoured cells to generate ground truth for evaluating segmentation accuracy. The Dice coefficient, relative absolute area difference, and average contour distance were 82±2%, 11±6%, and 2.0±0.2 pixels (mean±SD), respectively. We find that strong shading effects from vessels are a main cause of cell over-segmentation and false segmentation of non-cell regions. Our segmentation algorithm can automatically and accurately segment photoreceptor cells in non-confocal AOSLO images, a first step toward longitudinal tracking of cellular changes in the individual eye over the course of disease progression.
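The multi-scale Hessian step can be sketched as follows. This is a minimal illustration of a scale-normalized determinant-of-Hessian response, not the paper's implementation; the sigma values are illustrative placeholders.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_cell_response(image, sigmas=(2.0, 3.0, 4.0)):
    """Maximum scale-normalized determinant-of-Hessian response over scales.

    Bright, roughly circular cell regions give strong positive responses;
    taking the per-pixel maximum over scales handles cells of varying size.
    """
    image = image.astype(np.float64)
    best = np.full(image.shape, -np.inf)
    for s in sigmas:
        # Second-order Gaussian derivatives, scale-normalized by s**2.
        ixx = gaussian_filter(image, s, order=(0, 2)) * s**2
        iyy = gaussian_filter(image, s, order=(2, 0)) * s**2
        ixy = gaussian_filter(image, s, order=(1, 1)) * s**2
        best = np.maximum(best, ixx * iyy - ixy**2)
    return best
```

Thresholding this response yields candidate cell regions whose convex hulls can then serve as boundary templates for the active-contour refinement.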
Cone photoreceptors are highly specialized cells responsible for the origin of vision in the human eye. Their inner segments can be noninvasively visualized using adaptive optics scanning light ophthalmoscopes (AOSLOs) with non-confocal split-detection capabilities. Monitoring the number of cones can lead to more precise metrics for real-time diagnosis and assessment of disease progression. Cell identification in split-detection AOSLO images is hindered by cell regions of heterogeneous intensity, arising from shadowing effects, and by low-contrast boundaries due to overlying blood vessels. Here, we present a multi-scale circular voting approach that overcomes these challenges through the novel combination of: 1) iterative circular voting to identify candidate cells based on their circular structure, 2) a multi-scale strategy to identify the optimal circular voting response, and 3) clustering to improve robustness while removing false positives. We acquired images from three healthy subjects at various retinal locations, spanning a large range of cell densities, and manually labeled cell locations to create ground truth for evaluating detection accuracy. The overall recall, precision, and F1 score were 91±4%, 84±10%, and 87±7% (mean±SD), respectively. Our method identifies cone photoreceptor inner segments well even with low-contrast cell boundaries and vessel obscuration. These encouraging results demonstrate that the proposed approach can robustly and accurately identify cells in split-detection AOSLO images.
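The core voting idea can be sketched in a single pass; the paper's iterative, multi-scale refinement and the clustering step are omitted, and the radii and gradient threshold below are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def circular_voting(image, radii=(4, 6, 8), grad_frac=0.2):
    """Single pass of gradient-based circular voting.

    For bright cells on a darker background, the gradient at a cell boundary
    points toward the cell interior, so each strong-gradient pixel casts
    votes at the candidate radii along its gradient direction; peaks in the
    accumulator mark likely cell centers.
    """
    img = gaussian_filter(image.astype(np.float64), 1.0)
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    acc = np.zeros(img.shape)
    ys, xs = np.nonzero(mag > grad_frac * mag.max())
    for y, x in zip(ys, xs):
        dy, dx = gy[y, x] / mag[y, x], gx[y, x] / mag[y, x]
        for r in radii:
            cy, cx = int(round(y + r * dy)), int(round(x + r * dx))
            if 0 <= cy < acc.shape[0] and 0 <= cx < acc.shape[1]:
                acc[cy, cx] += mag[y, x]
    return acc
```

In the multi-scale strategy, such accumulators would be computed at several radius ranges and the strongest response per location kept before clustering nearby detections.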
Renal calculi are among the most painful urologic disorders, accounting for 3 million treatments per year in the United States. The objective of this paper is the automated detection of renal calculi from CT colonography (CTC) images, in which they are one of the major extracolonic findings. However, the primary purpose of CTC protocols is screening for colon cancer, not the detection of renal calculi; the kidneys are imaged with significant amounts of noise in non-contrast CTC images, which makes the detection of renal calculi extremely challenging. We propose a computer-aided diagnosis method to detect renal calculi in CTC images. It is built on three novel techniques: 1) total variation (TV) flow to reduce image noise while preserving calculi, 2) maximally stable extremal region (MSER) features to find calculus candidates, and 3) salient feature descriptors based on intensity properties to train a support vector machine classifier and filter false positives. We selected 23 CTC cases with 36 renal calculi to analyze the detection algorithm; calculus size ranged from 1.0 mm to 6.8 mm. Fifteen cases were used for training, and the remaining eight cases for testing. The area under the receiver operating characteristic curve (AUC) was 0.92 on the training dataset and 0.93 on the testing dataset; the confidence intervals for AUC reported by ROCKIT were [0.8974, 0.9642] for training and [0.8799, 0.9591] for testing. These encouraging results demonstrate that our detection algorithm can robustly and accurately identify renal calculi in CTC images.
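The first two pipeline stages might look like the sketch below. Note the hedge: the paper uses TV flow, which is approximated here by skimage's Chambolle TV denoiser, and the MSER area bounds are assumptions roughly matching 1.0-6.8 mm calculi at typical CT resolution.

```python
import cv2
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def calculus_candidates(ct_slice, tv_weight=0.1):
    """Denoise a CT slice, then detect stable bright blobs as candidates.

    Returns MSER regions (arrays of pixel coordinates); the SVM-based
    false-positive filtering stage is not shown.
    """
    # Stand-in for TV flow: TV denoising suppresses noise while keeping
    # the small, high-contrast calculi (weight is an assumed setting).
    den = denoise_tv_chambolle(ct_slice.astype(np.float64), weight=tv_weight)
    gray = ((den - den.min()) / (np.ptp(den) + 1e-9) * 255).astype(np.uint8)
    mser = cv2.MSER_create()
    mser.setMinArea(4)      # assumed lower pixel-area bound
    mser.setMaxArea(400)    # assumed upper pixel-area bound
    regions, _ = mser.detectRegions(gray)
    return regions
```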
CT colonography (CTC) can increase the chance of detecting high-risk lesions, not only within the colon but anywhere in the abdomen, at low cost. Extracolonic findings such as calculi and masses are frequently found in the kidneys on CTC, and accurate kidney segmentation is an important step toward detecting them. However, non-contrast CTC images make kidney segmentation substantially challenging because the intensity values of kidney parenchyma are similar to those of adjacent structures. In this paper, we present a fully automatic kidney segmentation algorithm to support extracolonic diagnosis from CTC data. It is built upon three major contributions: 1) localizing kidney search regions by exploiting the segmented liver and spleen as well as body symmetry; 2) constructing a probabilistic shape prior that handles kidneys touching other organs; and 3) employing efficient belief propagation on the shape prior to extract the kidneys. We evaluated the accuracy of our algorithm on five non-contrast CTC datasets with manual kidney segmentations as ground truth. The Dice volume overlaps were 88%/89%, the root-mean-squared errors 3.4 mm/2.8 mm, and the average surface distances 2.1 mm/1.9 mm for the left/right kidney, respectively. We also validated robustness on 27 additional CTC cases, of which 23 were successfully segmented; in the four problematic cases, segmentation of the left kidney failed due to problems with the spleen segmentation. The results demonstrate that the proposed algorithm can automatically and accurately segment kidneys in CTC images, given prior correct segmentation of the liver and spleen.
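The reported metrics are standard and easy to reproduce; a minimal sketch for binary masks follows (the surface-distance variant shown is one-directional, whereas a symmetric version would average both directions, and voxel spacing must come from the CT header).

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice_overlap(seg, gt):
    """Dice volume overlap between two binary masks."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(seg, gt).sum() / (seg.sum() + gt.sum())

def average_surface_distance(seg, gt, spacing=(1.0, 1.0, 1.0)):
    """Mean distance (mm) from each surface voxel of seg to gt's surface."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    seg_surf = seg & ~binary_erosion(seg)       # boundary voxels of seg
    gt_surf = gt & ~binary_erosion(gt)          # boundary voxels of gt
    dist_to_gt = distance_transform_edt(~gt_surf, sampling=spacing)
    return dist_to_gt[seg_surf].mean()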
Automatic co-alignment of optical and virtual colonoscopy images can supplement traditional endoscopic procedures by providing more complete information of clinical value to the gastroenterologist. In this work, we present a comparative analysis of our optical flow based technique for colonoscopy tracking, in relation to current state-of-the-art methods, in terms of tracking accuracy, system stability, and computational efficiency. Our optical flow based colonoscopy tracking algorithm starts by computing multi-scale dense and sparse optical flow fields to measure image displacements. Camera motion parameters are then determined from the optical flow fields by employing a Focus of Expansion (FOE) constrained egomotion estimation scheme. We analyze the design choices involved in the three major components of our algorithm: dense optical flow, sparse optical flow, and egomotion estimation. Brox's optical flow method [1], due to its high accuracy, was used to compare and evaluate our multi-scale dense optical flow scheme. SIFT [6] and Harris-affine [7] features, because of their wide use in tracking applications, were used to assess the accuracy of the multi-scale sparse optical flow. The FOE-constrained egomotion estimation was compared with collinear [2], image deformation [10], and image derivative [4] based egomotion estimation methods, to understand the stability of our tracking system. Two virtual colonoscopy (VC) image sequences were used in the study, since the exact camera parameters (for each frame) were known. Dense optical flow results indicated that Brox's method was superior to multi-scale dense optical flow in estimating camera rotational velocities, but the final tracking errors were comparable, viz., 6 mm vs. 8 mm after the VC camera traveled 110 mm; our approach was computationally more efficient, averaging 7.2 sec vs. 38 sec per frame. SIFT and Harris-affine features resulted in tracking errors of up to 70 mm, while our sparse optical flow error was 6 mm. The comparison among egomotion estimation algorithms showed that our FOE-constrained egomotion estimation method achieved the best balance between tracking accuracy and robustness. The comparative study demonstrated that our optical flow based colonoscopy tracking algorithm maintains good accuracy and stability for routine use in clinical practice.
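One plausible way to impose the FOE constraint (a sketch, not the paper's exact estimator): under mostly translational camera motion, every flow vector lies on a line through the FOE, so each vector contributes one linear constraint and the FOE falls out of a stacked least-squares solve.

```python
import numpy as np

def estimate_foe(points, flows):
    """Least-squares focus-of-expansion from sparse optical flow.

    points, flows: (N, 2) arrays in image coordinates.  Each flow vector
    defines a line through its image point; the constraints n_i . e = n_i . p_i
    (n_i = unit normal to the flow direction) are solved in least squares.
    """
    d = flows / np.linalg.norm(flows, axis=1, keepdims=True)
    n = np.stack([-d[:, 1], d[:, 0]], axis=1)   # unit normals to flow lines
    b = np.einsum('ij,ij->i', n, points)
    foe, *_ = np.linalg.lstsq(n, b, rcond=None)
    return foe
```

Constraining egomotion estimation to pass through the recovered FOE is what stabilizes the translation estimate against noisy individual flow vectors.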
Phantom experiments are useful and frequently used in validating algorithms or techniques in applications where it is difficult or impossible to generate accurate ground truth. In this work we present a phantom design and experiments to validate our colonoscopy tracking algorithms, which serve to keep virtual colonoscopy and optical colonoscopy images aligned (in location and orientation). We describe the construction of two phantoms, capable of moving along a straight and a curved path, respectively. The phantoms are motorized so as to move at a near-constant speed. Experiments were performed at three speeds, 10, 15, and 20 mm/sec, to simulate motion velocities during colonoscopy procedures. The average velocity error was within 3 mm/sec for both the straight and the curved phantom. Displacement error was within 7 mm over a total distance of 288 mm in the straight phantom, and less than 7 mm over 287 mm in the curved phantom. Multiple trials of each experiment were performed (and their errors averaged) to ensure repeatability.
The simultaneous use of pre-segmented CT colonography images and optical colonoscopy images during routine endoscopic procedures provides useful clinical information to the gastroenterologist. Blurry images in the video stream, caused by the endoscope touching the colon wall or a polyp, can make the tracking system fail during the procedure. The ability to recover from such failures is necessary to continually track images and goes toward building a robust tracking system; identifying similar images before and after the blurry sequence is central to this recovery. In this work, we propose a Temporal Volume Flow (TVF) based approach to search for a similar image pair before and after blurry sequences in the optical colonoscopy video. TVF employs nonlinear intensity and gradient constancy models, as well as a discontinuity-preserving smoothness constraint, to formulate an energy function; minimizing this function between two temporal volumes before and after the blurry sequence yields an estimate of TVF. A voting approach is then used to determine the image pair with the maximum number of point correspondences. The region flow algorithm [10] is applied to the selected image pair to determine camera motion.
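The voting step might be organized as below. This is a hypothetical sketch: the `costs` array, its shape, and the threshold are assumptions introduced for illustration, not quantities defined in the paper.

```python
import numpy as np

def vote_image_pair(costs, cost_thresh=0.1):
    """Pick the (pre-blur, post-blur) frame pair with the most correspondences.

    costs: hypothetical (B, A, N) array holding the TVF matching cost of
    each of N tracked points for every pairing of B pre-blur and A post-blur
    frames.  A point votes for a pair when its cost is below the threshold;
    the pair with the most votes is selected for region flow tracking.
    """
    votes = (costs < cost_thresh).sum(axis=2)
    i, j = np.unravel_index(np.argmax(votes), votes.shape)
    return i, j
```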
We applied our algorithm to three optical colonoscopy sequences. The first sequence had 235 images in the ascending colon, including 12 blurry images; the image pair selected by TVF decreased the rotation error of the tracking results obtained with the region flow algorithm. Similar results were observed for the second patient, in the descending colon, with 535 images and 24 blurry images. The third sequence contained 580 images in the descending colon and 172 blurry images. The region flow method failed in this case due to improper image pair selection; using TVF to determine the image pair allowed the system to successfully recover from the blurry sequence.
In this work, we propose new automation tools to process 2D building geometry data for effective communication and timely response to critical events in commercial buildings. Given the scale and complexity of commercial buildings, robust and visually rich tools are needed during an emergency. Our data processing pipeline consists of three major components: (1) adjacency graph construction, representing spatial relationships within a building (between hallways, offices, stairways, and elevators); (2) identification of elements involved in evacuation routes (hallways, stairways); and (3) 3D building network construction, connecting the floor elements via stairways and elevators. We have used these tools to process a cluster of five academic buildings. Our automation tools, despite some needed manual processing, show a significant advantage over fully manual processing (a few minutes vs. 2-4 hours). Designed as a client-server model, our system supports analytical capabilities to determine dynamic routing within a building under constraints (for instance, parts of the building blocked during emergencies). Visualization capabilities are provided for easy interaction with the system, on both desktop (command post) stations and mobile hand-held devices, simulating a command post-responder scenario.
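A toy version of the adjacency graph and constrained routing could look like this; the node names and topology are illustrative inventions, and `networkx` stands in for whatever graph machinery the system actually uses.

```python
import networkx as nx

# Illustrative building graph: offices connect to hallways, hallways to
# vertical elements (stairways, elevators), and ground-floor halls to exits.
G = nx.Graph()
G.add_edges_from([
    ("office_101", "hall_1"), ("hall_1", "stair_A"),
    ("hall_1", "elevator_1"), ("stair_A", "hall_G"),
    ("elevator_1", "hall_G"), ("hall_G", "exit_south"),
])

def evacuation_route(graph, start, exit_node, blocked=()):
    """Shortest route from start to exit that avoids blocked elements."""
    usable = graph.subgraph(n for n in graph if n not in set(blocked))
    return nx.shortest_path(usable, start, exit_node)

# Elevators are typically excluded during a fire; block one dynamically.
print(evacuation_route(G, "office_101", "exit_south", blocked=["elevator_1"]))
```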
Co-located optical and virtual colonoscopy images have the potential to provide important clinical information during routine colonoscopy procedures. In our earlier work, we presented an optical flow based algorithm to compute egomotion from live colonoscopy video, permitting navigation and visualization of the corresponding patient anatomy. In the original algorithm, motion parameters were estimated using the traditional least sum of squares (LS) procedure, which can be unstable in the presence of optical flow vectors with large errors. In the improved algorithm, we use the Least Median of Squares (LMS) method, a robust regression method, for motion parameter estimation. Using the LMS method, we iteratively analyze and converge toward the main distribution of the flow vectors while disregarding outliers. We show through three experiments the improvement in tracking results obtained using the LMS method in comparison to the LS estimator. The first experiment demonstrates better spatial accuracy in positioning the virtual camera in the sigmoid colon. The second and third experiments demonstrate the robustness of this estimator, resulting in longer tracked sequences: from 300 to 1310 frames in the ascending colon, and from 410 to 1316 frames in the transverse colon.
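A generic random-sampling LMS fit (the standard Rousseeuw-style formulation, offered as a sketch rather than the paper's exact iteration) illustrates why the estimator tolerates outlying flow vectors:

```python
import numpy as np

def lms_fit(A, b, n_trials=500, seed=0):
    """Least Median of Squares fit of the linear model A x = b.

    Repeatedly fit x to a random minimal subset of equations and keep the
    fit whose *median* squared residual over all equations is smallest;
    unlike least squares, up to ~50% outlying flow vectors cannot pull
    the estimate away from the main distribution.
    """
    rng = np.random.default_rng(seed)
    m = A.shape[1]                      # minimal subset size = #parameters
    best_x, best_med = None, np.inf
    for _ in range(n_trials):
        idx = rng.choice(len(b), size=m, replace=False)
        x, *_ = np.linalg.lstsq(A[idx], b[idx], rcond=None)
        med = np.median((A @ x - b) ** 2)
        if med < best_med:
            best_x, best_med = x, med
    return best_x
```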
Extraction of centerlines is useful for analyzing objects in medical images, such as the lungs, bronchi, blood vessels, and colon. Given the noise and other imaging artifacts present in medical images, it is crucial to use robust algorithms that are (1) noise tolerant, (2) computationally efficient, (3) accurate, and (4) preferably do not require an accurate segmentation and can operate directly on grayscale data. We propose a new centerline extraction method that employs a Gaussian-type probability model to build a more robust distance field. The model is computed using an integration of the image gradient field, in order to estimate boundaries of interest. Probabilities assigned to boundary voxels are then used to compute a modified distance field, and standard distance field algorithms are applied to extract the centerline. We illustrate the accuracy and robustness of our algorithm on a synthetically generated example volume and on a radiologist-supervised segmented head MRT angiography dataset with significant amounts of Gaussian noise, as well as on three publicly available medical volume datasets. A comparison with traditional distance field algorithms is also presented.