We present a method for the design and use of a digital mouse phantom for small-animal optical imaging. We map the boundary of a mouse model from magnetic resonance imaging (MRI) data through image processing algorithms and discretize the geometry with a finite element (FE) descriptor. We use a validated FE implementation of the three-dimensional (3-D) diffusion equation to model transport of near-infrared (NIR) light in the phantom, with a mesh resolution optimized for representative tissue optical properties on a computing system with 8 GB of RAM. Our simulations demonstrate that a section of the mouse near the light source is adequate for optical system design, and that the variation of light intensity on the boundary is well within typical noise levels for up to 20% variation in the optical properties and in the number of nodes used to model the boundary of the phantom. We illustrate the use of the phantom in setting goals for specific binding of targeted exogenous fluorescent contrast agents based on anatomical location by simulating a nearly tenfold change in the detectability of a 2-mm-deep target depending on its placement. The methodology described is sufficiently general to be extended to generate digital phantoms for designing clinical optical imaging systems.
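For reference, NIR light transport in tissue is commonly modeled with the steady-state diffusion approximation; the abstract does not state its equation, so the following standard form and notation are an assumption on our part:

```latex
-\nabla \cdot \big(\kappa(\mathbf{r})\,\nabla \Phi(\mathbf{r})\big)
  + \mu_a(\mathbf{r})\,\Phi(\mathbf{r}) = q_0(\mathbf{r}),
\qquad
\kappa(\mathbf{r}) = \frac{1}{3\big(\mu_a(\mathbf{r}) + \mu_s'(\mathbf{r})\big)}
```

where $\Phi$ is the photon fluence rate, $\mu_a$ and $\mu_s'$ are the absorption and reduced scattering coefficients, $\kappa$ is the diffusion coefficient, and $q_0$ is the source term. It is this equation that the FE mesh discretizes, which is why mesh resolution must be matched to the tissue optical properties.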
An exploration of techniques for developing intuitive and efficient user interfaces for virtual reality systems. This work seeks to understand which paradigms from the better-understood world of 2D user interfaces remain viable within 3D environments. To establish this, a new user interface was created that applied various well-understood principles of interface design. A user study was then performed in which it was compared with an earlier interface across a series of medical visualization tasks.
As the imaging modalities used in medicine transition to increasingly three-dimensional data, the question of how best to interact with and analyze these data becomes ever more pressing. Immersive virtual reality systems hold promise in tackling this, but how individuals learn and interact in these environments is not fully understood. Here we show methods by which user interaction in a virtual reality environment can be visualized, and how this can give us greater insight into the processes of interaction and learning in these systems. We also explore the possibility of using this method to improve the understanding and management of ergonomic issues within an interface.
3-D analysis of blood vessels from volumetric CT and MR datasets has many applications, ranging from the examination of pathologies such as aneurysm and calcification to the measurement of cross-sections for therapy planning. Segmentation of the vascular structures followed by tracking is an important processing step towards automating the 3-D vessel analysis workflow. This paper demonstrates a fast, automated algorithm for tracking major arterial structures that have been previously segmented. Our algorithm uses anatomical knowledge to identify the start and end points in the vessel structure, which allows automation. A voxel coding scheme codes every voxel in the vessel by its geodesic distance from the start point. Shortest-path-based iterative region growing extracts the vessel tracks, which are subsequently smoothed using an active contour method. The algorithm can also automatically detect the bifurcation points of major arteries. Results are shown for tracking major arteries such as the common carotid, internal carotid, vertebrals, and arteries coming off the Circle of Willis across multiple cases with various data-related and pathological challenges, from 7 CTA cases and 2 MR time-of-flight (TOF) cases.
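The voxel coding step described above can be sketched as a breadth-first search over the segmented vessel mask, labeling each voxel with its geodesic (in-vessel) distance from the start point. This is a minimal illustration, not the paper's implementation; the function name and 6-connectivity choice are our assumptions:

```python
from collections import deque

import numpy as np

def voxel_code(mask, start):
    """Assign each vessel voxel its geodesic BFS distance from `start`.

    `mask` is a boolean 3-D array of segmented vessel voxels; `start` is a
    (z, y, x) seed inside the vessel. Returns an int32 array holding -1
    outside the vessel and the hop count from the seed elsewhere.
    """
    codes = np.full(mask.shape, -1, dtype=np.int32)
    codes[start] = 0
    queue = deque([start])
    # 6-connected neighbourhood; 26-connectivity is a straightforward extension
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < mask.shape[0] and 0 <= ny < mask.shape[1]
                    and 0 <= nx < mask.shape[2]
                    and mask[nz, ny, nx] and codes[nz, ny, nx] < 0):
                codes[nz, ny, nx] = codes[z, y, x] + 1
                queue.append((nz, ny, nx))
    return codes
```

Backtracking from the end point toward strictly decreasing codes then yields a shortest in-vessel path, which is the seed for the iterative region growing the abstract mentions.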
In this paper, we present a framework for setting optimized parameter values when performing image registration with mutual information as the metric to be maximized. Our experiments detail these steps for the registration of X-ray computed tomography (CT) images with positron emission tomography (PET) images. The selection of the parameters that influence the mutual information between two images is crucial for both the accuracy and the speed of registration. These implementation issues need to be handled systematically by designing experiments over the parameters' operating ranges. The conclusions from this study are important for obtaining allowable parameter ranges for fusion software.
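The registration metric above can be illustrated with a joint-histogram estimate of mutual information; one of the parameters the abstract alludes to (the bin count) appears explicitly. This is a generic sketch, not the paper's implementation, and the function name is our own:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Estimate the mutual information between two equally sized images.

    Builds a joint intensity histogram with `bins` bins per image and
    computes MI = sum p(x,y) log( p(x,y) / (p(x) p(y)) ) in nats.
    The bin count is one of the tunable parameters that trades off
    estimation accuracy against speed.
    """
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint probability p(x, y)
    px = pxy.sum(axis=1, keepdims=True)       # marginal p(x), shape (bins, 1)
    py = pxy.sum(axis=0, keepdims=True)       # marginal p(y), shape (1, bins)
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

An optimizer varies the spatial transform between the CT and PET images and keeps the transform that maximizes this quantity; bin count, sampling rate, and interpolator are the kinds of parameters whose operating ranges the study explores.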
Radiologists perform CT angiography to examine vascular structures and associated pathologies such as aneurysms. Volume rendering exploits the volumetric capabilities of CT and provides fully interactive 3-D visualization. However, bone is an occluding structure and must be segmented out. The anatomical complexity of the head poses a major challenge for the segmentation of bone and vessel. An analysis of the head volume reveals varying spatial relationships between vessel and bone that can be separated into three sub-volumes: “proximal”, “middle”, and “distal”. The “proximal” and “distal” sub-volumes contain good spatial separation between bone and vessel (the carotid is referenced here). Bone and vessel appear contiguous in the “middle” partition, which remains the most challenging region to segment. A partition algorithm automatically identifies these partition locations so that a different segmentation method can be developed for each sub-volume. The partition locations are computed from bone, image entropy, and sinus profiles together with a rule-based method. The algorithm is validated on 21 cases (with varying volume sizes, resolutions, clinical sites, and pathologies) using visually identified ground truth. The algorithm is also computationally efficient, processing a 500+ slice volume in 6 seconds (about 0.01 seconds per slice), which makes it attractive for pre-processing large volumes. The partition algorithm is integrated into the segmentation workflow: fast, simple algorithms process the “proximal” and “distal” partitions, while complex methods are restricted to the “middle” partition. The partition-enabled segmentation has been successfully tested, and results are shown from multiple cases.
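One ingredient of the partitioning described above, the image entropy profile, can be sketched as a per-slice Shannon entropy computation; abrupt changes in this profile (combined with the bone and sinus profiles and the rule base, which are not shown) hint at the sub-volume boundaries. This is our own illustrative sketch, not the paper's algorithm:

```python
import numpy as np

def slice_entropy_profile(volume, bins=64):
    """Shannon entropy (bits) of each axial slice's intensity histogram.

    `volume` is a 3-D array ordered (slice, row, column). Low-entropy
    slices are nearly uniform; entropy shifts along the profile are one
    cue a rule-based partitioner can use to place sub-volume boundaries.
    """
    profile = []
    for s in volume:                      # iterate over axial slices
        hist, _ = np.histogram(s, bins=bins)
        p = hist / hist.sum()             # normalize to a probability mass
        p = p[p > 0]                      # drop empty bins before log
        profile.append(float(-(p * np.log2(p)).sum()))
    return np.array(profile)
```

A rule-based pass over such profiles can run in well under the 6-second budget the abstract reports, since each slice is touched once.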