Our group has recently obtained data, based upon whole-mounted step-sectioned radical prostatectomy specimens and a 3D computer-assisted prostate biopsy simulator, suggesting that an increased detection rate is possible using laterally placed biopsies. A new 10-core biopsy pattern was demonstrated to be superior to the traditional sextant biopsy. This pattern includes the traditional sextant biopsy cores plus four laterally placed biopsies in the right and left apex and mid portion of the prostate gland. The objective of this study is to confirm, in a small clinical trial, the higher prostate cancer detection rate obtained using our simulated 10-core biopsy pattern. We retrospectively reviewed 35 consecutive patients with a pathologic diagnosis of prostate cancer who were biopsied by a single urologist using the 10-core pattern; the sextant and 10-core biopsy patterns were compared with respect to prostate cancer detection rate. Of the 35 patients diagnosed with prostate cancer, 54.3 percent were diagnosed when reviewing the sextant biopsy data only. Review of the 10-core pattern revealed that an additional 45.7 percent of patients were diagnosed solely with the laterally placed biopsies. Our results suggest that biopsy protocols that use laterally placed biopsies based upon a five-region anatomical model are superior to the routinely used sextant prostate biopsy pattern.
Accurate assessment of pathological conditions in the prostate is difficult. Screening methods include palpation of the prostate gland, blood chemical testing, and diagnostic imaging. Trans-rectal Ultrasound (TRUS) is commonly used for the assessment of pathological conditions; however, TRUS is severely constrained by the relatively distal location of the imaging probe. Trans-urethral Ultrasound (TUUS) may overcome some limitations of TRUS. A TUUS catheter was used to image the prostate, rectum, bladder, ureter, neurovascular bundles, arteries, and surrounding tissue. In addition, 360-degree rotational scans were recorded for reconstruction into 3D volumes. Segmentation was challenging; however, new techniques such as active contour methods show potential. 3D visualizations, including both volume and surface rendering, were provided to clinicians off-line. On-line 3D visualization techniques are currently being developed. Potential applications of TUUS include prostate cancer diagnosis and staging as well as image-guided biopsy and therapy.
We have developed a technique that estimates the 3D orientation and position of thin objects using the information from a single projection image and a known model of the object. The 3D orientations and positions of catheters were determined by use of this technique, which iteratively aligns points in the model with their respective image positions. Studies were done to evaluate the sensitivity of the technique to errors in the model and image data. These studies included generating image and model data and adding Gaussian-distributed errors to these data, adding Gaussian-distributed pixel-value noise to x-ray images of a catheter phantom to simulate noisy fluoroscopic images, and simulating flexes, or bends, in the tip of a catheter. Results indicate that the orientation and position of a catheter of diameter 0.18 cm can be calculated by use of the technique with mean accuracies of approximately 1 degree and 0.3 cm, respectively, for errors of approximately 0.03 cm and 0.005 cm in the image and model data, respectively. Results also indicate that the technique is robust to typical levels of quantum noise in fluoroscopic images, and may be made robust to changes in the catheter's shape.
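The idea of fitting a known 3D model to a single projection can be illustrated with a much-simplified sketch. Everything here is an assumption for illustration: the focal length, the catheter shape, and the reduction to two free parameters (one rotation angle plus a depth offset) recovered by brute-force search rather than the authors' iterative point alignment.

```python
import numpy as np

F = 100.0  # assumed focal length of the projection geometry

def project(points):
    # perspective projection of 3D points (z > 0) onto the image plane
    return F * points[:, :2] / points[:, 2:3]

def reprojection_error(angle, tz, model, image_pts):
    # rotate the model about the y-axis, translate along z, project,
    # and measure the mean distance to the observed image points
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    posed = model @ R.T + np.array([0.0, 0.0, tz])
    return np.linalg.norm(project(posed) - image_pts, axis=1).mean()

def estimate_pose(model, image_pts):
    # brute-force search over the two free parameters
    best = (np.inf, 0.0, 0.0)
    for a in np.linspace(-np.pi / 4, np.pi / 4, 91):
        for tz in np.linspace(150.0, 250.0, 51):
            e = reprojection_error(a, tz, model, image_pts)
            if e < best[0]:
                best = (e, a, tz)
    return best[1], best[2]

# synthetic bent catheter and its projection under a known pose
t = np.linspace(-5.0, 5.0, 20)
model = np.stack([t, 0.5 * t ** 2, 0.3 * t], axis=1)
true_angle, true_tz = 0.2, 200.0
c, s = np.cos(true_angle), np.sin(true_angle)
R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
observed = project(model @ R.T + np.array([0.0, 0.0, true_tz]))
angle_hat, tz_hat = estimate_pose(model, observed)
```

Because the catheter model is bent, a single projection constrains both parameters; a straight model would leave the rotation about its own axis ambiguous.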
The work described here concerns multi-viewer auto-stereoscopic visualization of 3D models of anatomical structures and organs of the human body. High-quality 3D models of more than 1600 anatomical structures have been reconstructed using the Visualization Toolkit, a freely available C++ class library for 3D graphics and visualization. The 2D images used for 3D reconstruction come from the Visible Human Data Set. Auto-stereoscopic 3D image visualization is obtained using a prototype monitor developed at Philips Research Labs, UK. This special multiview 3D-LCD screen has been connected directly to an SGI workstation, where 3D reconstruction and medical imaging applications are executed. Dedicated software has been developed to implement the multiview capability. A number of static or animated simultaneous views of the same object can be seen on the 3D-LCD screen by several observers, who have a real 3D perception of the visualized scene without the use of extra media such as dedicated glasses or head-mounted displays. The software applications developed allow real-time interaction with the visualized 3D models; didactic animations and movies have been realized as well.
Conventional localization schemes for brachytherapy seed implants using biplane or stereoscopic projection radiographs can suffer from scaling distortions and poor visibility of implanted seeds, resulting in compromised source tracking and dosimetric inaccuracies. This paper proposes an alternative method for improving the visualization, and thus the localization, of radiotherapy implants by synthesizing, from as few as two radiographic projections, a 3D image free of divergence artifacts. The result is more accurate seed localization, leading to improved dosimetric accuracy. Two alternative approaches are compared. The first uses orthogonal merging. The second employs the technique of tuned-aperture computed tomography (TACT), whereby 3D reconstruction is performed by shifting and adding well-sampled projections relative to a fiducial reference system. Phantom results using nonlinear visualization methods demonstrate the applicability of localizing individual seeds for both approaches. Geometric errors are eliminated by a calibration scheme derived from the fiducial pattern, which is imaged concurrently with the subject. Both the merging and TACT approaches enhance seed localization by improving visualization of the seed distribution over biplanar radiographs. Unlike current methods, both alternatives demonstrate continuous one-to-one source tracking in 3D, although elimination of scaling artifacts requires more than two projections when using the merging method.
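The shift-and-add principle behind TACT can be sketched in an assumed 1D geometry; the source positions, detector size, and seed location below are invented, and the real method additionally calibrates the shifts from a fiducial reference system rather than from known geometry.

```python
import numpy as np

H = 100.0                                  # assumed source height above the detector
N = 200                                    # detector pixels
SOURCES = [-20.0, -12.0, 0.0, 12.0, 20.0]  # assumed lateral source positions

def project_seeds(seeds, s):
    # forward-project point seeds from a source at lateral position s
    det = np.zeros(N)
    for x0, z0 in seeds:
        x = s + H * (x0 - s) / (H - z0)    # ray/detector intersection
        det[int(round(x))] += 1.0
    return det

def tact_plane(projections, z):
    # shift-and-add: align all projections for height z, then average;
    # seeds lying in that plane add coherently, everything else blurs
    out = np.zeros(N)
    for det, s in zip(projections, SOURCES):
        out += np.roll(det, int(round(s * z / (H - z))))
    return out / len(projections)

seeds = [(80.0, 20.0)]                     # one seed at height z = 20
projections = [project_seeds(seeds, s) for s in SOURCES]
in_focus = tact_plane(projections, 20.0)   # reconstructed at the seed's plane
out_of_focus = tact_plane(projections, 40.0)
```

At the seed's true height all five projections align to a single bright pixel; at the wrong height the contributions spread out, which is exactly what makes per-plane seed localization possible.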
Stereotactic neuronavigational systems use pre-operatively acquired 3D images for procedural planning and are also employed in intraoperative navigation to help localize and resect brain lesions. Intraoperatively, multiple factors contribute to anatomic changes that limit the accuracy of navigation based solely on pre-operative images. Loss of CSF, cortical swelling, and the effect of gravity relative to the craniotomy location are some of the factors that contribute to errors in navigation.
Volume rendering is now a common tool for multi-dimensional data exploration in biology, medicine, meteorology, geology, material science, and other fields. In order to perform volume rendering, users are often forced to preprocess and segment their data. This processing step before visualization often inhibits the use of volume rendering, as it can be quite cumbersome and can also introduce undesirable artifacts. In order to promote direct volume visualization, powerful yet easy-to-use methods need to be developed. In this paper, we present an approach that offers the user data-dependent control over the focal region of the visualization. This approach enables the user to easily visualize interior structures in the dataset by controlling physically defined parameters, without performing segmentation.
This paper introduces NeuroBase, an atlas-assisted neuroimaging system. NeuroBase is a flexible, affordable, cross-platform system capable of processing multiple datasets. The design, functionality, and numerous tools of NeuroBase are presented. Two novel paradigms are introduced here: warping symmetry with respect to any reference dataset or atlas, and triplanar mosaic presentation. We also report our preliminary experience in the use of NeuroBase for various applications including neuroeducation, neuroradiology, brain mapping, stereotactic functional neurosurgery, and multi-modal visualization.
The visualization of brain vessels on the cortex helps the neurosurgeon in two ways: to avoid blood vessels when specifying the trepanation entry, and to overcome errors in the surgical navigation system due to brain shift. We compared 3D T1 MR, 3D T1 MR with gadolinium contrast, and MR venography as scanning techniques; mutual information as the registration technique; and thresholding and multi-vessel enhancement as image processing techniques. We evaluated the volume-rendered results based on their quality and correspondence with photos taken during surgery. It appears that with 3D T1 MR scans, gadolinium is required to show cortical veins. The visibility of small cortical veins is strongly enhanced by subtracting a 3D T1 MR baseline scan, which should be registered to the scan with gadolinium contrast even when the scans are made during the same session. Multi-vessel enhancement helps to clarify the view of small vessels by reducing the noise level, but strikingly does not reveal more. MR venography does show intracerebral veins with high detail but, as is, is unsuited to showing cortical veins due to the low contrast with CSF.
At the 1999 meeting of SPIE Medical Imaging, we presented results demonstrating that a software-only technique, Shell Rendering, was capable of rendering hard surfaces at rates 18 to 31 times faster than hardware-assisted, triangle-based surface rendering while maintaining comparable image quality. We noted that the framework of Shell Rendering actually encompasses both hard surface rendering of medical image data via Shell Rendering in software and two techniques of OpenGL-based hardware-assisted volume rendering using 2D and 3D texture mapping. Although our previous results demonstrated that Shell Rendering is faster than hardware-assisted surface rendering, its speed is comparable to, but slightly lower than, that of hardware-assisted volume rendering methods. Detailed timing results for various input medical image data sets as well as for various computer platforms are presented in the paper. The images produced by the various methods are presented as well for subjective quality assessment. We conclude that for medical image visualization, Shell Rendering is an efficient technique that combines aspects of surface rendering and volume rendering. Furthermore, it accomplishes this without requiring expensive hardware, producing high-quality images on PCs using an entirely software-based approach.
High-resolution 3D volumetric images obtained by today's radiologic imaging scanners are rich in detailed diagnostic information. Despite the many visualization techniques available to assess such images, some information remains challenging to uncover, such as the location of small structures. Recently, sliding thin-slab (STS) visualization was proposed to improve the visualization of interior structures. These STS techniques, like the other existing techniques, involve considerable computation, significant memory, extra processing, and dependence on user specifications. Further, other effective rendering approaches are conceivable using the general STS mechanism. We introduce two fast direct techniques for STS volume visualization. The first, a depth rendering process, produces an unobstructed, high-contrast 3D view of the information within a thin volume of image data. Results are a function of relative planar locations, not individual voxel values. Thus, rendered views accurately depict the internal properties that were initially captured as position and intensity. The second method produces a gradient-like view of the intensity changes in a thin volume. Results can effectively detect the occurrence and location of dramatic tissue variations, often not visually recognized otherwise. Both STS techniques exploit the concept of temporal coherence to form sequences of consecutive slabs, using information from previously computed slabs. This permits rapid real-time computation on a general-purpose computer. Further, these techniques require minimal processing and memory, require no pre-processing, and produce results that do not depend on user knowledge. Results show the computational efficiency and visual efficacy of these new STS techniques.
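The temporal-coherence mechanism shared by both techniques can be sketched with a mean-intensity slab. This is only the generic incremental update, not the paper's two operators (a depth rendering and a gradient-like view): instead of re-summing every slab, the running sum is updated with the entering slice while the leaving slice is dropped.

```python
import numpy as np

def sliding_slab_means(volume, thickness):
    # incremental slab computation exploiting coherence between
    # consecutive slabs: add the entering slice, subtract the leaving one
    nz = volume.shape[0]
    acc = volume[:thickness].sum(axis=0, dtype=np.float64)
    slabs = [acc / thickness]
    for k in range(thickness, nz):
        acc = acc + volume[k] - volume[k - thickness]
        slabs.append(acc / thickness)
    return np.stack(slabs)

rng = np.random.default_rng(0)
vol = rng.random((40, 16, 16))
fast = sliding_slab_means(vol, 5)
# reference: recompute every slab from scratch
direct = np.stack([vol[k:k + 5].mean(axis=0) for k in range(40 - 5 + 1)])
```

The incremental version touches two slices per slab instead of `thickness` slices, which is what makes real-time sliding feasible on a general-purpose computer.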
While the importance of proper segmentation or registration for the results of volume visualization is well known, the influence of the actual visualization steps has received only little attention so far. From a signal processing point of view, a typical volume visualization pipeline is mostly a sequence of sampling and convolution operations. In this paper, we are interested in how image quality, in terms of blurring and aliasing artifacts, depends on the choice of methods and parameters for these steps. For this purpose, signal transfer from tomographic image acquisition to ray casting visualization is analyzed in frequency space. Deterioration of a test signal along the steps of the pipeline is measured in terms of remaining useful energy and introduced aliasing energy. The effects of changing variables, such as slice thickness, interpolation method, or sampling density on the ray, are experimentally investigated. Quantitative results are related to visible effects using simulated test data. The presented method may be used to optimize the volume visualization pipeline. Vice versa, for a 3D image with known processing parameters, it may be estimated whether structures of interest will be visible, smoothed out, or hidden among aliasing artifacts.
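A minimal numeric illustration of the aliasing energy being measured (the signal and sampling rate are invented, not taken from the paper): a tone above the Nyquist limit folds back to a lower frequency, where its energy masquerades as signal.

```python
import numpy as np

fs, n = 10.0, 100                       # assumed sampling rate (Hz) and sample count
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 6.0 * t)         # 6 Hz tone, above the 5 Hz Nyquist limit

spectrum = np.abs(np.fft.rfft(x))
peak_hz = np.argmax(spectrum) * fs / n  # frequency bin where the energy lands
```

The 6 Hz component appears at 10 − 6 = 4 Hz after sampling; in the paper's terms, all of this tone's energy has become aliasing energy rather than useful energy, and no later pipeline stage can undo the fold.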
Volume rendering is a visualization technique for 3D volume data. The pixel values of a rendered image are determined by accumulating sampled values from the volume data. Usually, the product of opacity and shading values is used as a sampled value. However, the size of the volume data is usually too large to handle in real time. Therefore, a control measure, which changes the level of detail (LOD) of the rendered image, may be introduced to obtain a reasonable rendering speed. In this paper, we introduce a new criterion for controlling the LOD of the rendered image, and a new octree-based rendering method that uses this criterion efficiently. As the new criterion, the variance of the opacity and normal-vector product is adopted and used to classify volume blocks into an octree structure. In the rendering stage, normal blocks are rendered by using shear-warp factorization and single-valued blocks by using a template, while zero blocks are skipped. By proceeding in this fashion, the proposed scheme can reduce the overall rendering time. The scheme is evaluated by rendering a skull volume data set obtained from an x-ray CT system.
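The block classification step might be sketched as follows. Note the assumptions: the paper classifies octree blocks by the variance of the opacity and normal-vector product, whereas this flat-block sketch classifies on raw voxel values with an invented threshold.

```python
import numpy as np

def classify_blocks(vol, b=8, eps=1e-6):
    # classify b*b*b blocks by a variance criterion: empty blocks are
    # skipped, near-constant blocks can be rendered from a single value
    # (a template in the paper), the rest need full rendering
    labels = {}
    nz, ny, nx = vol.shape
    for z in range(0, nz, b):
        for y in range(0, ny, b):
            for x in range(0, nx, b):
                blk = vol[z:z + b, y:y + b, x:x + b]
                if not blk.any():
                    labels[(z, y, x)] = "zero"
                elif blk.var() < eps:
                    labels[(z, y, x)] = "single"
                else:
                    labels[(z, y, x)] = "normal"
    return labels

vol = np.zeros((16, 16, 16))
vol[0:8, 0:8, 8:16] = 0.5                                  # a constant region
vol[8:16] = np.random.default_rng(1).random((8, 16, 16))   # a detailed region
labels = classify_blocks(vol)
```

Only the "normal" blocks pay the full shear-warp cost; the speedup comes from how much of a medical volume is empty background or homogeneous tissue.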
To reveal the spatial pattern of localized prostate cancer distribution, a 3D statistical volumetric model, showing the probability map of prostate cancer distribution together with the anatomical structure of the prostate, has been developed from 90 digitally-imaged surgical specimens. Through an enhanced virtual environment with various visualization modes, this master model permits for the first time an accurate characterization and understanding of prostate cancer distribution patterns. The construction of the statistical volumetric model is characterized by mapping all of the individual models onto a generic prostate site model, in which a self-organizing scheme is used to decompose a group of contours representing multifold tumors into localized tumor elements. The next crucial step in creating the master model is the development of an accurate multi-object and non-rigid registration/warping scheme incorporating the various variations among these individual models in true 3D. This is achieved with a multi-object-based principal-axis alignment followed by an affine transform, further fine-tuned by a thin-plate spline interpolation driven by surface-based deformable warping dynamics. Based on the accurately mapped tumor distribution, a standard finite normal mixture is used to model the cancer volumetric distribution statistics, whose parameters are estimated using both the K-means and expectation-maximization algorithms under information-theoretic criteria. Given the desired number of tissue samplings, the prostate needle biopsy site selection is optimized through a probabilistic self-organizing map, thus achieving a maximum likelihood of cancer detection. We describe the details of our theory and methodology, and report our pilot results and evaluation of the effectiveness of the algorithm in characterizing prostate cancer distributions and optimizing needle biopsy techniques.
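The finite-normal-mixture estimation can be sketched with plain EM in one dimension on synthetic data. This is only the core of the procedure: the paper combines K-means with EM and selects the number of components under information-theoretic criteria, none of which is reproduced here (quantile initialization and a fixed k = 2 are stand-ins).

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=100):
    # plain EM for a 1D Gaussian mixture; means initialized from quantiles
    mu = np.quantile(x, np.linspace(0.2, 0.8, k))
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each sample
        p = w * np.exp(-((x[:, None] - mu) ** 2) / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        nk = r.sum(axis=0)
        w, mu = nk / len(x), (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(6.0, 1.0, 500)])
w, mu, var = em_gmm_1d(x)
```

In the model proper, the same machinery runs on 3D tumor coordinates, and the fitted mixture is what the biopsy-site optimization samples from.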
A number of surgical procedures are planned and executed based on medical images. Typically, x-ray computed tomography (CT) and magnetic resonance (MR) images are acquired preoperatively for diagnosis and surgical planning. In the operating room, execution of the surgical plan becomes feasible through registration between the preoperative images and the surgical space in which the patient's anatomy lies. In this paper, we present a new automatic algorithm in which we use 2D B-mode ultrasound (US) images to register the preoperative MR image coordinate system with the surgical space, which in our experiments is represented by the reference coordinate system of a DC magnetic position sensor. The position sensor is also used for tracking the position and orientation of the US images. Furthermore, we simulated patient anatomy by using custom-built phantoms. Our registration algorithm is a hybrid between fiducial-based and image-based registration algorithms. Initially, we perform a fiducial-based rigid-body registration between MR and position sensor space. Then, by changing various parameters of the rigid-body fiducial-based transformation, we produce an MR-sensor misregistration in order to simulate potential movements of the skin fiducials and/or the organs. The perturbed transformation serves as the initial estimate for the image-based registration algorithm, which uses normalized mutual information as a similarity measure, where one or more US images of the phantom are automatically matched with the MR image data set. By using the fiducial-based registration as the gold standard, we could compute the accuracy of the image-based registration algorithm in registering the MR and sensor spaces. The registration error varied depending on the number of 2D US images used for registration. A good compromise between accuracy and computation time was the use of 3 US slices. In this case, the registration error had a mean value of 1.88 mm and a standard deviation of 0.42 mm, whereas the required computation time was approximately 52 seconds. Subsampling the US data by a factor of 4 × 4 and reducing the number of histogram bins to 128 reduced the computation time to approximately 6 seconds with a small increase in the registration error.
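The normalized mutual information similarity measure can be computed from a joint intensity histogram as below. The images here are synthetic stand-ins, not US/MR data; the bin count is the same knob the abstract tunes (reducing it to 128, together with 4 × 4 subsampling, is what cuts the computation time).

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    # NMI(A, B) = (H(A) + H(B)) / H(A, B), estimated from a joint
    # intensity histogram; equals 2.0 for perfectly dependent images
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    hxy = -np.sum(pxy[pxy > 0] * np.log(pxy[pxy > 0]))
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    return (hx + hy) / hxy

rng = np.random.default_rng(0)
us = rng.random((64, 64))          # stand-in for a tracked US slice
mr_aligned = 0.5 * us + 0.1        # perfectly correlated "MR" image
mr_random = rng.random((64, 64))   # unrelated image

nmi_aligned = normalized_mutual_information(us, mr_aligned)
nmi_random = normalized_mutual_information(us, mr_random)
```

An optimizer over the rigid-body parameters would maximize this quantity; because NMI only requires a consistent intensity mapping, it tolerates the very different contrast mechanisms of US and MR.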
We have developed a software tool for interactive visualization and 4D segmentation of multiple sets of images. The segmentation process uses a predefined anatomical template of the structure of interest represented as a polygonal mesh in 3D. This can be obtained from a library of normal or diseased anatomies, or if available, a surface generated from the patient's previous studies can be used. The user then deforms the template so that it correctly delineates the region of interest in the underlying images. These deformations can be constrained to maintain spatial and temporal smoothness as is expected in the underlying anatomy. A unique feature of this analysis package is that multiple non-coplanar image sets can be used concurrently to generate accurate contours. This feature is particularly useful in contouring long axis and short axis images of the heart simultaneously. By generating a reliable segmentation from a substrate of images in space and time, we can automatically contour the structure in the remaining images through appropriate interpolation, and thereby significantly reduce the total segmentation time.
Registration of mammograms is frequently used in computer-aided detection algorithms and has been considered for use in the analysis of temporal sequences of screening exams. Previous image registration methods, employing affine transformations or Procrustean transforms based on a small number of fiducial points, have not proven to be entirely adequate. A significantly improved method has been developed to facilitate the display and analysis of temporal sequences of mammograms by optimizing image registration and grayscales. This involves a fully automatic nonlinear geometric transformation, which puts corresponding skin lines, nipples, and chest walls in registration and locally corrects pixel values based on the Jacobian of the transformation. Linear regression is applied between pairs of corresponding pixels after registration, and the derived regression equation is used to equalize grayscales. Although the geometric transformation is not able to correct interior tissue patterns for gross differences in the angle of view or differences resulting from skewing of the breast tissue parallel to the detector, the sequences studied have been sufficiently consistent that typically only about 30 percent of images in a sequence are considered to be seriously incompatible with the remaining images. These methods clearly demonstrate a significant benefit for the display and analysis of sequences of digital mammograms.
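The grayscale equalization step can be sketched directly: fit a linear regression between corresponding pixels of two already-registered images, then remap one onto the other's grayscale. The images below are synthetic, and in the real pipeline this runs after the nonlinear geometric registration.

```python
import numpy as np

def equalize_grayscale(ref, img):
    # least-squares fit ref ≈ a * img + b over corresponding pixels,
    # then map img onto ref's grayscale with the fitted line
    A = np.stack([img.ravel(), np.ones(img.size)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, ref.ravel(), rcond=None)
    return a * img + b

rng = np.random.default_rng(0)
current = rng.random((32, 32))   # current exam (after registration)
prior = 2.0 * current + 0.25     # prior exam with a different grayscale
equalized = equalize_grayscale(current, prior)
```

With the grayscales equalized, a subtraction or side-by-side display of the exam pair highlights tissue change rather than acquisition differences.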
We present an image database system with the capability to locate specified object-level merge and separation events in a sequence of time-lapse images. Specifically, the objects of interest are live cells in phase contrast images acquired by scanning cytometry. The system is named TERSIS, and it resides on a workstation accessing time-lapse images on CD-ROM. The cell objects are segmented, and the resulting data are processed to extract a time series and its time-derivative series for each spatial feature. Cell objects are tracked through the image sequence by applying similarity metrics to the cell object feature vectors, and cell merge and separation events are located using global image statistics. Multiple hypotheses are generated and scored to determine the participating cell objects in merge/separation events. The cell association and time-varying spatial data are stored in a database. A graphical user interface provides the user with tools to specify queries for specific cellular states and events for recall and display. Primary limitations include the need for an automatic front-end segmenter and increased cell tracking volume. The design of this system is extensible to other object types and forms of sequential image input, including video.
Prostate brachytherapy is an effective treatment for localized prostate cancer. It has recently been shown that, prior to surgery, a transrectal ultrasound (TRUS) study of the prostate and pubic arch can effectively assess pubic arch interference (PAI), a major stumbling block for the brachytherapy procedure. That study involved the use of digital images acquired directly from an ultrasound (US) machine. However, not all US machines allow the saving or transfer of digital image data. To allow PAI assessment regardless of US platform, we need to digitize the images from the US video output when there is no direct digital saving/transfer capability. During video digitization, the internal digital images go through a D/A and A/D conversion process, introducing quantization errors and other noise. To show that PAI assessment is still viable on the digitized images, we evaluated and refined a PAI assessment algorithm to predict the location of the pubic arch on both digital images and those captured after digitization. The predicted arch locations were statistically compared to each other. Based on the results from 33 patients, we conclude that the ability to assess PAI from TRUS images is not affected by video digitization.
Modeling of moving anatomic structures is complicated by the complexity of motion intrinsic and extrinsic to the structures. However when motion is cyclical, such as in heart, effective dynamic modeling can be approached using modern fast imaging techniques, which provide 3D structural data. Data may be acquired as a sequence of 3D volume images throughout the cardiac cycle. To model the intricate non- linear motion of the heart, we created a physics-based surface model which can realistically deform between successive time points in the cardiac cycle, yielding a dynamic 4D model of cardiac motion. Sequences of fifteen 3D volume images of intact canine beating hearts were acquired during compete cardiac cycles using the Dynamic Spatial Reconstructor and the Electron Beam CT. The chambers of the heart were segmented at successive time points, typically at 1/15-second intervals. The left ventricle of the first item point was reconstructed as an initial triangular mesh. A mass-spring physics-based deformable model, which can expand and shrink with local contraction and stretching forces distributed in an anatomically accurate simulation of cardiac motion, was applied to the initial mesh and allowed the initial mesh to deform to fit the left ventricle in successive time increments of the sequence. The resultant 4D model can be interactively transformed and displayed with associated regional electrical activity mapped onto the anatomic surfaces, producing a 5D mode, which faithfully exhibits regional cardiac contraction and relaxation patterns over the entire heart. For acquisition systems that may provide only limited 4D data, the model can provide interpolated anatomic shape between time points. This physics-based deformable model accurately represents dynamic cardiac structural changes throughout the cardiac cycle. 
Such models provide the framework for minimizing the number of time points required to usefully depict regional motion of the myocardium and allow quantitative assessment of regional myocardial dynamics. The electrical activation mapping provides spatial and temporal correlation within the cardiac cycle. In procedures such as intra-cardiac catheter ablation, visualization of the dynamic model can be used to accurately localize the foci of myocardial arrhythmias and guide positioning of catheters for effective ablation.
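As a minimal illustration of the mass-spring mechanism (not the authors' actual model, whose spring topology and anatomically distributed forces are far richer), the sketch below relaxes each surface node of a toy ventricular cross-section toward a time-varying rest radius with damped Hooke springs; shrinking the rest radius between time points stands in for contraction.

```python
def contract(radii, rest_radius, k=30.0, damping=0.8, dt=0.02, steps=200):
    """Damped explicit-Euler mass-spring relaxation of surface-node radii
    toward a rest radius; shrinking `rest_radius` between time points is a
    toy stand-in for the contraction forces in the paper's model."""
    vel = [0.0] * len(radii)
    r = list(radii)
    for _ in range(steps):
        for i in range(len(r)):
            f = -k * (r[i] - rest_radius)         # radial Hooke force
            vel[i] = damping * (vel[i] + f * dt)  # damped velocity update
            r[i] += vel[i] * dt
    return r
```

Calling `contract` repeatedly with a rest radius sampled from successive segmented time points would deform the mesh through the cycle, which is the interpolation role the model plays between acquired volumes.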
Spine biopsy is a useful tool to detect and verify spine tumors. We have developed a spine-biopsy simulation system which provides realistic visual and force feedback to the user. In this paper, we present an algorithm to combine a volume-rendered organ image with a surface-rendered needle image. Volume rendering requires a large amount of computation and memory. The needle moves quickly, so its image must be updated frequently, whereas the viewing direction changes slowly, so the volume-rendered organ image need not be regenerated for every frame. The fast-moving medical instruments can therefore be visualized with surface rendering rather than volume rendering; surface rendering is an efficient method to render a simple structure with lower computation and memory requirements. The volume-rendered and surface-rendered images are then combined quickly while preserving image quality.
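A sketch of the combination step under simple assumptions (grayscale values in [0, 1], a per-pixel alpha mask produced by the needle's surface renderer, no depth test): the cached volume-rendered organ image is reused every frame and only the needle layer is re-composited.

```python
def composite(volume_img, needle_img, alpha):
    """Alpha-over compositing of a freshly surface-rendered needle layer
    onto a cached volume-rendered background (2D lists, values in [0, 1])."""
    return [[a * fg + (1.0 - a) * bg
             for bg, fg, a in zip(bg_row, fg_row, a_row)]
            for bg_row, fg_row, a_row in zip(volume_img, needle_img, alpha)]
```

Because only this per-pixel blend runs per frame, needle motion can be displayed at interactive rates even though regenerating the volume-rendered background is expensive.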
When the focus of epilepsy is so deep that skin EEG electrodes do not give enough accuracy in calculating the position of the focus, it may be decided to surgically implant EEG electrodes inside the patient's head. To localize these electrodes, a high-resolution CT scan is made of the patient's head. As manual tracking of the electrodes slice by slice is confusing and error-prone, a virtual reality environment has been created to give the radiologist a view from inside the patient's skull. With the help of a high-quality but fast volume renderer, the radiologist can get an overview of electrode grids and can interactively characterize the grid contacts of interest. For the localization of the contacts, we compared manual placement, center of gravity and Gaussian template matching. It appeared that the grid contacts could be characterized with an accuracy of 0.5 mm, and that manual positioning and template matching with a Gaussian of flexible size clearly outperformed center of gravity and template matching with an isotropic Gaussian. The reason is that although the contacts are clearly visible in a CT, their small dimensions and proximity to skull and metal wires make them more difficult to characterize fully automatically than commonly expected.
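The template-matching variant can be sketched as follows; this is a hypothetical 2D, integer-resolution version for illustration, whereas the paper's method works in 3D with flexible template sizes and sub-voxel accuracy.

```python
import math

def gaussian_template(size, sigma):
    """Isotropic 2D Gaussian template centred in a size x size window."""
    c = (size - 1) / 2.0
    return [[math.exp(-((x - c) ** 2 + (y - c) ** 2) / (2 * sigma ** 2))
             for x in range(size)] for y in range(size)]

def locate(image, template):
    """Slide the template over the image and return the centre (row, col)
    of the window with the highest correlation score."""
    th, tw = len(template), len(template[0])
    best, best_rc = float("-inf"), None
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            score = sum(image[r + i][c + j] * template[i][j]
                        for i in range(th) for j in range(tw))
            if score > best:
                best, best_rc = score, (r + th // 2, c + tw // 2)
    return best_rc
```

An anisotropic or size-adaptive Gaussian, as favored in the study, would simply use different sigmas per axis when building the template.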
The use of computers for information processing has become a key component of most businesses. The practice of medicine has been notably slow to adopt and integrate these new technologies, and is still largely a 'pen and paper' process.
Endosteal implants facilitate obturator prosthesis fixation in tumor patients after maxillectomy. Previous clinical studies have shown, however, that survival of implants placed into available bone after maxillectomy is generally poor. Implants positioned optimally in residual zygomatic bone provide superior stability from a biomechanical point of view as well as improved survival. In a pilot study, we have assessed the precision of VISIT, a surgical navigation system developed for research purposes at our institution. VISIT is based on the AVW library and a number of in-house developed algorithms for communication with an optical tracker and patient-to-CT registration. The final platform-independent application was assembled within 6 man-months using ANSI C and Tcl/Tk. Five cadaver specimens underwent hemimaxillectomy. The cadaver head was matched to a preoperative high-resolution CT by using implanted surgical microscrews as fiducial markers. The position of a surgical drill relative to the cadaver head was determined with an optical tracking system. Implants were placed into the zygomatic arch where maximum bone volume was available. The results were assessed using tests for allocation accuracy and postoperative CT scans of the cadaver specimens. The average allocation accuracy of landmarks on the bony skull was 0.6 +/- 0.3 mm, determined with a 5-degree-of-freedom pointer probe. The allocation accuracy of the tip of the implant burr was 1.7 +/- 0.4 mm. The accuracy of the implant position compared to the planned position was 1.5 +/- 1.1 mm. Eight out of ten implants were inserted with maximum contact to surrounding bone; two implants were located unfavorably. Reliable placement of implants in this region is difficult to achieve, and the technique described in this paper may be very helpful in the management of patients after maxillary resection without sufficient retention for obturator prostheses.
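Marker-based patient-to-CT registration of this kind has a closed-form least-squares solution. The planar analogue below is a hypothetical simplification for illustration (the 3D problem is normally solved with an SVD): it fits a rotation and translation to fiducial pairs.

```python
import math

def rigid_register_2d(src, dst):
    """Least-squares 2D rigid fit (rotation theta, translation t) mapping
    fiducial points `src` onto `dst`: a planar analogue of marker-based
    patient-to-CT registration."""
    n = len(src)
    csx = sum(p[0] for p in src) / n
    csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n
    cdy = sum(p[1] for p in dst) / n
    dot = cross = 0.0
    for (x, y), (u, v) in zip(src, dst):
        ax, ay = x - csx, y - csy   # centred source point
        bx, by = u - cdx, v - cdy   # centred target point
        dot += ax * bx + ay * by
        cross += ax * by - ay * bx
    theta = math.atan2(cross, dot)
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)
    ty = cdy - (s * csx + c * csy)
    return theta, (tx, ty)
```

The residual distances between transformed source markers and their targets correspond to the allocation-accuracy figures reported in the study.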
Visual servoing with CCD cameras is well established in the field of industrial robotics. This paper describes one of the first medical implementations of uncalibrated visual servoing; to our knowledge, this is the first time that visual servoing has been done using x-ray fluoroscopy. We present a new image-based approach for semi-automatic guidance of a needle or surgical tool during percutaneous procedures, based on a series of granted and pending US patent applications. It is a simple and accurate method which requires no prior calibration or registration; therefore, no additional sensors, stereotactic frame or calibration phantom are needed. Our technique provides accurate 3D alignment of the tool with respect to an anatomic target and estimates the required insertion depth. We implemented and verified this method with three different medical robots at the Computer Integrated Surgery (CIS) Lab at Johns Hopkins University. First tests were performed using a CCD camera and a mobile uniplanar x-ray fluoroscope as imaging modalities. We used small metal balls 4 mm in diameter as target points, placed 60 to 70 mm deep inside a test phantom. Our method led to correct insertions with a mean deviation of 0.20 mm with the CCD camera and about 1.5 mm in a clinical setting with an old x-ray imaging system whose images were not of the best quality. These promising results present this method as a serious alternative to other needle placement techniques, which require cumbersome and time-consuming calibration procedures.
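The core idea of uncalibrated servoing can be shown in miniature: estimate the image Jacobian of the unknown robot-to-image mapping by finite differences, then Newton-step the actuator toward the image-space target. The 2D sketch below is illustrative only and is not the authors' algorithm.

```python
def servo(observe, target, q, eps=1e-3, tol=1e-6, max_iter=50):
    """Uncalibrated image-based servoing sketch: estimate the 2x2 image
    Jacobian of the unknown actuator-to-image map by finite differences,
    then Newton-step the actuator toward the image-space target."""
    for _ in range(max_iter):
        p = observe(q)
        ex, ey = target[0] - p[0], target[1] - p[1]
        if ex * ex + ey * ey < tol * tol:
            break
        px = observe((q[0] + eps, q[1]))   # probe actuator axis 0
        py = observe((q[0], q[1] + eps))   # probe actuator axis 1
        j11, j21 = (px[0] - p[0]) / eps, (px[1] - p[1]) / eps
        j12, j22 = (py[0] - p[0]) / eps, (py[1] - p[1]) / eps
        det = j11 * j22 - j12 * j21
        q = (q[0] + (j22 * ex - j12 * ey) / det,
             q[1] + (j11 * ey - j21 * ex) / det)
    return q
```

Because the Jacobian is measured rather than derived from camera parameters, no calibration phantom or registration step is needed, which is the property the paper exploits with the fluoroscope.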
Ryan Andrew Beasley, James D. Stefansic, Jeannette L. Herring, W. Andrew Bass, Alan J. Herline, William C. Chapman, Benoit M. Dawant, Robert L. Galloway Jr.
In interactive, image-guided surgery, current physical space position in the operating room is displayed on various sets of medical images used for surgical navigation. One useful image display format for image-guided hepatic surgery is liver surface renderings. Deep-seated tumors within the liver can be projected onto the surface of these renderings, providing pertinent information concerning the location and size of metastatic liver tumors. Our group has developed techniques to create hepatic surface renderings. An independently implemented variation of the marching cubes algorithm is used on segmented livers to create a triangulated surface, which is displayed using OpenGL, a 3D graphics and modeling software library. Tumors are segmented separately from the liver so that their colors differ from that of the liver surface. The liver is then rendered slightly transparent so that the tumors can be seen within it, aiding surgeons in preoperative planning. The graphical software is also bundled into a dynamic link library (DLL) and slaved to ORION, our Windows NT-based image-guided surgical system. We have tested our graphics DLL on a liver phantom embedded with 'tumors'. A surface-based registration algorithm was used to map current surgical position onto a transparent phantom rendering that indicates tumor location. The rendering view is updated as surgical position changes. For minimally invasive procedures, we will use the direct linear transformation and the same surface-based registration technique to map rendered tumors directly onto an endoscopic image. This will be especially useful in localizing deep-seated tumors for ablation and resection procedures.
A major limitation of the use of endoscopes in minimally invasive surgery is the lack of relative context between the endoscope and its surroundings. The purpose of this work is to map endoscopic images to surfaces obtained from 3D preoperative MR or CT data, for assistance in surgical planning and guidance. To test our methods, we acquired pre-operative CT images of a standard brain phantom from which object surfaces were extracted. Endoscopic images were acquired using a neuro-endoscope tracked with an optical tracking system, and the optical properties of the endoscope were characterized using a simple calibration procedure. Registration of the phantom and CT images was accomplished using markers that could be identified both on the physical object and in the pre-operative images. The endoscopic images were rectified for radial lens distortion and then mapped onto the extracted surfaces via a ray-traced texture-mapping algorithm which explicitly accounts for surface obliquity. The optical tracker has an accuracy of about 0.3 mm, which allows the endoscope tip to be localized to within mm. The mapping operation allows the endoscopic images to be effectively 'painted' onto the surfaces as they are acquired. Panoramic and stereoscopic visualization and navigation of the painted surfaces may then be performed from arbitrary orientations, not necessarily those from which the original endoscopic views were acquired.
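Rectification under the common single-coefficient radial model can be sketched as follows; the specific model is an assumption, since the paper does not state which distortion model its calibration uses. The forward model x_d = x_u (1 + k1 r_u^2) is inverted by fixed-point iteration.

```python
def undistort(xd, yd, k1, cx=0.0, cy=0.0, iters=10):
    """Invert the single-coefficient radial distortion model
    x_d = x_u * (1 + k1 * r_u^2) by fixed-point iteration.
    Coordinates are taken relative to the optical centre (cx, cy)."""
    x, y = xd - cx, yd - cy
    xu, yu = x, y                         # initial guess: no distortion
    for _ in range(iters):
        r2 = xu * xu + yu * yu
        xu, yu = x / (1 + k1 * r2), y / (1 + k1 * r2)
    return xu + cx, yu + cy
```

For the small distortions typical of rod-lens endoscopes the iteration converges in a handful of steps, so it can run per-pixel when building the rectified texture.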
Accurate placement and expansion of coronary stents is hindered by the fact that most stents are only slightly radiopaque, and hence difficult to see in a typical coronary x-ray. We propose a new technique for improved image guidance of multiple coronary stent deployment using layer decomposition of cine x-ray images of stented coronary arteries. Layer decomposition models the cone-beam x-ray projections through the chest as a set of superposed layers moving with translation, rotation, and scaling. Radiopaque markers affixed to the guidewire or delivery balloon provide a trackable feature so that the correct vessel motion can be measured for layer decomposition. In addition to the time-averaged layer image, we also derive a background-subtracted image sequence which removes moving background structures. Layer decomposition of contrast-free vessels can be used to guide placement of multiple stents and to assess uniformity of stent expansion. Layer decomposition of contrast-filled vessels can be used to measure residual stenosis to determine the adequacy of stent expansion. We demonstrate that layer decomposition of a clinical cine x-ray image sequence greatly improves the visibility of a previously deployed stent. We show that layer decomposition of contrast-filled vessels removes background structures and reduces noise.
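A toy version of the time-averaged layer computation (integer translations only; the paper's motion model also includes rotation and scaling): each frame is aligned by the vessel motion tracked from the markers and then averaged, so structures moving with the vessel reinforce while background structures blur out.

```python
def layer_average(frames, shifts):
    """Motion-compensated temporal averaging: undo the tracked (dr, dc)
    vessel-layer shift of each frame, then average the aligned frames.
    `frames` are 2D lists of pixel values; `shifts` give one (dr, dc)
    per frame."""
    h, w = len(frames[0]), len(frames[0][0])
    acc = [[0.0] * w for _ in range(h)]
    cnt = [[0] * w for _ in range(h)]
    for frame, (dr, dc) in zip(frames, shifts):
        for r in range(h):
            for c in range(w):
                rr, cc = r - dr, c - dc   # undo the layer motion
                if 0 <= rr < h and 0 <= cc < w:
                    acc[rr][cc] += frame[r][c]
                    cnt[rr][cc] += 1
    return [[acc[r][c] / cnt[r][c] if cnt[r][c] else 0.0
             for c in range(w)] for r in range(h)]
```

Subtracting the re-shifted layer image from each original frame gives a background-only estimate, which is the basis of the background-subtracted sequence described above.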
Determining root canal length is a crucial step in the success of root canal treatment. Root canal length is commonly estimated from pre-operative intraoral radiography; the 2D depiction of a 3D object is the primary source of error in this approach. Techniques based on impedance measurement are more accurate than radiographic approaches, but do not offer a method for depicting the shape of the canal. In this study, we investigated a stereotactic approach for modeling and measurement of the root canal of human dentition. A weak perspective model approximated the projection geometry. As a first step, a series of computer-simulated objects was used to test the accuracy of this model. Then, to assess the clinical viability of such an approach, endodontic files inserted in root canal phantoms were fixed on an adjustable platform between a radiographic cone and an image receptor. Parameters of the projection matrix were computed based on the relative positions of the image receptor, focal spot, and test objects. Rotating the specimen platform from 0 to 90 degrees at 5-degree intervals set the relative angulations for stereo images. The root canal is defined as the intersection of the two surfaces defined by each projection. Computation of error for length measurement indicates that for angulations greater than 40 degrees the error is within clinically acceptable ranges.
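Under a weak perspective model with unit magnification and rotation about the vertical axis, the geometry is simple enough to write down directly; this is an idealized sketch, not the full projection-matrix computation used in the study. A point at (X, Z) images at x1 = X in the first view and, after rotating the specimen by theta, at x2 = X cos(theta) + Z sin(theta).

```python
import math

def depth_from_rotation(x1, x2, theta):
    """Recover (X, Z) of a point from its horizontal image coordinates in
    two weak-perspective views separated by a specimen rotation `theta`
    about the vertical axis (unit magnification assumed)."""
    X = x1
    Z = (x2 - X * math.cos(theta)) / math.sin(theta)
    return X, Z
```

The sin(theta) denominator shows why small angulations amplify measurement noise, consistent with the finding that errors become clinically acceptable only above roughly 40 degrees.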
In this paper, we evaluate our ability to achieve consistent image presentation across a wide range of output devices, focusing on digital x-ray radiography for chest applications and, in particular, on dry versus wet hardcopy printers. We review the expected theoretical variability under the DICOM grayscale standard display function (GSDF), which maps DICOM presentation values to the luminance values perceived by a human observer. We present our methodology for calibrating devices, evaluated on sixteen printers. Seven devices were selected for a human observer study to determine whether there are perceptible differences in the presentation of a given image, focusing on differences between wet and dry processes. It was found that wet printers were preferred; however, there may be other logistical and practical reasons why dry printers may be used.
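The GSDF itself is a published rational polynomial in ln(j), where j is the just-noticeable-difference index (1..1023). The sketch below uses the coefficients as published in DICOM PS3.14; a calibration maps presentation values to j indices and then to the target luminances this function returns.

```python
import math

# DICOM PS3.14 GSDF coefficients: log10 luminance as a rational
# polynomial in u = ln(j), j = JND index in 1..1023
A = -1.3011877;    B = -2.5840191e-2; C = 8.0242636e-2
D = -1.0320229e-1; E = 1.3646699e-1;  F = 2.8745620e-2
G = -2.5468404e-2; H = -3.1978977e-3; K = 1.2992634e-4
M = 1.3635334e-3

def gsdf_luminance(j):
    """Target luminance in cd/m^2 for JND index j per the DICOM
    grayscale standard display function."""
    u = math.log(j)
    num = A + C * u + E * u ** 2 + G * u ** 3 + M * u ** 4
    den = 1 + B * u + D * u ** 2 + F * u ** 3 + H * u ** 4 + K * u ** 5
    return 10.0 ** (num / den)
```

Two devices calibrated to place their presentation values on this curve should differ only by their luminance range, which is what makes the cross-printer comparison in the study meaningful.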
Due to the structural complexity of bone, it is difficult to diagnose and plan treatment for injuries and diseases of the bones. In this paper, we describe the design and implementation of a telediagnosis system for orthopedic deformity analysis based on 3D medical imaging. In order to define the interosseous relationships of each bone and to evaluate a deformity noninvasively, the system produces volumetric images by spatially reconstructing the planar images and provides deformity analysis by measuring distances, areas, volumes and angles among the bones. The reconstructed volumetric images can be freely manipulated (translated, scaled, rotated, and so on) to simulate surgical operations. Our system integrates three main components: a server, clients and a communication subsystem. It comprises three main functional modules: the information control manager for event and message processing between client and server, the surgical simulation manager for visualization and manipulation of individual bones, and the medical database manager for patient information. The system also provides a user-friendly graphical user interface and supports simultaneous use by multiple users.
Imaging plays a vital role in the diagnosis and recording of ophthalmic disease and pathology. Of particular interest to ophthalmologists is disease progression. Using conventional viewing techniques this is often difficult to determine. This paper discusses and demonstrates some simple Internet tools that can be used to aid in the dynamic visualization of changes in photographs of the retina and cornea.
We introduce a radiologic tele-consultation support system based on an inter-hospital network. The system consists of an image database system, a super-high-definition (SHD) imaging system, a video conferencing system and a high-speed network. The SHD imaging system displays 2048 (H) X 2048 (V) X 8 bit radiological images with sufficient image quality to allow diagnosis. The network connects six facilities, including Keio University Hospital at 135 Mbps and Seiransou Hospital (B) at 6 Mbps, to the database in the NTT Research and Development Center. The system was developed in three stages. It was designed not just for consultation but also for tele-medicine/tele-radiology; doctors in different hospitals can control their systems independently during a consultation. In the first-stage system, tele-consultation was suspended until the completion of image transmission at both sites, since image transmission occurred only when requested. The second stage added an image pre-loading function in order to eliminate the time lag of image transmission. Pre-loading the images was effective but exposed some shortcomings in terms of collaboration. Finally, we implemented several functions for collaboration, such as synchronization of image display and pointer indications on the image. The final system fulfills all requirements, and consultations proceed smoothly.
In this paper, we describe the implementation and performance of a system for compression of ultrasound sequences based on 2D and 3D subband coding of the pre-scan-converted data and scan conversion. The system is designed for encoding and decoding within ultrasound machines as well as decoding on an external computer. Performance of the encoding and decoding operations and the effect of compression on the quality of the ultrasound sequences are quantitatively evaluated.
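As a minimal sketch of subband coding's building block (the abstract does not specify the filter bank, so a Haar pair stands in): one analysis level splits the signal into low (average) and high (difference) bands and the synthesis step reconstructs it exactly; applying the split along each axis yields the 2D and 3D decompositions.

```python
def haar_1d(signal):
    """One level of the 1D Haar analysis filter bank: per-pair average
    (low band) and half-difference (high band). Even length assumed."""
    low = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    high = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return low, high

def haar_1d_inverse(low, high):
    """Exact synthesis: recover each sample pair from its (low, high)."""
    out = []
    for l, h in zip(low, high):
        out += [l + h, l - h]
    return out
```

Compression comes from quantizing the high bands, which are near zero in the smooth regions of pre-scan-converted ultrasound data.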
Low-cost image delivery is needed in modern networked hospitals. If a hospital has hundreds of client workstations, the cost of the client systems is a major concern, and a Web-based system is the most cost-effective solution. A Web browser alone, however, cannot display medical images with the necessary image processing, such as lookup-table transformations. We developed a Web-based medical image display system using a standard Web browser and on-demand server-side image processing. All images displayed on a Web page are generated on demand from DICOM files on a server. User interaction on the Web page is handled by a client-side scripting technology such as JavaScript. This combination gives the system the look and feel of an imaging workstation, in both functionality and speed. Real-time update of images while tracking mouse motion is achieved in the browser without any client-side image processing, which would otherwise require plug-in technology such as Java applets or ActiveX. We tested the performance of the system in three cases: a single client, a small number of clients on a fast network, and a large number of clients on a normal-speed network. The results show that the communication overhead is very slight and that the system scales well with the number of clients.
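A representative piece of such server-side processing is the window/level lookup applied before an image is sent to the browser. The sketch below is a simplified version (the DICOM VOI LUT formula differs in half-value details), mapping raw, e.g. 12-bit, values to 8-bit display values:

```python
def window_level(pixels, center, width):
    """Server-side window/level lookup: linearly map the window
    [center - width/2, center + width/2] of raw values onto 0..255,
    clamping everything outside it. Simplified relative to the DICOM
    VOI LUT definition."""
    lo = center - width / 2.0
    return [[min(255, max(0, int(round((v - lo) / width * 255))))
             for v in row] for row in pixels]
```

Because this runs on the server for every mouse-driven window/level change, the browser only ever receives ready-to-display 8-bit images, which is what lets the system avoid applets or ActiveX on the client.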
The National Cancer Institute (NCI) is interested in supporting the development of an image database for lung cancer screening using spiral x-ray CT. A cooperative agreement is envisioned that will involve applications from investigators who are interested in joining a consortium of institutions to construct such a database as a public resource. The intent is to develop standards for generating the database resource and to allow this database to be used for evaluating computer aided diagnostic (CAD) software methods. Initial interest is focused on spiral CT of the lung because of the recent interest in using this imaging modality for lung cancer screening for patients at high risk, where early intervention may significantly reduce cancer mortality rates. The use of CAD methods is rapidly emerging for this large-scale cancer screening application as these methods have the potential of improving the efficiency of screening. Lung imaging is a good physical model in that it involves the use of 3D CAD methods that require critical software optimization for both detection and classification. In addition, the detection of change in the CT images over time, or changes in lung nodule size, has the potential to provide either improved early cancer detection or improved classification.
A frequently used method of skeletal age assessment is atlas matching, a radiological examination of a hand image against a small set of Greulich-Pyle patterns of normal standards. The method, however, can lead to significant deviations in age assessment among observers with different levels of training. The Greulich-Pyle atlas, based on upper-middle-class white populations of the 1950s, is also not fully applicable to children of today, especially regarding standard development in other racial groups. In this paper, we present our system design and initial implementation of a digital hand atlas and computer-aided diagnostic (CAD) system for Web-based bone age assessment. The digital atlas removes the disadvantages of the current, out-of-date one and allows bone age assessment to be computerized and performed conveniently via the Web. The system consists of a hand atlas database, a CAD module and a Java-based Web user interface. The atlas database is based on a large set of clinically normal hand images of diverse ethnic groups. The Java-based Web user interface allows users to interact with the hand image database from browsers. A user can push a clinical hand image to the CAD server for bone age assessment; quantitative features reflecting skeletal maturity are then extracted from the examined image and compared with patterns from the atlas database to assess the bone age.
This paper presents a system for computer-aided diagnosis (CAD) in lung cancer screening by CT. The system consists of two parts: an automatic processing part and an image-based diagnosis part. The automatic processing part automatically detects candidate lung cancer regions using the methods we proposed, while the image-based diagnosis part is used by the doctor to perform the mass screening. The results of the automatic processing part are provided to the image-based diagnosis part as supporting information to improve the performance of the doctor's screening.
We have designed and implemented a prototype system to aid in the surgical repair of congenital aural atresia. A two-level segmentation algorithm was first developed to separate tissues of similar intensity or low tissue contrast. An interactive visualization module was then built to display the labeled tissues. The system allows 3-stage interactive planning in which positioning, marking and drilling simulate the surgical operation of congenital atresia repair. A voxel-based volume CSG operation was implemented to ensure the efficiency of interactive planning. Six patients with congenital aural atresia underwent virtual planning in preparation for surgery. This technique has proved to be a valuable planning tool, with the potential for virtual representation of the surgical reconstruction.
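The drilling stage's voxel CSG subtraction reduces to clearing every voxel inside the tool volume. A sketch with a spherical drill bit (occupancy volume as nested lists; all names are hypothetical, not the system's API):

```python
def drill(volume, center, radius):
    """Voxel CSG subtraction: clear every occupied voxel inside a
    spherical drill-bit region. `volume` is a nested list
    volume[z][y][x] of 0/1 occupancy, modified in place."""
    cz, cy, cx = center
    r2 = radius * radius
    for z, plane in enumerate(volume):
        for y, row in enumerate(plane):
            for x, v in enumerate(row):
                if v and (z - cz) ** 2 + (y - cy) ** 2 + (x - cx) ** 2 <= r2:
                    row[x] = 0
    return volume
```

Restricting the update to the bit's bounding box, as an interactive planner would, keeps each drill stroke cheap enough for real-time feedback.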
The purpose of this work was to create a 3D visualization system to aid physicians in observing abnormalities of the human lungs. A series of 20-30 helical CT lung slice images obtained from the lung cancer screening protocol, as well as a series of 100-150 diagnostic helical CT lung slice images, were used as input. We designed a segmentation filter to enhance the lung boundaries and filter out small and medium bronchi from the original images. The pairs of original and filtered images were further processed with a contour extraction method to segment out only the lung field for further study. In the next step, the segmented lung images containing the small bronchi and lung textures were used to generate the volumetric dataset input for the 3D visualization system. Additional processing of the extracted contour was used to smooth the 3D lung contour at the lung boundaries. The computer program developed allows, among other functions, viewing of the 3D lung object from various angles, zooming in and out, and selecting regions of interest for further viewing. Density and gradient opacity tables are defined and used to manipulate the displayed contents of the 3D rendered images. Thus, an effective 'see-through' technique is applied to the 3D lung object for better visual access to internal lung structures such as bronchi and possible cancer masses. These and other features of the resulting 3D lung visualization system give the user a powerful tool to observe and investigate the patient's lungs. The filter designed for this study is a completely new solution that greatly facilitates boundary detection. The developed 3D visualization system dedicated to chest CT provides the user a new way to pursue effective diagnosis of potential lung abnormalities and cancer. In the authors' opinion, the developed system can be successfully used to view and analyze patients' lung CT images in a powerful new approach to both diagnosis and surgery-planning applications.
Additionally, we see the possibility of using the system for teaching anatomy as well as pathology of the human lung.
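The combined density and gradient opacity tables that drive the 'see-through' effect can be sketched minimally as follows; the linear window shapes and all parameter names here are illustrative assumptions, not the authors' actual tables:

```python
def opacity(density, grad, d_lo, d_hi, g_lo, g_hi):
    """'See-through' opacity: product of a density-window response and a
    gradient-window response, so low-density, low-gradient interior voxels
    become transparent while strong boundaries stay visible."""
    clamp = lambda v: min(max(v, 0.0), 1.0)
    d_op = clamp((density - d_lo) / (d_hi - d_lo))  # density opacity table
    g_op = clamp((grad - g_lo) / (g_hi - g_lo))     # gradient opacity table
    return d_op * g_op
```

A voxel below both windows renders fully transparent; one above both renders fully opaque.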
3D ultrasound imaging is an emerging and promising modality in the ultrasound scanning area. Since 3D ultrasound data are often acquired by translation or rotation of 2D data acquisition systems, the data can be directly sampled on cylindrical or spherical structured grids rather than on rectilinear grids. However, visualization of cylindrical or spherical data is more complex than that of rectilinear grids. Therefore, conventional rendering methods resample the grids into rectilinear grids and visualize the resampled rectilinear data. However, resampling introduces an undesired resolution loss. In this paper a direct rendering scheme for cylindrical ultrasound data is considered. Even though cells in cylindrical grids have different sizes, they are very similar in shape and contain some regularity. We use this similarity and regularity of cells to reduce rendering time in a projection-based rendering method. To achieve high speed rendering, we propose a simple projection ordering method and a fast projection method using a common edge table. Also, to produce good rendering results, an efficient bilinear interpolation scheme is proposed for the hexahedral projection. In this scheme, since weighting coefficients are calculated in the image plane, we can avoid calculating crossing points in the object space. Based on the proposed techniques above, we can produce high resolution rendered images directly from a cylindrical 3D ultrasound data set.
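The two geometric ingredients, mapping cylindrical-grid samples to world space and computing bilinear weights in the image plane rather than in object space, can be sketched as below; function names and the axis-aligned weight formula are illustrative assumptions, not the paper's implementation:

```python
import math

def cylindrical_to_cartesian(r, theta, z):
    """Map a cylindrical-grid sample (r, theta, z) to Cartesian (x, y, z);
    this is the sampling geometry produced by rotating a 2D acquisition."""
    return (r * math.cos(theta), r * math.sin(theta), z)

def bilinear_weights(px, py, x0, y0, x1, y1):
    """Bilinear weights for the four corners of a projected cell face,
    computed entirely in the image plane, so no object-space crossing
    points are needed. Corner order: (x0,y0), (x1,y0), (x0,y1), (x1,y1)."""
    tx = (px - x0) / (x1 - x0)
    ty = (py - y0) / (y1 - y0)
    return ((1 - tx) * (1 - ty), tx * (1 - ty), (1 - tx) * ty, tx * ty)
```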
This paper addresses image quality assessment and image quality maintenance of CRT displays for use as soft copy displays in digital radiology. Software is necessary to generate test patterns and to display them on CRTs. CCD cameras record the images displayed on the CRTs. Additional software is necessary to analyze the images recorded by the CCD camera. This paper describes the development and application of software useful for the generation, display and analysis of test patterns.
A rendering technique is presented that generates integrated intensity projection images for volumetric data with visual enhancement of the data within a region-of-interest. Animated displays result from moving this region through the data volume, either interactively or following a trajectory generated from the data itself. Enhancement is provided by applying a radially symmetric weighting function to each point within the data set based upon its position relative to the center of the region-of-interest and the viewpoint. The resultant integrated value is scaled using the cumulative weighting applied along the projection vector to set the displayed pixel intensity. By bounding the region-of-interest and maintaining unity weighting outside this volume, the rendering process may be decomposed into base and supplemental components. Both may be generated off-line. Alternately, interactive data exploration may be performed using either a pre-processed base projection or, to provide more flexibility, real-time generation of both components. In addition, rendered images may be composed of base and supplemental components obtained from heterogeneous data volumes, the difference arising either from processing of a single-modality data set or from the use of registered multi-modality data sets. Images are presented showing application of this technique to medical volumetric data sets obtained from MR, CT and ultrasound scanning. PC-based software implementation on current hardware allows rendering rates that support interactive exploration of 256³-voxel data sets. Automatic path generation uses segmentation to isolate relevant structures followed by skeletonization to reduce these volumes to path segments. Linking these segments presents a number of problems if the resultant path is to be both efficient and traversed without distracting jumps or other artifacts. The use of back-tracking and the movement tempo are currently being investigated.
Future research directions include: optimization of the technique to take account of human visual perception; cache-optimized rendering implementation; the use of color; and more powerful path generation strategies.
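A single-ray sketch of the weighting and scaling described above: samples near the region-of-interest centre are boosted, weights are unity outside it, and the integral is normalized by the cumulative weight. The linear radial falloff and the names are assumptions for illustration; the paper does not specify the exact weighting function:

```python
def weighted_projection(samples, distances, radius):
    """Integrate samples along one projection ray. Each sample's distance to
    the region-of-interest centre determines its weight: boosted inside
    `radius`, unity outside. The result is scaled by the cumulative weight
    along the ray to set the displayed pixel intensity."""
    total, weight_sum = 0.0, 0.0
    for value, dist in zip(samples, distances):
        w = 1.0 + max(0.0, 1.0 - dist / radius)  # unity weighting outside the ROI
        total += w * value
        weight_sum += w
    return total / weight_sum
```

With every sample outside the region, the result reduces to the plain integrated intensity projection, which is what allows the base/supplemental decomposition.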
In order to develop optimized prostate needle biopsy protocols, we have investigated the spatial distribution of prostate tumors using 3D prostate models developed from prostate specimens with localized cancer. An accurate zone-based algorithm has been proposed and developed to calculate a 3D prostate tumor map using 280 reconstructed 3D prostate surface models. Based on the developed 3D tumor map, optimized prostate needle biopsy protocols can be developed. The 3D tumor map can also provide online guidance for in vivo prostate biopsies of real patients.
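A simplified stand-in for the tumor-map construction is a per-voxel frequency count over spatially aligned binary tumor models; the authors' zone-based algorithm is more elaborate, and the names and data layout here are assumptions:

```python
def build_tumor_map(models, shape):
    """Accumulate tumor occupancy over aligned binary prostate models into a
    per-voxel frequency map. Each model is a set of (x, y, z) tumor voxel
    coordinates in a common reference frame of the given shape."""
    counts = [[[0] * shape[2] for _ in range(shape[1])] for _ in range(shape[0])]
    for model in models:
        for x, y, z in model:
            counts[x][y][z] += 1
    n = len(models)
    # Normalize counts to the fraction of specimens with tumor at each voxel.
    return [[[c / n for c in row] for row in plane] for plane in counts]
```

High-frequency voxels in such a map are the natural candidates for optimized needle placement.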
We present two-tiered QC standards for grayscale monitors. Materials: SMPTE test pattern, line-pair object, radio-opaque ruler, photometer, and magnifier with scale. Results: The following standards are based on our institutional experience in association with a PACS vendor and the literature. Radiologist monitor: small artifacts: disregard those ≤ 0.25 mm, at most 2 between 0.25 mm and 0.35 mm, none ≥ 0.35 mm; no groupings of artifacts; no visible distortion in the 'usable' area; background luminance 0.1 fL; 'bright' ≥ 90 fL, no fine-detail 'blooming'; brightness uniformity within ±15 percent of the mean value, no visible gradients, measured in at least 9 positions; scales 98 percent; 2.5 line-pairs/mm visible; 0/5-95/100 SMPTE contrast patches visible; illuminance 50 lux. Clinician monitor: small artifacts: disregard those ≤ 0.35 mm, at most 4 between 0.35 mm and 0.50 mm, none ≥ 0.50 mm; luminance 50 fL; uniformity within ±20 percent; if exceeded at only 2 points, these locations may be tested to ±20 percent within the local sector; 1 line-pair/mm visible; otherwise the same as the radiologist monitor. Conclusions: We believe this approach is reasonably achievable.
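The brightness-uniformity criterion above lends itself to a mechanical check; a minimal sketch, with names assumed, applying the percent-of-mean tolerance to a set of photometer readings:

```python
def uniformity_ok(readings, tolerance=0.15):
    """Check luminance readings (e.g. 9 screen positions) against a
    plus-or-minus percent-of-mean uniformity criterion: every reading must
    fall within `tolerance` of the mean value."""
    mean = sum(readings) / len(readings)
    return all(abs(r - mean) <= tolerance * mean for r in readings)
```

The same function with `tolerance=0.20` expresses the clinician-monitor tier.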
Parametric images are generated from the analysis of image data to help characterize the functional information present in the original images. They address the need for enhancing the spatial and temporal resolutions for analysis. Parametric fields provide an underlying model that can be evaluated at any image location using its analytical formulation. To facilitate interactive display and analysis of such fields, we developed a visualization scheme that can directly render the parametric field using graphics interpolation methods. This eliminates the need for high resolution storage of such data for visualization purposes. The major advantage of such an approach is that graphics hardware can be used to accelerate the interpolation, thus achieving substantial improvement in performance. Successive derivative parametric images, such as velocity or acceleration maps from the motion fields, can be displayed in real time. The example presented here is the 4D B-spline based motion field representation of cardiac-tagged MR images. A motion field with 7 by 7 by 7 by 15 control points was shown to adequately describe the full motion of the heart during the cardiac cycle. Using this field, material points can be tracked over time and local mechanical properties can be computed. The visualization method presented here utilizes the similarity between the B-spline representation of the motion fields and the graphics hardware support for NURBS display with texture mapping to achieve high performance visualization of these parametric fields.
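The evaluation that the graphics hardware accelerates rests on the standard uniform cubic B-spline blending weights, which can be written directly; this is the textbook formulation, not code from the paper:

```python
def bspline_basis(t):
    """Uniform cubic B-spline blending weights for local parameter t in
    [0, 1). Evaluating a control-point field at an arbitrary location blends
    four neighboring control points per dimension with these weights."""
    return ((1 - t) ** 3 / 6.0,
            (3 * t ** 3 - 6 * t ** 2 + 4) / 6.0,
            (-3 * t ** 3 + 3 * t ** 2 + 3 * t + 1) / 6.0,
            t ** 3 / 6.0)
```

The weights form a partition of unity, which is what makes the interpolated field smooth and bounded by its control points.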
Triangle meshes are used to represent surfaces in many applications due to their simplicity. Since the number of triangles often goes beyond the capabilities of computer graphics hardware, a large variety of simplified mesh construction algorithms have been proposed in recent years. However, the construction of simplified meshes is generally time-consuming and memory inefficient. In this paper we suggest a fast and memory efficient method that produces high-quality simplified polygonal models. The method is based on both the Marching Cubes algorithm and the decimation method.
Quantitative assessment of regional heart motion has significant potential for more accurate diagnosis of heart disease and/or cardiac irregularities. Local heart motion may be studied from medical imaging sequences. Using functional parametric mapping, regional myocardial motion during a cardiac cycle can be color mapped onto a deformable heart model to obtain better understanding of the structure-to-function relationships in the myocardium, including regional patterns of akinesis or dyskinesis associated with ischemia or infarction. In this study, 3D reconstructions were obtained from the Dynamic Spatial Reconstructor at 15 time points throughout one cardiac cycle of pre-infarct and post-infarct hearts. Deformable models were created from the 3D images for each time point of the cardiac cycles. From these polygonal models, regional excursions and velocities of each vertex representing a unit of myocardium were calculated for successive time-intervals. The calculated results were visualized through model animations and/or specially formatted static images. The time point of regional maximum velocity and excursion of myocardium through the cardiac cycle was displayed using color mapping. The absolute value of regional maximum velocity and maximum excursion were displayed in a similar manner. Using animations, the local myocardial velocity changes were visualized as color changes on the cardiac surface during the cardiac cycle. Moreover, the magnitude and direction of motion for individual segments of myocardium could be displayed. Comparison of these dynamic parametric displays suggests that the ability to encode quantitative functional information on dynamic cardiac anatomy enhances the diagnostic value of 4D images of the heart. Myocardial mechanics quantified this way adds a new dimension to the analysis of cardiac functional disease, including regional patterns of akinesis and dyskinesis associated with ischemia and infarction.
Similarly, disturbances in regional contractility and filling may be detected and evaluated using such measurements and displays.
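The per-vertex excursion and velocity computation over successive time points can be sketched as follows; the names and the positions-per-frame data layout are assumptions for illustration:

```python
def regional_velocities(positions, dt):
    """Per-vertex velocity magnitudes between successive time points.
    `positions` is a list over time of lists of (x, y, z) vertex positions;
    each velocity is the Euclidean excursion divided by the time interval."""
    out = []
    for t in range(1, len(positions)):
        frame = []
        for (x0, y0, z0), (x1, y1, z1) in zip(positions[t - 1], positions[t]):
            excursion = ((x1 - x0) ** 2 + (y1 - y0) ** 2 + (z1 - z0) ** 2) ** 0.5
            frame.append(excursion / dt)
        out.append(frame)
    return out
```

Color-mapping these per-vertex values onto the deformable model at each time point yields the animated parametric display described above.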
One of the major drawbacks of Magnetic Resonance Imaging (MRI) has been the lack of a standard and quantifiable interpretation of image intensities. Unlike in other modalities such as x-ray computerized tomography, MR images taken for the same patient on the same scanner at different times may appear different from each other due to a variety of scanner-dependent variations, and therefore, the absolute intensity values do not have a fixed meaning. We have devised a two-step method wherein all images can be transformed in such a way that, for the same protocol and body region, similar intensities in the transformed images will have similar tissue meaning. Standardized images can be displayed with fixed windows without the need of per-case adjustment. More importantly, extraction of quantitative information about healthy organs or about abnormalities can be considerably simplified. This paper introduces and compares new variants of this standardizing method that can help to overcome some of the problems with the original method.
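The transformation step of such standardization can be illustrated with a one-landmark piecewise-linear mapping that sends image-derived landmarks to fixed standard-scale positions; the landmark choices and standard-scale values below are illustrative assumptions, not the authors' exact parameters:

```python
def standardize(value, p_low, p_mid, p_high,
                s_low=0.0, s_mid=2048.0, s_high=4095.0):
    """Piecewise-linear intensity standardization: map an input intensity so
    that the image's low/mid/high landmarks (e.g. low and high percentiles
    plus a tissue-mode landmark) land on fixed standard-scale positions."""
    if value <= p_mid:
        return s_low + (value - p_low) * (s_mid - s_low) / (p_mid - p_low)
    return s_mid + (value - p_mid) * (s_high - s_mid) / (p_high - p_mid)
```

After this mapping, a fixed display window has the same tissue meaning across scans of the same protocol and body region.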
This paper describes the compression of grayscale medical ultrasound images using a new compression technique, space-frequency segmentation. This method finds the rate-distortion optimal representation of an image from a large set of possible space-frequency partitions and quantizer combinations. The method is especially effective when the images to code are statistically inhomogeneous, which is the case for medical ultrasound images. We implemented a compression algorithm based on this method, and applied the resulting algorithm to representative ultrasound images. The result is an effective technique that performs significantly better than a current leading wavelet transform coding algorithm, Set Partitioning In Hierarchical Trees (SPIHT), using the standard objective PSNR distortion measure. The performance of our space-frequency codec is illustrated, and the space-frequency partitions are described. To obtain a qualitative measure of our method's performance, we describe an expert viewer study, in which images compressed using both space-frequency compression and SPIHT were presented to ultrasound radiologists to obtain expert viewer assessment of the differences in quality between images from the two methods. The expert viewer study showed the improved quality of space-frequency compressed images compared to SPIHT compressed images.
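The objective distortion measure used for the codec comparison is the standard PSNR; a minimal sketch (the flattened-pixel-list interface is an assumption for illustration):

```python
import math

def psnr(original, decoded, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-size images given
    as flattened pixel lists: 10*log10(peak^2 / MSE)."""
    mse = sum((a - b) ** 2 for a, b in zip(original, decoded)) / len(original)
    if mse == 0:
        return float('inf')  # identical images
    return 10.0 * math.log10(peak * peak / mse)
```

A higher PSNR at the same bit rate is the sense in which the space-frequency codec outperforms SPIHT.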
Magnetic resonance angiography (MRA) images are usually presented as maximum intensity projections (MIP), and the choice of viewing direction is then critical for the detection of stenoses. We propose a presentation method that uses skeletonization and distance transformations, which visualizes variations in vessel width independent of viewing direction. In the skeletonization, the object is reduced to a surface skeleton and further to a curve skeleton. The skeletal voxels are labeled with their distance to the original background. For the curve skeleton, the distance values correspond to the minimum radius of the object at that point, i.e., half the minimum diameter of the blood vessel at that level. The following image processing steps are performed: resampling to cubic voxels, segmentation of the blood vessels, skeletonization, and reverse distance transformation on the curve skeleton. The reconstructed vessels may be visualized with any projection method. Preliminary results are shown. They indicate that locations of possible stenoses may be identified by presenting the vessels as a structure with the minimum radius at each point.
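The reverse distance transformation step can be illustrated in 2D: each curve-skeleton point, labeled with its local radius, paints a disc of that radius, rebuilding a vessel whose width everywhere equals the minimum radius. This is a simplified sketch with assumed names; the paper operates on 3D voxel data:

```python
def reverse_distance_transform(skeleton, shape):
    """Rebuild a 2D vessel mask from curve-skeleton points labeled with the
    local radius. `skeleton` is a list of ((cx, cy), radius) pairs; each
    paints a disc of its radius into a binary mask of the given shape."""
    mask = [[0] * shape[1] for _ in range(shape[0])]
    for (cx, cy), r in skeleton:
        for x in range(max(0, cx - r), min(shape[0], cx + r + 1)):
            for y in range(max(0, cy - r), min(shape[1], cy + r + 1)):
                if (x - cx) ** 2 + (y - cy) ** 2 <= r * r:
                    mask[x][y] = 1
    return mask
```

A local dip in the skeleton's radius labels then shows up directly as a narrowing in the reconstructed vessel, independent of viewing direction.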
Visualization of human brain function based on cortical activity is an interesting research area. Since most of the cerebral cortical surface is buried inside folds, the complex topology of the cortical surface makes visualization extremely difficult. In this paper, we present a new visualization system involving reconstructing 3D cortical surfaces, rendering, and flattening the resulting surface. Besides developing the conventional polygon-based flattening algorithm, we also propose a new voxel-based flattening method that flattens the cortical surface directly from the 3D voxel data. For a polygon-based method, the cerebral cortical surfaces must first be converted to 3D polygons and then mapped onto a minimally distorted 2D plane, which is achieved by an optimization procedure. In contrast, the proposed new method converts the 3D cortical voxel data directly to a 2D map. The flattened map is then obtained by warping this 2D map to the position of minimum distortion. We demonstrate the utility of the two approaches with entirely and partially flattened cerebral cortex maps from MR images. These approaches will facilitate more refined analysis of cerebral cortical function.
We present a networked multimedia display system based on component technologies for electronic cardiovascular conferences with radiological consultation services. The system consists of two parts: a data acquisition gateway and a multimedia display workstation. The acquisition gateway is used to collect digital data from different modalities and organize them into sessions for conference presentation. The display workstation is used to display static and dynamic radiographic images, video sequences, ECG and other text information. The display program is designed with functions for image processing, multimedia data manipulation and visualization. In addition, the workstation integrates a real-time tele-consultation component for the necessary consultation between cardiologists and remote radiologists equipped with a tele-consultation workstation. Finally, we discuss the system's clinical performance and applications.
This paper presents the key post-processing algorithms, and their software implementation, for automatic optimal display of CR images in a picture archiving and communication system, compliant with the DICOM model of the image acquisition and presentation chain. With the implementation distributed from acquisition to display, we achieved better visual image quality, fast image communication and display, and data integrity of archived CR images in the PACS.
Prostate brachytherapy is an effective treatment for localized prostate cancer. Recently, it has been shown that prior to surgery a transrectal ultrasound (TRUS) study of the prostate and pubic arch can effectively assess pubic arch interference (PAI), a major stumbling block for the brachytherapy procedure. This identification is currently done with uncompressed digital images taken directly from the ultrasound (US) machine. However, since not all US machines allow access to the direct digital images, there is a need to perform TRUS-based PAI detection using digitized images. For its clinical advantages, we have chosen to use a consumer video digitizer that saves images in the Joint Photographic Experts Group (JPEG) standard. In this paper, our goal is to assess whether, even with some loss of information due to JPEG compression, the degraded TRUS image is still viable for clinical assessment of PAI. This was accomplished by using a PAI assessment algorithm to predict the location of the pubic arch on both lossless uncompressed bitmap images and images at 11 degrees of lossy JPEG compression. The predicted locations of the arches were compared to each other and to the true location of the arch. Our results show that there is no clinically significant difference in assessing PAI using images with medium-to-high JPEG compression compared to using uncompressed images.
Computer-aided visualization and segmentation of structures of interest in ultrasound images require speckle noise reduction, typically via lowpass filtering at the cost of blurring the edges. In an advanced technique called the 'sticks' algorithm, the possible locations of a reflector are modeled by a set of short line segments, each with a different orientation. The goal is to select the particular stick, out of the many possible stick orientations, that best describes a reflector in the neighborhood. The original 'sticks' algorithm assumes that an ultrasound echo could originate from a reflector positioned in any orientation with respect to the incident ultrasound beam, which is not an entirely valid assumption. Instead, for every possible reflector location in the image, we calculate the prior probability of different possible stick templates. Using Bayesian decision theory, we have integrated this additional information into the original sticks algorithm. The results indicate that, similar to the original sticks algorithm, the speckles are reduced while preserving the edges in the image. In addition, the new sticks algorithm is faster than the original algorithm by a factor of 4.2. This approach is one of the few attempts in ultrasound image segmentation where knowledge of the imaging process, such as the transducer position, has been incorporated for improved contrast enhancement.
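The core stick-selection step of the original algorithm, picking the orientation whose line segment best matches a reflector in the neighborhood, can be sketched as below; the patch/offset encoding and the use of the mean as the stick response are assumptions for illustration:

```python
def sticks_filter_pixel(patch, sticks):
    """Return the best (maximum mean) stick response at the centre of a
    square neighborhood. `patch` is a 2D list of pixel values; `sticks` is a
    list of orientations, each a list of (row, col) offsets relative to the
    patch centre."""
    c = len(patch) // 2
    best = float('-inf')
    for stick in sticks:
        response = sum(patch[c + dr][c + dc] for dr, dc in stick) / len(stick)
        best = max(best, response)
    return best
```

The Bayesian variant described above restricts or reweights the candidate orientations per location using the imaging geometry, rather than treating all orientations as equally likely.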
MPEG-4 is a new standard for compressing and presenting many types of multimedia content, such as video, audio, and synthetic 2D and 3D graphics. New features include support for user interaction and flexible display of multiple video bitstreams. The basis of these new capabilities is object-based video coding, in which a video image is represented as a set of regions of interest, or video objects, that are coded independently. At the decoder, users decode, compose and manipulate video objects from one or more bitstreams in a single display. In this work, we examine the feasibility of using MPEG-4 for coding ultrasound sequences. In preliminary results, the compression performance of MPEG-4 was comparable to H.264 and a bit savings of at least 15 percent was possible when coding static objects as sprites. The flexible compositing capability of MPEG-4 was demonstrated by dividing an ultrasound machine's display into video objects and encoding each video object as a separate bitstream. Video objects from different bitstreams were decoded and composited on a single display using an MPEG-4 decoder to demonstrate side-by-side comparisons of ultrasound scans. Until now, these compositing capabilities were only available using proprietary PACS display systems. Using MPEG-4 to deliver ultrasound allows any MPEG-4-compliant decoder to perform these functions.
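The compositing of independently decoded video objects onto a single display can be illustrated with a toy grayscale canvas; an MPEG-4 decoder performs this natively, so the function and data layout below are purely illustrative:

```python
def composite(canvas_w, canvas_h, objects):
    """Compose decoded video-object frames onto one display canvas.
    `objects` is a list of (x, y, frame) tuples, where frame is a 2D list of
    pixel values placed with its top-left corner at (x, y)."""
    canvas = [[0] * canvas_w for _ in range(canvas_h)]
    for ox, oy, frame in objects:
        for r, row in enumerate(frame):
            for c, value in enumerate(row):
                canvas[oy + r][ox + c] = value
    return canvas
```

Placing two decoded ultrasound scan objects at different offsets yields the side-by-side comparison described above.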
3D tomographic reconstruction of high contrast objects such as contrast-agent-enhanced blood vessels or bones from x-ray images acquired by isocentric C-arm systems has recently gained interest. For tomographic reconstruction, a sequence of images is captured during the C-arm rotation around the patient, and the precise projection geometry has to be determined for each image. This is a difficult task, as C-arms usually do not provide accurate information about their projection geometry. Standard methods propose the use of an x-ray calibration phantom and an offline calibration, where the motion of the C-arm is supposed to be reproducible between calibration and patient run. However, mobile C-arms usually do not have this desirable property. Therefore, an online recovery of projection geometry is necessary. Here, we study the use of external tracking systems such as Polaris or Optotrak from Northern Digital, Inc., for online calibration. In order to use the external tracking system for recovery of the x-ray projection geometry, two unknown transformations have to be estimated: the relation between the x-ray imaging system and the marker plate of the tracking system, and the relation between the world and sensor coordinate systems. We describe our attempt to solve this calibration problem. Experimental results on anatomical data are presented and visually compared with the results of estimating the projection geometry with an x-ray calibration phantom.
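Once the two unknown transformations are estimated, recovering the per-image projection geometry amounts to composing homogeneous 4x4 transforms along the chain from world to x-ray coordinates; a sketch with assumed names:

```python
def matmul4(a, b):
    """Compose two 4x4 homogeneous transforms: apply b first, then a."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(tx, ty, tz):
    """Homogeneous 4x4 pure-translation transform."""
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]
```

For example, an x-ray-from-world pose would be `matmul4(T_xray_marker, matmul4(T_marker_sensor, T_sensor_world))`, with the outer two transforms being the calibration unknowns and the middle one read from the tracker.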
A video processing and display system for performing high speed geometrical image transformations has been designed. It involves looking up the video image by using a pointer memory. The system supports any video format whose pixel clock does not exceed the rate the system can handle. It is also capable of changing the brightness and colormap of the image through hardware.
Spatial localizers provide a reference coordinate system and make the tracking of various objects in 3D space feasible. A number of different spatial localizers are currently available. Several factors that determine the suitability of a position sensor for a specific clinical application are accuracy, ease of use, and robustness of performance when used in a clinical environment. In this paper, we present a new and low-cost sensor whose performance is unaffected by the materials present in the operating environment. This new spatial localizer consists of a flexible tape with a number of fiber optic sensors along its length. The main idea is that we can obtain the position and orientation of the end of the tape with respect to its base. The end and base of the tape are locations along its length determined by the physical placement of the fiber optic sensors. Using this tape, we tracked an ultrasound probe and formed 3D US data sets. In order to validate the geometric accuracy of those 3D data sets, we measured known volumes of water-filled balloons. Our results indicate that we can measure volumes with an accuracy of 2-16 percent. Given that the sensor is under further development and refinement, we expect that this sensor could be an accurate, cost-effective and robust alternative in many medical applications, e.g., image-guided surgery and 3D ultrasound imaging.
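The balloon-volume validation reduces to counting segmented voxels and computing a percent error against the known volume; a minimal sketch with assumed names:

```python
def volume_from_mask(mask, voxel_mm3):
    """Volume of a segmented object in a binary 3D ultrasound data set:
    voxel count times the volume of one voxel."""
    return sum(sum(sum(row) for row in plane) for plane in mask) * voxel_mm3

def percent_error(measured, true):
    """Percent error of a measured volume against the known true volume."""
    return abs(measured - true) / true * 100.0
```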
The purpose is to provide users and specifiers of softcopy imaging with an understanding of the terminology and performance attributes that define a CRT display system. Starting with the fundamentals that influence display attributes and how they are interrelated, the participant will be able to understand, interpret and evaluate specifications from vendors that affect front-of-screen quality, and to quantify that quality using commercially available test patterns.