Ray casting is the most frequently used algorithm in direct volume rendering for displaying medical data, although it is
computationally very expensive. Recent hardware improvements have made real-time ray casting possible; however,
there is still room for performance gains by exploiting the recent development of general-purpose graphics
processing units (GPUs). The purpose of this paper is to implement volume ray casting with the Compute Unified
Device Architecture (CUDA) to obtain higher rendering performance. The experimental results show that the new
algorithm is up to 15 times faster than the conventional CPU-based ray casting algorithm.
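The core of volume ray casting, marching each ray through the volume and compositing samples front to back, can be sketched as follows. This is a minimal CPU-side Python illustration only (the paper's CUDA kernel is not reproduced here); the nearest-neighbour sampling and the toy opacity transfer function are assumptions made for brevity.

```python
import numpy as np

def ray_cast(volume, origin, direction, step=0.5, n_steps=200):
    """Front-to-back compositing along one ray through a scalar volume.

    `volume` is a 3D numpy array; opacity comes from a toy linear
    transfer function (hypothetical, for illustration only).
    """
    color, alpha = 0.0, 0.0
    pos = np.asarray(origin, dtype=float).copy()
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    for _ in range(n_steps):
        i, j, k = np.floor(pos).astype(int)
        if not (0 <= i < volume.shape[0] and 0 <= j < volume.shape[1]
                and 0 <= k < volume.shape[2]):
            break                            # ray left the volume
        s = float(volume[i, j, k])           # nearest-neighbour sample
        a = min(1.0, 0.05 * s)               # toy opacity transfer function
        color += (1.0 - alpha) * a * s       # accumulate emission
        alpha += (1.0 - alpha) * a           # accumulate opacity
        if alpha > 0.99:                     # early ray termination
            break
        pos = pos + d * step
    return color, alpha
```

On a GPU this per-ray loop is what each CUDA thread executes independently, which is why the algorithm parallelizes so well.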
Abnormal stretch and strain are major causes of injury to the spinal cord and brainstem. Such forces can develop from
age-related degeneration, congenital malformations, occupational exposure, or trauma such as sporting accidents,
whiplash and blast injury. While current imaging technologies provide excellent morphology and anatomy of the spinal
cord, there is no validated diagnostic tool to assess mechanical stresses exerted upon the spinal cord and brainstem.
Furthermore, there is no current means to correlate these stress patterns with known spinal cord injuries and other
clinical metrics such as neurological impairment. We have therefore developed the Spinal Cord Stress Injury Assessment
(SCOSIA) system, which uses imaging and finite element analysis to predict stretch injury. This system was tested on a
small cohort of neurosurgery patients. Initial results show that the calculated stress values decreased following surgery,
and that this decrease was accompanied by a significant decrease in neurological symptoms. Regression analysis
identified modest correlations between stress values and clinical metrics. The strongest correlations were seen with the
Brainstem Disability Index (BDI) and the Karnofsky Performance Score (KPS), whereas the weakest correlations were
seen with the American Spinal Injury Association (ASIA) scale. SCOSIA therefore shows encouraging initial results
and may have wide applicability to trauma and degenerative disease involving the spinal cord and brainstem.
Transbronchial needle aspiration (TBNA) is a common method used to collect tissue for diagnosis of different chest
diseases and for staging lung cancer, but the procedure has technical limitations. These limitations are mostly related to
the difficulty of accurately placing the biopsy needles into the target mass. Currently, pulmonologists plan TBNA by
examining a number of Computed Tomography (CT) scan slices before the operation. Then, they maneuver the
bronchoscope down the respiratory tract and direct the biopsy blindly. As a result, the biopsy success rate is low:
the diagnostic yield of TBNA is approximately 70 percent.
To enhance the accuracy of TBNA, we developed a TBNA needle whose tip position can be electromagnetically
tracked. The needle was used to estimate the bronchoscope's tip position and enable the creation of corresponding
virtual bronchoscopic images from a preoperative CT scan. The TBNA needle was made with a flexible catheter
embedding a Wang Transbronchial Histology Needle and a sensor tracked by an electromagnetic field generator. We
used the Aurora system for electromagnetic tracking.
We also constructed an image-guided research prototype system incorporating the needle and providing a user-friendly
interface to assist the pulmonologist in targeting lesions. To test the accuracy of the newly developed
electromagnetically tracked needle, a phantom study was conducted in the interventional suite at Georgetown University
Hospital. Five TBNA simulations were performed on a custom-made phantom containing a bronchial tree. The
experimental results show that our device has the potential to enhance the accuracy of TBNA.
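Driving virtual bronchoscopic views from a tracked sensor requires registering tracker coordinates to the preoperative CT coordinate frame. The abstract does not describe how this registration was done, so the following is only a generic sketch of one standard approach: least-squares rigid (Kabsch/Procrustes) registration from paired fiducial points.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst,
    via the SVD-based Kabsch method. src, dst: (N, 3) arrays of paired
    fiducial positions (e.g. tracker space -> CT space)."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centred point sets
    H = (src - src_mean).T @ (dst - dst_mean)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the least-squares solution
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t
```

Once (R, t) is known, every tracked needle-tip position can be mapped into CT space and used to place the virtual camera.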
In this paper, we present the design and implementation of a new rendering method based on high dynamic range (HDR)
lighting and exposure control. This rendering method is applied to create video images for a 3D virtual bronchoscopy
system. One of the main optical parameters of a bronchoscope's camera is the sensor exposure. The exposure adjustment
is needed since the dynamic range of most digital video cameras is narrower than the high dynamic range of real scenes.
The dynamic range of a camera is defined as the ratio of the brightest point of an image to the darkest point of the same
image where details are present. In a video camera, exposure is controlled by the shutter speed and the lens aperture. To
create the virtual bronchoscopic images, we first rendered a raw image in absolute units (luminance); then, we simulated
exposure by mapping the computed values to the values appropriate for video-acquired images using a tone mapping
operator. We generated several images with HDR and others with low dynamic range (LDR), and then compared their
quality by applying them to a 2D/3D video-based tracking system. We conclude that images with HDR are closer to real
bronchoscopy images than those with LDR, and thus that HDR lighting can improve the accuracy of image-based tracking.
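The abstract does not name the tone mapping operator used, so as an illustration the following sketches the global Reinhard operator, a common choice for compressing absolute luminance into a displayable range; its use here is an assumption, not the paper's stated method.

```python
import numpy as np

def reinhard_tonemap(luminance, key=0.18, eps=1e-6):
    """Global Reinhard tone mapping: scale scene luminance by a key value
    relative to the log-average luminance, then compress with L / (1 + L).
    Input: HDR luminance in absolute units; output in [0, 1)."""
    L = np.asarray(luminance, dtype=float)
    log_avg = np.exp(np.mean(np.log(L + eps)))   # log-average luminance
    Ls = key * L / log_avg                       # simulated exposure scaling
    return Ls / (1.0 + Ls)                       # compressive mapping
```

The `key` parameter plays the role of the exposure setting: raising it brightens the mapped image, mimicking a slower shutter or wider aperture.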
Direct volume rendering via consumer PC hardware has become an efficient tool for volume visualization. In particular, volumetric ray casting, the most frequently used volume rendering technique, can be implemented with the shading languages of graphics processing units (GPUs). However, producing the high-quality images offered by GPU-based volume rendering usually requires a high sampling rate. In this paper, we present an algorithm to generate high-quality images with a small number of slices by utilizing a displaced pixel shading technique. Instead of sampling points along a ray at regular intervals, the actual surface location is calculated by linear interpolation between the outer and inner sample points, and this location is used as the displaced pixel for the iso-surface illumination. Multi-pass and early Z-culling techniques are applied to improve the rendering speed. The first pass simply locates and stores the exact surface depth of each ray using a few pixel instructions; then, the second pass shades the surface at the previously stored position. A new 3D edge detector from our previous research is integrated to provide more realistic rendering results compared with the widely used gradient normal estimator. To implement our algorithm, we developed a program named DirectView based on DirectX 9.0c and Microsoft High Level Shading Language (HLSL) for volume rendering. We tested two data sets and discovered that our algorithm can generate smoother and more accurate shading images with a small number of intermediate slices.
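The displaced-pixel step, estimating the actual iso-surface position by interpolating between the last sample outside the surface and the first sample inside it, can be illustrated as follows. This is a Python sketch of the interpolation alone, not the paper's HLSL multi-pass implementation.

```python
def refine_isosurface(p_outer, p_inner, v_outer, v_inner, iso):
    """Linear interpolation between the last sample outside the iso-surface
    (v_outer < iso) and the first sample inside it (v_inner >= iso) to
    estimate the actual surface position along the ray."""
    # Fraction of the step at which the scalar value crosses `iso`
    t = (iso - v_outer) / (v_inner - v_outer)
    return [a + t * (b - a) for a, b in zip(p_outer, p_inner)]
```

Shading at this refined position instead of the raw sample position is what removes the banding artifacts that appear when the slice count is low.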
While image guidance is now routinely used in the brain in the form of frameless stereotaxy, it is beginning to be more widely used in other clinical areas such as the spine. At Georgetown University Medical Center, we are developing a program to provide advanced visualization and image guidance for minimally invasive spine procedures. This is a collaboration between an engineering-based research group and physicians from the radiology, neurosurgery, and orthopaedics departments. A major component of this work is the ISIS Center Spine Procedures Imaging and Navigation Engine, which is a software package under development as the base platform for technical advances.
The purpose of this work was to create a 3D visualization system to aid physicians in observing abnormalities of the human lungs. A series of 20-30 helical CT lung slice images obtained from the lung cancer screening protocol, as well as a series of 100-150 diagnostic helical CT lung slice images, were used as input. We designed a segmentation filter to enhance the lung boundaries and filter out small and medium bronchi from the original images. The pairs of original and filtered images were further processed with the contour extraction method to segment out only the lung field for further study. In the next step, the segmented lung images containing the small bronchi and lung textures were used to generate the volumetric dataset input for the 3D visualization system. Additional processing of the extracted contour was used to smooth the 3D lung contour at the lung boundaries. The computer program developed allows, among other features, viewing of the 3D lung object from various angles, zooming in and out, and selecting regions of interest for further viewing. Density and gradient opacity tables are defined and used to manipulate the displayed content of the 3D rendered images. Thus, an effective 'see-through' technique is applied to the 3D lung object for better visual access to internal lung structures such as bronchi and possible cancer masses. These and other features of the resulting 3D lung visualization system give the user a powerful tool to observe and investigate the patient's lungs. The filter designed for this study is a completely new solution that greatly facilitates boundary detection. The developed 3D visualization system, dedicated to chest CT, provides the user with a new way to explore effective diagnosis of potential lung abnormalities and cancer. In the authors' opinion, the developed system can be successfully used to view and analyze patients' lung CT images, providing a powerful new approach in both diagnosis and surgery-planning applications.
Additionally, we see the possibility of using the system for teaching anatomy as well as pathology of the human lung.
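The density and gradient opacity tables described above can be combined per voxel. One common way to realise such a 'see-through' effect, sketched below with hypothetical 256-entry lookup tables, multiplies a density-based opacity by a gradient-magnitude-based opacity, so homogeneous tissue interiors become transparent while boundaries stay visible; the exact tables used by the system are not given in the abstract.

```python
import numpy as np

def see_through_opacity(density, grad_mag, d_table, g_table):
    """Look up opacity from a density table and a gradient-magnitude table
    and combine them multiplicatively: low-gradient (homogeneous) regions
    become transparent, high-gradient boundaries remain opaque."""
    d_idx = np.clip(np.round(density).astype(int), 0, len(d_table) - 1)
    g_idx = np.clip(np.round(grad_mag).astype(int), 0, len(g_table) - 1)
    return d_table[d_idx] * g_table[g_idx]
```

Editing the two tables interactively is what lets the user peel away the lung surface to expose bronchi or suspected masses underneath.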