3D spatial compounding combines two or more 3D ultrasound (US) data sets, acquired under different insonation angles and windows, into a single higher-quality 3D US data set. An important requirement for this method to succeed is accurate registration between the US images used to form the final compounded image. We have developed a new automatic method for rigid and deformable registration of 3D US data sets acquired with a freehand 3D US system. Deformation is modeled with a 3D thin-plate spline (TPS). Our method differs fundamentally from previous approaches in that the acquired scattered 2D US slices are registered and compounded directly into the 3D US volume. This offers several benefits over traditional registration and spatial compounding methods: (i) we perform only one 3D US reconstruction, for the first acquired data set, saving the computation time required to reconstruct subsequently acquired scans; (ii) registration (except for the first scan) uses the acquired high-resolution 2D US images rather than the 3D US reconstruction data, which are of lower quality due to the interpolation and potential subsampling associated with 3D reconstruction; and (iii) scans performed after the first one need not follow the typical 3D US scanning protocol, in which a large number of dense slices must be acquired; slices can be acquired in any fashion in areas where compounding is desired. We show that by exploiting the similar information contained in adjacent acquired 2D US slices, we can reduce the computation time of linear and nonlinear registrations by a factor of more than 7:1 without compromising registration accuracy. Furthermore, we implemented an adaptive approximation to the 3D TPS with local bilinear transformations, reducing the nonlinear registration computation time by an additional factor of approximately 3.5.
Our results are based on a commercially available tissue-mimicking abdominal phantom and in-vivo muscle data.
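A 3D thin-plate spline of the kind used for the deformable registration above can be fitted from point correspondences by solving a small linear system. The following is a minimal, generic NumPy sketch (using the 3D TPS kernel U(r) = r), not the authors' implementation; the function names `tps_3d_fit` and `tps_3d_apply` are illustrative.

```python
import numpy as np

def tps_3d_fit(src, dst):
    """Fit a 3D thin-plate spline mapping src control points onto dst.

    src, dst: (n, 3) arrays of corresponding control points.
    In 3D the TPS radial kernel is U(r) = r (fundamental solution of
    the biharmonic equation).  Returns (rbf_weights, affine_coeffs).
    """
    n = src.shape[0]
    K = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=2)  # U(r) = r
    P = np.hstack([np.ones((n, 1)), src])                          # affine part
    L = np.zeros((n + 4, n + 4))
    L[:n, :n] = K
    L[:n, n:] = P
    L[n:, :n] = P.T
    rhs = np.zeros((n + 4, 3))
    rhs[:n] = dst
    sol = np.linalg.solve(L, rhs)
    return sol[:n], sol[n:]          # RBF weights, affine coefficients

def tps_3d_apply(pts, src, weights, affine):
    """Warp arbitrary (m, 3) points through the fitted spline."""
    U = np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=2)
    return U @ weights + affine[0] + pts @ affine[1:]
```

The adaptive approximation mentioned in the abstract would replace the global spline evaluation with cheaper local bilinear transforms; that refinement is not shown here.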
Proc. SPIE 4319, Medical Imaging 2001: Visualization, Display, and Image-Guided Procedures
KEYWORDS: 3D acquisition, 3D image reconstruction, Visualization, Ultrasonography, Data processing, Reconstruction algorithms, Position sensors, Algorithm development, 3D visualizations, 3D image processing
3D ultrasound (US) provides physicians with a better understanding of human anatomy. By manipulating the 3D US data set, physicians can observe the anatomy in 3D from a number of different view directions and obtain 2D US images that would not be possible to acquire directly with the US probe. For 3D US to enter widespread clinical use, creation and manipulation of the 3D US data should be done at interactive rates. This is a challenging task due to the large amount of data to be processed. Our group previously reported interactive 3D US imaging using a programmable mediaprocessor, the Texas Instruments TMS320C80, which has been in clinical use. In this work, we present the algorithms we have developed for real-time 3D US using a newer and more powerful mediaprocessor called MAP-CA. MAP-CA is a very long instruction word (VLIW) processor developed for multimedia applications. It has multiple execution units, a 32-kbyte data cache, and a programmable DMA controller called the data streamer (DS). A forward-mapping, six-degree-of-freedom reconstruction algorithm with zero-order interpolation (for a freehand 3D US system that uses a magnetic position sensor to track the US probe) runs in 11.8 msec (84.7 frames/sec) per 512x512 8-bit US image. For 3D visualization of the reconstructed 3D US data sets, we used volume rendering, in particular the shear-warp factorization with maximum intensity projection (MIP) rendering. 3D visualization is achieved in 53.6 msec (18.6 frames/sec) for a 128x128x128 8-bit volume and in 410.3 msec (2.4 frames/sec) for a 256x256x256 8-bit volume.
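Forward-mapping reconstruction with zero-order interpolation places each tracked 2D pixel into the nearest voxel of the output volume. The sketch below illustrates the idea in NumPy under simplified assumptions (a single combined 4x4 pose matrix, isotropic voxels, last-write-wins compounding); it is not the MAP-CA implementation, and the names are illustrative.

```python
import numpy as np

def insert_slice(volume, image, pose, pixel_size, voxel_size):
    """Forward-map one tracked 2D US image into a 3D volume using
    zero-order (nearest-neighbor) interpolation.

    volume     : (Z, Y, X) uint8 volume, modified in place
    image      : (H, W) uint8 B-mode image
    pose       : 4x4 homogeneous matrix, image plane -> volume space
                 (the 6-DOF sensor reading composed with calibration)
    pixel_size : physical size of one image pixel (mm)
    voxel_size : physical size of one voxel (mm)
    """
    h, w = image.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Pixel coordinates in the image plane (z = 0), homogeneous.
    pts = np.stack([u.ravel() * pixel_size,
                    v.ravel() * pixel_size,
                    np.zeros(h * w),
                    np.ones(h * w)])
    xyz = (pose @ pts)[:3] / voxel_size
    idx = np.rint(xyz).astype(int)          # zero-order: nearest voxel
    ok = np.all((idx >= 0) &
                (idx < np.array(volume.shape)[::-1, None]), axis=0)
    volume[idx[2, ok], idx[1, ok], idx[0, ok]] = image.ravel()[ok]
    return volume
```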
A number of surgical procedures are planned and executed based on medical images. Typically, x-ray computed tomography (CT) and magnetic resonance (MR) images are acquired preoperatively for diagnosis and surgical planning. In the operating room, execution of the surgical plan becomes feasible through registration between the preoperative images and the surgical space where the patient anatomy lies. In this paper, we present a new automatic algorithm that uses ultrasound (US) 2D B-mode images to register the preoperative MR image coordinate system with the surgical space, which in our experiments is represented by the reference coordinate system of a DC magnetic position sensor. The position sensor is also used to track the position and orientation of the US images. Furthermore, we simulated patient anatomy using custom-built phantoms. Our registration algorithm is a hybrid between fiducial-based and image-based registration algorithms. Initially, we perform a fiducial-based rigid-body registration between MR and position sensor space. Then, by changing various parameters of the rigid-body fiducial-based transformation, we produce an MR-sensor misregistration to simulate potential movements of the skin fiducials and/or the organs. The perturbed transformation serves as the initial estimate for the image-based registration algorithm, which uses normalized mutual information as a similarity measure; one or more US images of the phantom are automatically matched with the MR image data set. Using the fiducial-based registration as the gold standard, we computed the accuracy of the image-based registration algorithm in registering MR and sensor spaces. The registration error varied depending on the number of 2D US images used for registration. A good compromise between accuracy and computation time was the use of 3 US slices.
In this case, the registration error had a mean of 1.88 mm and a standard deviation of 0.42 mm, and the required computation time was approximately 52 sec. Subsampling the US data by a factor of 4 x 4 and reducing the number of histogram bins to 128 reduced the computation time to approximately 6 sec, with a small increase in the registration error.
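Normalized mutual information, the similarity measure used above, can be computed from a joint intensity histogram of the two images. Below is a minimal sketch of Studholme's formulation, NMI = (H(A) + H(B)) / H(A, B); the function name is illustrative and the 128-bin default mirrors the setting quoted in the abstract, not a recommendation.

```python
import numpy as np

def normalized_mutual_information(a, b, bins=128):
    """NMI between two equally sized images; higher means better
    alignment.  a, b: arrays of intensities (any shape, same size)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pab = joint / joint.sum()          # joint probability table
    pa = pab.sum(axis=1)               # marginal of a
    pb = pab.sum(axis=0)               # marginal of b

    def entropy(p):
        p = p[p > 0]                   # 0 log 0 := 0
        return -np.sum(p * np.log(p))

    return (entropy(pa) + entropy(pb)) / entropy(pab)
```

In a registration loop, the transformation parameters that maximize this value over the resampled US/MR overlap are sought by a numerical optimizer.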
Spatial localizers provide a reference coordinate system and make the tracking of various objects in 3D space feasible. A number of different spatial localizers are currently available. Several factors that determine the suitability of a position sensor for a specific clinical application are accuracy, ease of use, and robustness of performance when used in a clinical environment. In this paper, we present a new, low-cost sensor whose performance is unaffected by the materials present in the operating environment. This new spatial localizer consists of a flexible tape with a number of fiber optic sensors along its length. The main idea is that we can obtain the position and orientation of the end of the tape with respect to its base. The end and base of the tape are locations along its length determined by the physical placement of the fiber optic sensors. Using this tape, we tracked an ultrasound probe and formed 3D US data sets. To validate the geometric accuracy of these 3D data sets, we measured known volumes of water-filled balloons. Our results indicate that we can measure volumes with an accuracy between 2 and 16 percent. Given that the sensor is under further development and refinement, we expect that it could be an accurate, cost-effective, and robust alternative in many medical applications, e.g., image-guided surgery and 3D ultrasound imaging.
Stereotactic systems are based on preoperative tomographic images for assisting neurosurgeons to accurately guide the surgical tool into the brain. In addition, intraoperative ultrasound (US) images are used to monitor the brain in real time during the surgical procedure. The main disadvantage of stereotactic systems is that preoperative images can become outdated during the course of the surgery. The main disadvantage of intraoperative US is the low signal-to-noise ratio, which prevents the surgeon from appreciating the contents and orientation of the US images. A system that combines preoperative tomographic images with intraoperative US images could overcome the above-mentioned disadvantages. We have successfully developed and implemented a new PC-based system for interactive 3D registration of US and magnetic resonance images. Our software is written in Microsoft Visual C++ and runs on a Pentium II 450-MHz PC. We have performed an extensive analysis of the errors of our system with a custom-built phantom. The registration error between US and MR space was dependent on the depth of the target within the US image. For a 3.5-MHz phased 1D array transducer and a depth of 6 cm, the mean value of the registration error was 2.00 mm and the standard deviation was 0.75 mm. The registered MR images were reconstructed using either zero-order or first-order interpolation. The interactive nature of our system demonstrates its potential to be used in the operating room.
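The two reconstruction schemes mentioned above differ only in how a fractional sample position is turned into a voxel value: zero-order takes the nearest voxel, while first-order blends the eight surrounding voxels trilinearly. A generic sketch, not the system's code:

```python
import numpy as np

def sample_volume(vol, pts, order=1):
    """Sample a (Z, Y, X) volume at fractional (x, y, z) positions.

    order=0: zero-order (nearest-neighbor) interpolation
    order=1: first-order (trilinear) interpolation
    pts: (n, 3) array; positions must lie inside the volume.
    """
    if order == 0:
        i = np.rint(pts).astype(int)
        return vol[i[:, 2], i[:, 1], i[:, 0]]
    f = np.floor(pts).astype(int)
    t = pts - f                              # fractional offsets in [0, 1)
    out = np.zeros(len(pts))
    for dz in (0, 1):
        for dy in (0, 1):
            for dx in (0, 1):
                # Weight of this corner of the surrounding voxel cube.
                w = (np.where(dx, t[:, 0], 1 - t[:, 0]) *
                     np.where(dy, t[:, 1], 1 - t[:, 1]) *
                     np.where(dz, t[:, 2], 1 - t[:, 2]))
                out += w * vol[f[:, 2] + dz, f[:, 1] + dy, f[:, 0] + dx]
    return out
```

Zero-order is cheaper per sample, which is why it tends to appear in the real-time paths; first-order gives smoother reformatted images.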
A new method for calibrating a free-hand 3D ultrasound system based on a DC magnetic tracking position sensor is presented. The method uses a linear affine transformation for registering the individual 2D US images in the position sensor's reference coordinate system. The affine transformation allows for automatic scaling of the digitized ultrasound image in physical dimensions. We have also introduced a spherical tissue-mimicking phantom to facilitate the calibration procedure. The system's calibration was validated in a clinical setting by determining its precision in localizing a single point and its accuracy in measuring distances and volumes. Precision in localizing a point was consistently below 1.9 mm (rms error). The bias in distance measurements was 0.06 mm and the standard deviation was 0.75 mm. The accuracy of measuring volumes for a one-pass scan was below 1%, whereas for multiple-pass scans it was below 4%. Furthermore, the performance of the DC magnetic tracking system was tested separately from the ultrasound scanner. We conclude that the ultrasound scanner, rather than the position sensor, is the limiting factor in the geometric accuracy of the 3D US system.
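With an affine calibration of this kind, a pixel in a tracked 2D US image is mapped into the position sensor's reference frame by chaining the calibration matrix (which also carries the pixel-to-mm scale factors) with the sensor reading for that frame. A minimal sketch with illustrative names; the paper's actual matrices are not reproduced here.

```python
import numpy as np

def pixel_to_world(u, v, calib, pose):
    """Map a 2D US pixel (u, v) into the position sensor's
    reference coordinate system.

    calib : 4x4 probe-calibration matrix (affine, so it includes
            the per-axis pixel-to-mm scaling)
    pose  : 4x4 homogeneous sensor reading for this image frame
    """
    p = np.array([u, v, 0.0, 1.0])     # the image plane is z = 0
    world = pose @ calib @ p           # calibration first, then tracking
    return world[:3]
```

In practice `calib` is estimated once from phantom scans (here, the spherical tissue-mimicking phantom) and then reused for every frame of a free-hand sweep.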
Pulsed photoacoustic (PA) signals may be used for the detection and imaging of blood vessels in tissue. Relatively strong absorption by red blood cells, low absorption by the surrounding tissue, and a reasonable penetration depth of the light are found at a wavelength of ca. 577 nm. Experiments were performed with a pulsed frequency-doubled Nd:YAG laser, which delivered 10 ns pulses at 532 nm wavelength. Ten percent dilutions of India ink and 50% suspensions of red blood cells in PBS were used as optical absorbers. Blood vessels were simulated by hollow nylon fibers with an inner diameter of ca. 250 micrometer through which these suspensions flow. The optical scattering of the surrounding tissue was simulated by a 12% dilution of Intralipid-10% to obtain a solution with a reduced scattering coefficient of 1.8 mm⁻¹. The PA signals were detected with a hydrophone that contained four wideband piezoelectric transducers made of 9 micrometer thick PVdF film with an effective diameter of 200 micrometers. Laser pulses with energies up to 8 microjoules were delivered to the sample by a 50 or 100 micrometer core diameter glass fiber. Pulsed optical heating of red blood cells up to 30-35 degrees for more than 12,000 times did not affect the photoacoustic response of the cells. If a single fiber is used to illuminate the sample, then even at a depth of 1 mm the PA signals show that the volume that is effectively illuminated is laterally restricted to a diameter of ca. 1 mm. Vessels with blood or ink dilutions were detected up to a depth of more than 1 mm in the scattering medium. Monte Carlo (MC) simulations were used to simulate the spatial distribution of light absorption in phantom tissue. From this distribution the PA response of blood vessels was simulated. A delay-and-sum beamforming algorithm was developed for 3-D near-field configurations and applied in a PA image reconstruction program.
The images based on MC simulations as well as experimental data show that the side of larger vessels facing the illuminating fiber can be located with a resolution that depends on the configuration and varies between 0.1 and 1 times the inner vessel diameter. This demonstrates the principle and feasibility of three-dimensional photoacoustic dermal tissue imaging.
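A near-field delay-and-sum beamformer back-projects each recorded trace along exact one-way times of flight from every reconstruction point to every sensor. The sketch below is a generic, unapodized NumPy version, not the authors' 3-D implementation; names and units are illustrative.

```python
import numpy as np

def delay_and_sum(signals, sensor_pos, voxels, c, fs, t0=0.0):
    """Near-field delay-and-sum reconstruction of photoacoustic data.

    signals    : (n_sensors, n_samples) recorded PA time traces
    sensor_pos : (n_sensors, 3) transducer element positions (mm)
    voxels     : (n_voxels, 3) reconstruction points (mm)
    c          : speed of sound (mm/us); fs: sampling rate (MHz)
    t0         : acquisition start time relative to the laser pulse (us)
    """
    n_sensors, n_samples = signals.shape
    image = np.zeros(len(voxels))
    for s in range(n_sensors):
        d = np.linalg.norm(voxels - sensor_pos[s], axis=1)   # distance, mm
        idx = np.rint((d / c - t0) * fs).astype(int)         # sample index
        ok = (idx >= 0) & (idx < n_samples)
        image[ok] += signals[s, idx[ok]]                     # coherent sum
    return image / n_sensors
```

Signals from a true source location add coherently across sensors, while mismatched voxels pick up incoherent samples, which is what produces the image contrast.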