Endorectal ultrasound (ERUS) is currently the gold standard for the staging of rectal cancer; however, accurate staging of the disease requires extensive training and is difficult, especially for clinicians who do not see a large number of patients per year. There is therefore a need for a semi-automatic staging system to assist clinicians in the accurate staging of rectal cancer. We believe that unwrapping the circular ERUS images captured by a spatially tracked ERUS system is a step in this direction. We describe the steps by which a 2D image can be unwrapped, allowing the circular layers of the rectal wall to be displayed as flat layers stacked on top of one another. We test the unwrapping process using images from a cylindrical rectal phantom and from a human rectum. Qualitatively, unwrapping endorectal ultrasound images provides good visualization of the layers of the rectal wall and of rectal tumors, and supports the continued study of this novel staging system.
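The unwrapping described above is essentially a polar-to-Cartesian resampling about the probe axis. The sketch below is a minimal, hypothetical illustration of that idea; the function name, the nearest-neighbour lookup, and the assumption of a known, centered probe axis are ours for illustration, not the authors' implementation.

```python
import numpy as np

def unwrap_circular_image(img, center, n_radii, n_angles):
    """Resample a circular transverse image onto a (radius, angle) grid so
    that concentric layers appear as flat bands stacked on top of one another.
    Nearest-neighbour lookup keeps the sketch short; bilinear interpolation
    would be a natural refinement."""
    cy, cx = center
    # largest radius that stays inside the image for this center
    max_r = min(cy, cx, img.shape[0] - 1 - cy, img.shape[1] - 1 - cx)
    radii = np.linspace(0.0, max_r, n_radii)
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    # Cartesian coordinates of every (radius, angle) sample
    rr = cy + radii[:, None] * np.sin(angles[None, :])
    cc = cx + radii[:, None] * np.cos(angles[None, :])
    return img[np.rint(rr).astype(int), np.rint(cc).astype(int)]
```

In a tracked-probe system the center and axis would come from the spatial tracker rather than being assumed at the image midpoint.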
Intensity differences between objects are often used for segmentation. In an ideal situation, these differences permit the
computation of edges that form complete contours around objects in the image. However, edges found in real images
are usually a set of real and spurious disconnected boundary segments. Even more challenging are the so-called
apparent or subjective contours, whose boundaries are not defined by intensity or texture variations. In this paper, we
present a novel method to segment and reconstruct images with missing boundaries, including images with large
missing edges commonly found in ultrasound imaging. We test our algorithm on classic synthetic images, phantom
images, and real ultrasound images of the bladder, heart, and colon.
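As a concrete illustration of why intensity-based edge maps come out fragmented, the following minimal sketch (the function name and synthetic test image are our assumptions, not the paper's method) thresholds a gradient magnitude: wherever local contrast dips below the threshold, the detected boundary simply stops, producing exactly the kind of missing edges the proposed method must bridge.

```python
import numpy as np

def gradient_edge_map(img, threshold):
    """Binary edge map from central-difference gradient magnitude.
    Boundary pixels whose local contrast falls below `threshold` are
    missed, leaving disconnected boundary segments."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy) > threshold

# Synthetic disk whose right half has only faint contrast against the
# background: the thresholded edge map is complete on the left boundary
# but has a large gap on the right.
yy, xx = np.mgrid[0:64, 0:64]
dist = np.hypot(yy - 32, xx - 32)
disk = np.where(dist <= 20, np.where(xx >= 32, 0.1, 1.0), 0.0)
edges = gradient_edge_map(disk, 0.2)
```

Note that the interior step between the bright and faint halves also fires, illustrating the spurious segments mentioned above.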
The development of image-guided surgical (IGS) systems has had a significant impact on clinical neurosurgery, and extending these principles to other surgical endeavors is the next step in IGS evolution. An impediment to widespread adoption is that the organ of interest often deforms under common surgical loading conditions. As a result, alignment between the patient and the MR/CT image volume can degrade, compromising guidance fidelity. Recently, computational approaches to correct alignment have been proposed within neurosurgery. In this work, we extend these approaches for use within image-guided liver surgery and demonstrate the framework's adaptability. Results from the registration of the preoperative segmented liver surface to the intraoperative liver surface, as acquired by a laser range scanner, demonstrate accurate visual alignment in regions that deform minimally, while in other regions misalignment due to deformations on the order of 1 cm is apparent. A model-updating strategy that uses the closest point operator is employed to compensate for deformations within the patient-specific image volume. The framework presented is an approach whereby laser range scanning, coupled to a computational model of soft tissue deformation, provides the information necessary to extend IGS principles to intra-abdominal explorative surgery.
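The rigid surface-to-surface alignment step underlying this kind of registration can be sketched with a minimal iterative closest point (ICP) loop. This is a generic textbook formulation under our own assumptions (brute-force nearest-neighbour search, closed-form SVD/Kabsch fit), not the authors' registration pipeline.

```python
import numpy as np

def rigid_icp(source, target, n_iters=20):
    """Minimal rigid ICP: pair each moving point with its nearest fixed
    point, then solve the best rigid transform in closed form, and repeat.
    Returns the accumulated rotation, translation, and the aligned cloud."""
    src = source.copy()
    R_total = np.eye(3)
    t_total = np.zeros(3)
    for _ in range(n_iters):
        # brute-force nearest-neighbour correspondence
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(axis=-1)
        matched = target[d2.argmin(axis=1)]
        # closed-form rigid fit (Kabsch) between the paired sets
        mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = mu_m - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total, src
```

In the liver setting the target would be the laser-range-scanned surface and the source the preoperative segmented surface; the residual after this rigid step is what the deformation-compensating model update addresses.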
Image registration is an important procedure for medical diagnosis. Since the large inter-site retrospective validation study led by Fitzpatrick at Vanderbilt University, voxel-based methods, and more specifically mutual information (MI)-based registration methods, have been regarded as the methods of choice for rigid-body intra-subject registration problems. In this study we propose a method that is based on the iterative closest point (ICP) algorithm and a pre-computed closest point map obtained with a slight modification of the fast marching method proposed by Sethian. We also propose an interpolation scheme that allows us to find the corresponding points with sub-voxel accuracy even though the closest point map is defined on a regular grid. The method has been tested on both synthetic and real images, and registration results have been assessed quantitatively using the data set provided by the Retrospective Registration Evaluation Project. For these volumes, MR and CT head surfaces were extracted automatically using a level-set technique. Results show that on these data sets this registration method yields accuracies comparable to those obtained with voxel-based methods.
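The idea of a precomputed closest point map with sub-voxel lookup can be sketched as follows. A brute-force construction stands in for the paper's fast-marching computation, and trilinear interpolation of the stored closest-point vectors is a simplified stand-in for the proposed interpolation scheme; all names here are illustrative.

```python
import numpy as np

def build_closest_point_map(surface_pts, grid_shape):
    """For every node of a regular grid, precompute the closest surface
    point. Brute force here; the paper obtains this map with a modified
    fast marching method."""
    nodes = np.indices(grid_shape).reshape(3, -1).T.astype(float)
    d2 = ((nodes[:, None, :] - surface_pts[None, :, :]) ** 2).sum(axis=-1)
    closest = surface_pts[d2.argmin(axis=1)]
    return closest.reshape(tuple(grid_shape) + (3,))

def lookup_closest_point(cp_map, p):
    """Sub-voxel lookup: trilinearly interpolate the closest-point vectors
    stored at the eight grid nodes surrounding the query point p (assumed
    to lie strictly inside the grid)."""
    i = np.floor(p).astype(int)
    f = p - i
    out = np.zeros(3)
    for c0 in (0, 1):
        for c1 in (0, 1):
            for c2 in (0, 1):
                w = ((f[0] if c0 else 1.0 - f[0]) *
                     (f[1] if c1 else 1.0 - f[1]) *
                     (f[2] if c2 else 1.0 - f[2]))
                out += w * cp_map[i[0] + c0, i[1] + c1, i[2] + c2]
    return out
```

During ICP this turns every correspondence search into a constant-time map lookup, which is the computational appeal of precomputing the map once per target surface.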