Ubiquitous use of 2D ultrasound (US) is limited by the difficulty untrained operators face in interpreting the images. We present a solution for operator guidance through visual cues via registration of US to a 3D model. The method is demonstrated on 2D echocardiography data, where we are able to localize the scan plane relative to the standard planes on the 3D model. Our algorithm operates by pre-processing both the US and CT images down to the most basic information (muscle and blood pool) using classification. Subsequently, these labels are registered using the match cardinality metric for binary labeled images. We evaluated our method on four parasternal long-axis and three parasternal short-axis images from different patients. Results show that our system is able to correctly distinguish between the different US standard views and correctly localizes the scan on the 3D model in five out of seven cases.
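Conceptually, the match cardinality metric scores a candidate alignment by the fraction of voxels whose labels agree between the fixed and moving label images. A minimal NumPy sketch, with a hypothetical helper name and toy 4x4 label maps (not the paper's data or implementation):

```python
import numpy as np

def match_cardinality(fixed_labels, moving_labels):
    """Fraction of voxels whose labels agree between two label images.

    Both inputs are integer label arrays of the same shape
    (e.g. 0 = background, 1 = blood pool, 2 = muscle).
    """
    assert fixed_labels.shape == moving_labels.shape
    return np.mean(fixed_labels == moving_labels)

# Toy example: two 4x4 label maps that differ in 2 of 16 voxels.
fixed = np.array([[0, 0, 1, 1],
                  [0, 2, 2, 1],
                  [0, 2, 2, 1],
                  [0, 0, 1, 1]])
moving = fixed.copy()
moving[0, 0] = 1
moving[3, 3] = 2
print(match_cardinality(fixed, moving))  # 14/16 = 0.875
```

An optimizer would transform the moving label image and maximize this score; because the metric only counts label agreement, it is insensitive to intensity differences between the US and CT sources.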
Physiological activities such as respiration, as well as interventional procedures, non-linearly alter the structural and functional configuration of the hepato-pulmonary system. Structurally, respiration-induced motion poses a significant obstacle to precise target localization in minimally invasive hepato-pulmonary procedures. Current motion-compensating approaches based on image-guided advance-and-check intraoperative systems are inadequate. Spatiotemporal augmentation of intraoperative images with motion maps derived from preoperative scans would provide a reliable roadmap for successful intervention. However, a judicious choice of deformable registration techniques is required to accurately capture organ-specific motion. In this paper, we evaluate a number of oft-cited deformable registration techniques in terms of deformation quality, algorithmic convergence, and per-iteration cost. Recommendations are proposed based on the convergence measures and the smoothness of the motion maps.
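One common way to quantify the quality of a motion map is via the spatial Jacobian of the deformation: negative determinants indicate folding (a physically implausible, non-invertible mapping), and the spread of the determinant reflects how smoothly the field varies. A hypothetical NumPy sketch on a synthetic 2D displacement field (the paper's actual measures may differ):

```python
import numpy as np

def jacobian_determinants(dx, dy, spacing=1.0):
    """Determinant of the Jacobian of x -> x + u(x) at every voxel
    of a 2D displacement field (dx, dy). np.gradient returns the
    derivative along axis 0 (y) first, then axis 1 (x)."""
    ux_y, ux_x = np.gradient(dx, spacing)
    uy_y, uy_x = np.gradient(dy, spacing)
    return (1.0 + ux_x) * (1.0 + uy_y) - ux_y * uy_x

# Smooth synthetic field: a gentle sinusoidal "breathing" motion.
y, x = np.mgrid[0:32, 0:32]
dx = 0.5 * np.sin(2 * np.pi * x / 32)
dy = 0.5 * np.sin(2 * np.pi * y / 32)

det = jacobian_determinants(dx, dy)
print("folding-free:", bool((det > 0).all()))  # True for this field
print("smoothness (std of det):", float(det.std()))
```

A registration run can be compared against others by tracking its similarity metric per iteration (convergence) alongside such field statistics, since a fast-converging method that produces a folded field is of little clinical use.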
Volume rendering has high utility in the visualization of segmented datasets. However, volume rendering of segmentation labels along with the original data causes undesirable intermixing/bleeding artifacts arising from interpolation at sharp label boundaries. This issue is further amplified in 3D texture-based volume rendering, where the interpolation stage is inaccessible. We present an approach that minimizes intermixing artifacts while maintaining the high performance of 3D texture-based volume rendering, both of which are critical for intra-operative visualization. Our approach uses a 2D transfer function based classification scheme in which label distinction is achieved through an encoding that generates unique gradient values for the labels. This ensures that labeled voxels always map to distinct regions in the 2D transfer function, irrespective of interpolation. In contrast to previously reported algorithms, ours does not require multiple rendering passes and supports more than four masks. It also allows real-time modification of the colors/opacities of the segmented structures along with the original data. Additionally, these capabilities come with minimal texture memory requirements compared to similar algorithms. Results are presented on clinical and phantom data.
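The geometric intuition behind keeping labels distinct under interpolation can be sketched as follows. This is a toy CPU-side illustration with an assumed label placement and region size, not the paper's exact encoding: each label is placed at a unique point on a parabola in the 2D transfer-function domain; since no three points on a parabola are collinear, a sample interpolated between any two labels traces a segment that never enters a third label's region.

```python
import numpy as np

LABELS = [0, 1, 2, 3]  # background plus three masks
# Place label k at (v, g) = (k/3, (k/3)^2): unique points on a parabola.
POS = {k: np.array([k / 3.0, (k / 3.0) ** 2]) for k in LABELS}
RADIUS = 0.05  # half-width of each label's region in the 2D TF

def classify(sample):
    """2D transfer-function lookup: return the label whose region
    contains the sample, or None (transparent) for in-between values."""
    for k, p in POS.items():
        if np.linalg.norm(sample - p) <= RADIUS:
            return k
    return None

# A sample exactly on label 2 keeps its identity...
print(classify(POS[2]))  # 2
# ...while a sample interpolated between labels 1 and 3 is never
# misclassified as label 2 (it is transparent or stays 1/3).
for a in np.linspace(0.0, 1.0, 101):
    assert classify(a * POS[1] + (1 - a) * POS[3]) != 2
print("no bleeding into label 2")
```

In the GPU setting the same lookup happens in the fragment shader after hardware trilinear interpolation, which is why the placement of the label regions, rather than the interpolation stage itself, prevents the bleeding.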