With optical cameras, many interventional navigation tasks previously relying on EM, optical, or mechanical guidance
can be performed robustly, quickly, and conveniently. We developed a family of novel guidance systems based on wide-spectrum
cameras and vision algorithms for real-time tracking of interventional instruments and multi-modality markers.
These navigation systems support localization of anatomical targets, guide placement of the imaging probe and
instruments, and provide fusion imaging. The unique architecture – low-cost, miniature, in-hand stereo vision cameras
fitted directly to imaging probes – allows for an intuitive workflow that fits a wide variety of specialties such as
anesthesiology, interventional radiology, interventional oncology, emergency medicine, urology, and others, many of
which see increasing pressure to utilize medical imaging and especially ultrasound, but have yet to develop the requisite
skills for reliable success. We developed a modular system, consisting of hardware (the Optical Head containing the
mini cameras) and software (components for visual instrument tracking with or without specialized visual features, fully automated
marker segmentation from a variety of 3D imaging modalities, visual observation of meshes of widely separated
markers, instant automatic registration, and target tracking and guidance on real-time multi-modality fusion
views). From these components, we implemented a family of distinct clinical and pre-clinical systems (for combinations
of ultrasound, CT, CBCT, and MRI), most of which have international regulatory clearance for clinical use. We present
technical and clinical results on phantoms, ex- and in-vivo animals, and patients.
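The instant automatic registration of observed markers to their imaged counterparts can be illustrated, under the usual rigid-body assumption, by a closed-form least-squares fit of corresponding 3D marker positions (the Kabsch method). This is a generic sketch, not the system's actual implementation; the function name and array shapes are ours.

```python
import numpy as np

def rigid_register(src, dst):
    """Closed-form least-squares rigid registration (Kabsch).

    Finds the rotation R and translation t minimizing
    sum_i ||R @ src[i] + t - dst[i]||^2 over corresponding
    3D marker positions (src, dst: (N, 3) arrays).
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)              # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # reflection guard: keep det(R) = +1
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t
```

Given at least three non-collinear markers, this fit is unique, which is what makes single-shot ("instant") registration possible once the markers are segmented.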
B-mode ultrasound is widely used in liver ablation. However, the necrosis zone is typically not visible under
B-mode ultrasound, since ablation does not necessarily change the acoustic properties of the tissue. In contrast,
the change in tissue stiffness makes elastography ideal for monitoring ablation. Tissue palpation for elastography
is typically applied at the imaging probe, by indenting it slightly into the tissue surface. However, in this paper
we propose an alternate approach, where palpation is applied by a surgical instrument located inside the tissue.
In our approach, the ablation needle is placed inside a steerable device called an active cannula and inserted into
the tissue. A controlled motion is applied to the center of the ablation zone via the active cannula. Since the
type and direction of motion are known, displacement can then be computed from two frames with the desired
motion. The elastography results show the ablated region around the needle.
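The displacement computation from two frames can be sketched with standard windowed cross-correlation along each RF A-line, followed by an axial gradient to obtain strain. Window and search sizes below are illustrative, not the paper's values.

```python
import numpy as np

def axial_displacement(pre, post, win=32, search=8):
    """Estimate axial displacement along one RF A-line by windowed
    normalized cross-correlation (generic sketch)."""
    disp = []
    for start in range(0, len(pre) - win - search, win):
        ref = pre[start:start + win]
        best_lag, best_cc = 0, -np.inf
        for lag in range(-search, search + 1):
            s = start + lag
            if s < 0 or s + win > len(post):
                continue
            seg = post[s:s + win]
            num = np.dot(ref - ref.mean(), seg - seg.mean())
            den = np.linalg.norm(ref - ref.mean()) * np.linalg.norm(seg - seg.mean())
            cc = num / den if den else 0.0
            if cc > best_cc:
                best_cc, best_lag = cc, lag
        disp.append(best_lag)
    return np.array(disp)

def strain(disp):
    """Strain is the axial gradient of the displacement profile."""
    return np.gradient(disp.astype(float))
```

Since the type and direction of the applied motion are known from the active cannula, the correlation search can in principle be restricted to the expected displacement direction, which tightens the search window used above.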
While internal palpation provides excellent local contrast, freehand palpation from outside of the tissue via
the transducer can provide a more global view of the region of interest. For this purpose, we used a tracked
3D transducer to generate volumetric elastography images covering the ablated region. The tracking information
is employed to improve the elastography results by selecting volume pairs suitable for elastography. This is an
extension of our 2D frame selection technique, which can cope with uncertainties associated with intra-operative
elastography. In our experiments with phantom and ex-vivo tissue, we were able to generate high-quality images
depicting the boundaries of the hard lesions.
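The tracking-based selection of volume pairs suitable for elastography can be sketched as a filter on tracked probe motion: keep pairs whose relative displacement is mostly axial compression within a useful range, with little lateral or elevational slip. The thresholds and the axis convention below are assumptions for illustration.

```python
import numpy as np

def select_pairs(positions, axial_range=(0.5, 2.0), lateral_tol=0.5):
    """Pick volume pairs whose tracked probe motion is mostly axial
    compression within a useful range (a simplified stand-in for the
    tracking-based selection). `positions` is an (N, 3) array of probe
    positions in mm; axis 2 is taken as the axial direction."""
    pairs = []
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            d = positions[j] - positions[i]
            axial = abs(d[2])
            lateral = np.hypot(d[0], d[1])       # in-plane slip
            if axial_range[0] <= axial <= axial_range[1] and lateral <= lateral_tol:
                pairs.append((i, j))
    return pairs
```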
Registration is a key technology in image-guided navigation systems. By aligning pre-operative images with the
intra-operative setting, these systems provide visual feedback that improves the physician's understanding of the
spatial relationships between anatomical structures and surgical tools. Most often the alignment is obtained
using fiducials. Another option is to replace the use of fiducials with intra-operative imaging. Two dimensional
ultrasound (US) is a widely available intra-operative non-ionizing imaging modality. To utilize this modality for
registration, one must first perform spatial calibration of the US probe. In this work, we describe the implementation
of three spatial calibration methods as part of the image-guided surgery toolkit (IGSTK). The implementation
follows the IGSTK calibration framework, separating algorithmic aspects from user interaction aspects of the
calibration. Our calibration framework includes three methods. The first is a phantom-less method using a
tracked pointer tool in addition to the tracked US, the second method uses a cross-wire phantom, and the third
method is based on the use of a plane phantom.
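For the cross-wire method, each tracked pose in which the wire crossing is imaged constrains the mapped pixel to coincide with one fixed point in tracker space; stacking these constraints gives a linear least-squares problem. The sketch below is a generic formulation, not IGSTK code: it models the image-to-probe mapping as two scaled in-plane axes a1, a2 plus an origin b, and solves jointly for the crossing point p.

```python
import numpy as np

def calibrate_crosswire(poses, image_pts):
    """Least-squares cross-wire calibration sketch.

    Each pose (R, t) maps probe coordinates to tracker coordinates;
    image_pts are 2D pixel locations of the wire crossing. For each
    observation, R @ (x*a1 + y*a2 + b) + t = p, which is linear in the
    unknowns z = [a1, a2, b, p] (12 values, 3 equations per pose)."""
    rows, rhs = [], []
    for (R, t), (x, y) in zip(poses, image_pts):
        rows.append(np.hstack([x * R, y * R, R, -np.eye(3)]))
        rhs.append(-t)
    z, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs), rcond=None)
    a1, a2, b, p = z[0:3], z[3:6], z[6:9], z[9:12]
    return a1, a2, b, p
```

At least four poses with distinct orientations are needed for a unique solution; with noisy segmentations, the recovered axes a1, a2 can additionally be re-orthogonalized.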
Despite the success of ultrasound elasticity imaging (USEI) in medical applications such as diagnosis and screening
of breast lesions and prostate cancer, USEI has not been adopted in routine clinical procedures. This is partly
caused by the difficulty in acquiring reliable images and interpreting them, the lack of consistency over time,
and the dependence of image quality on the expertise of the user. We previously demonstrated the potential of
exploiting an external tracker to partially alleviate these issues and enhance the quality of USEI. The tracking
data enabled fast and automatic selection of pairs of RF frames used in strain calculation. Here, we expand this
method by including new features. The proposed method employs image content to compensate for the limited
accuracy of the tracking device. It also combines multiple strain images to improve the quality of the final image.
For this purpose, it normalizes the images and determines which images can be combined relying on the tracking
information. We have acquired RF frames synchronized with tracking data from pig livers containing an
ablated region and from a breast phantom, using two different tracking devices: an optical tracker and a less accurate
electromagnetic tracker. We present the promising results of the proposed method and investigate the sensitivity
of the frame selection technique to tracking inaccuracies when image content is not used.
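The normalization-and-combination step can be sketched as quality-weighted compounding of strain images that have each been normalized to zero mean and unit variance. The weighting scheme below is an assumption for illustration, not the paper's exact formulation.

```python
import numpy as np

def fuse_strain(strain_images, qualities):
    """Normalize each strain image (zero mean, unit std), then fuse by a
    quality-weighted average. Normalization removes the arbitrary scale
    differences between strain images computed from different frame
    pairs, so they can be averaged meaningfully."""
    fused = np.zeros_like(np.asarray(strain_images[0], float))
    wsum = 0.0
    for img, q in zip(strain_images, qualities):
        img = np.asarray(img, float)
        fused += q * (img - img.mean()) / (img.std() + 1e-12)
        wsum += q
    return fused / wsum
```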
Identifying the proper orientation of the pelvis is a critical step in accurate placement of the femoral prosthesis in the
acetabulum in Total Hip Replacement (THR) surgeries. The general approach to localize the orientation of the pelvis
coordinate system is to use X-ray fluoroscopy to guide the procedure. An alternative is to employ intra-operative
ultrasound (US) imaging with a pre-operative CT scan or fluoroscopy imaging. In this paper, we propose to eliminate the
need for pre-operative imaging by using a statistical shape model of the pelvis, constructed from several CT images. We
then propose an automatic deformable intensity-based registration of the anatomical atlas to a sparse set of 2D
ultrasound images of the pelvis in order to localize its anatomical coordinate system. In this registration technique, we
first extract a set of 2D slices from a single instance of the pelvic atlas. Each individual 2D slice is generated based on
the location of a corresponding 2D ultrasound image. Next, we create simulated ultrasound images out of the 2D atlas
slices and calculate a similarity metric between the simulated images and the actual ultrasound images. The similarity
metric guides an optimizer to generate an instance of the atlas that best matches the ultrasound data. We demonstrated
the feasibility of our proposed approach on data from two male human cadavers. The registration was able to localize a
patient-specific pelvic coordinate system with origin translation errors of 2 mm and 3.45 mm, and average axis rotation
errors of 3.5 degrees and 3.9 degrees for the two cadavers, respectively.
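The metric-driven search can be sketched with normalized cross-correlation (NCC) between simulated and real slices, with an exhaustive loop over candidate atlas parameters standing in for the actual optimizer. `generate_slices` is a hypothetical callable for the atlas-instantiation and US-simulation steps.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two images."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def best_instance(generate_slices, us_images, candidate_params):
    """Exhaustive stand-in for the optimizer: keep the candidate atlas
    parameters whose simulated-US slices score the highest mean NCC
    against the real ultrasound images."""
    best_p, best_s = None, -np.inf
    for p in candidate_params:
        sims = generate_slices(p)
        score = float(np.mean([ncc(s, u) for s, u in zip(sims, us_images)]))
        if score > best_s:
            best_p, best_s = p, score
    return best_p, best_s
```

In practice a gradient-free optimizer over the continuous shape-model coefficients would replace the exhaustive loop; the similarity metric plays the same role either way.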
Registration of pre-operative CT and freehand intra-operative ultrasound of the lumbar spine could aid surgeons in
spinal needle injection, a common procedure for pain management. Patients lie supine during the CT scan, but are
in a prone or sitting position during the intervention. This leads to a difference
in the spinal curvature between the two imaging modalities, which means a single rigid registration cannot be
used for all of the lumbar vertebrae. In this work, a method for group-wise registration of pre-operative CT and
intra-operative freehand 2-D ultrasound images of the lumbar spine is presented. The approach utilizes a point-based
registration technique based on the unscented Kalman filter, taking as input segmented vertebrae surfaces
in both CT and ultrasound data. Ultrasound images are automatically segmented using a dynamic programming
approach, while the CT images are semi-automatically segmented using thresholding. Since the curvature of the
spine is different between the pre-operative and the intra-operative data, the registration approach is designed to
simultaneously align individual groups of points segmented from each vertebra in the two imaging modalities. A
biomechanical model is used to constrain the vertebrae transformation parameters during the registration and to
ensure convergence. The mean target registration error achieved for individual vertebrae on five spine phantoms
generated from patient CT data is 2.47 mm, with a standard deviation of 1.14 mm.
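One simple form a biomechanical constraint can take is a bound on the relative rotation between adjacent vertebra poses; the sketch below is illustrative only, not the paper's actual model, and the 15-degree bound is an assumed value.

```python
import numpy as np

def relative_rotation_deg(Ra, Rb):
    """Angle (degrees) of the relative rotation between two vertebra
    poses, from the trace identity trace(R) = 1 + 2*cos(angle)."""
    R = Ra.T @ Rb
    c = (np.trace(R) - 1.0) / 2.0
    return float(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))

def plausible(transforms, max_deg=15.0):
    """Check each adjacent vertebra pair (list of (R, t) tuples, ordered
    along the spine) against the rotation bound."""
    return all(relative_rotation_deg(Ra, Rb) <= max_deg
               for (Ra, _), (Rb, _) in zip(transforms, transforms[1:]))
```

During group-wise optimization, such a check (or a soft penalty built from the same angle) keeps individual vertebra transforms from drifting into anatomically impossible configurations.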
Breast irradiation significantly reduces the risk of recurrence of cancer. There is growing evidence suggesting that
irradiation of only the involved area of the breast, partial breast irradiation (PBI), is as effective as whole breast
irradiation. Benefits of PBI include shortened treatment time, and perhaps fewer side effects as less tissue is
treated. However, these benefits cannot be realized without precise and accurate localization of the lumpectomy
cavity. Several studies have shown that accurate delineation of the cavity in CT scans is very challenging and
the delineated volumes differ dramatically over time and among users.
In this paper, we propose utilizing 3D ultrasound (3D-US) and tracked strain images as complementary
modalities to reduce uncertainties associated with the current CT planning workflow. We present an early version
of an integrated system that fuses 3D-US and real-time strain images. For the first time, we employ tracking
information to reduce noise in strain image calculation by choosing properly compressed frames and
to position the strain image within the ultrasound volume. Using this system, we provide the tools to retrieve
additional information from 3D-US and strain images alongside the CT scan. We have preliminarily evaluated
our proposed system in a step-by-step fashion using a breast phantom and in clinical experiments.
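Positioning a tracked 2D strain image within the ultrasound volume reduces to composing the probe calibration and tracking transforms. The 4x4 homogeneous matrix names and pixel spacing below are illustrative, not the system's actual identifiers.

```python
import numpy as np

def pixel_to_volume(T_vol_probe, T_probe_image, px_mm, u, v):
    """Map a pixel (u, v) of a tracked 2D strain image into the 3D
    ultrasound volume frame. T_probe_image is the (calibrated)
    image-to-probe transform, T_vol_probe the tracked probe-to-volume
    transform; both are 4x4 homogeneous matrices, px_mm the pixel
    spacing in mm."""
    p_img = np.array([u * px_mm, v * px_mm, 0.0, 1.0])  # image plane at z = 0
    return (T_vol_probe @ T_probe_image @ p_img)[:3]
```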
In image-guided bone surgery, sample points collected from the surface of the bone are registered to the pre-operative
CT model using well-known registration methods such as Iterative Closest Point (ICP). These techniques
are generally very sensitive to the initial alignment of the datasets. Poor initialization significantly increases
the chances of getting trapped in local minima. To reduce this risk, the registration
is typically initialized manually by locating the sample points close to the corresponding points on the CT model.
In this paper, we present an automatic initialization method that aligns the sample points collected from the
surface of the pelvis with the CT model of the pelvis. The main idea is to exploit a mean shape of the pelvis, created from a
large number of CT scans, as prior knowledge to guide the initial alignment. The mean shape is constant for
all registrations and facilitates the inclusion of application-specific information into the registration process. The
CT model is first aligned with the mean shape using the bilateral symmetry of the pelvis and the similarity of
multiple projections. The surface points collected using ultrasound are then aligned with the pelvis mean shape.
This will, in turn, lead to initial alignment of the sample points with the CT model. The experiments using a
dry pelvis and two cadavers show that the method can align the randomly dislocated datasets close enough for
successful registration. Standard ICP is then used for the final registration of the datasets.
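The final step is standard point-to-point ICP, which alternates nearest-neighbour matching with a closed-form rigid update; a minimal generic sketch, not the authors' implementation:

```python
import numpy as np

def icp(src, dst, iters=20):
    """Minimal point-to-point ICP: match each source point to its
    nearest destination point (brute force), then apply the Kabsch
    closed-form rigid update; repeat. Returns the accumulated R, t."""
    R, t = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        # nearest neighbour in dst for each current point
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        match = dst[d2.argmin(axis=1)]
        # closed-form rigid update (Kabsch) on the matched pairs
        cs, cm = cur.mean(0), match.mean(0)
        H = (cur - cs).T @ (match - cm)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        Rk = Vt.T @ D @ U.T
        tk = cm - Rk @ cs
        cur = cur @ Rk.T + tk
        R, t = Rk @ R, Rk @ t + tk
    return R, t
```

This is exactly why the automatic initialization above matters: ICP's nearest-neighbour matching is only trustworthy once the datasets start close together.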