Previous studies have shown that guidance systems improve accuracy and reduce skill variation among physicians during bronchoscopy. However, most of these systems suffer from one or more of the following limitations: 1) an attending technician must carefully keep the system position synchronized with the bronchoscope position during the procedure; 2) extra bronchoscope tracking hardware may be required; 3) guidance cannot take place in real time; 4) the guidance system is unable to detect and correct faulty bronchoscope maneuvers; and 5) a resynchronization procedure must be followed after adverse events such as patient cough or dynamic airway collapse. Here, we propose an image-based system for technician-free bronchoscopy guidance that relies on two features. First, our system precomputes a guidance plan that suggests natural bronchoscope maneuvers at every bifurcation leading toward a region of interest (ROI). Second, our system enables bronchoscope position verification that relies on a global-registration algorithm to establish the global bronchoscope position and, thus, provide the physician with updated navigational information during bronchoscopy. The system can handle general navigation to an ROI, as well as adverse events, and is controlled directly by the physician via a foot pedal. Guided bronchoscopy results using airway-tree phantoms and human cases demonstrate the efficacy of the system.
Minimally invasive surgery is a highly complex medical discipline that poses several difficulties for the surgeon. To alleviate these difficulties, augmented reality can be used for intraoperative assistance. For visualization, the endoscope pose must be known; it can be estimated with a SLAM (Simultaneous Localization and Mapping) approach using the endoscopic images. In this paper, we focus on feature tracking for SLAM in minimally invasive surgery. Robust feature tracking and minimization of false correspondences are crucial for localizing the endoscope. As sensory input we use a stereo endoscope and evaluate different feature types in our SLAM framework. The accuracy of the endoscope pose estimation is validated with synthetic and ex vivo data. Furthermore, we test the approach on in vivo image sequences from da Vinci interventions.
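As an illustration of this step, the sketch below (not the authors' code; ORB features and the 0.75 ratio threshold are assumptions chosen for illustration) matches features between the left and right frames of a stereo endoscope with OpenCV and filters false correspondences with Lowe's ratio test.

```python
# Hypothetical sketch: stereo feature matching with ORB + ratio test.
import cv2

def match_stereo_features(left_gray, right_gray, ratio=0.75):
    """left_gray/right_gray: 8-bit grayscale endoscope frames."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp_l, des_l = orb.detectAndCompute(left_gray, None)
    kp_r, des_r = orb.detectAndCompute(right_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    # two nearest neighbours per descriptor for Lowe's ratio test
    knn = matcher.knnMatch(des_l, des_r, k=2)
    good = [p[0] for p in knn
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    pts_l = [kp_l[m.queryIdx].pt for m in good]
    pts_r = [kp_r[m.trainIdx].pt for m in good]
    return pts_l, pts_r  # matched pixel coordinates for triangulation
```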
Automatic co-alignment of optical and virtual colonoscopy images can supplement traditional endoscopic procedures by providing more complete information of clinical value to the gastroenterologist. In this work, we present a comparative analysis of our optical-flow-based technique for colonoscopy tracking, in relation to current state-of-the-art methods, in terms of tracking accuracy, system stability, and computational efficiency. Our optical-flow-based colonoscopy tracking algorithm starts with computing multi-scale dense and sparse optical flow fields to measure image displacements. Camera motion parameters are then determined from the optical flow fields by employing a Focus of Expansion (FOE) constrained egomotion estimation scheme. We analyze the design choices involved in the three major components of our algorithm: dense optical flow, sparse optical flow, and egomotion estimation. Brox's optical flow method [1], due to its high accuracy, was used to compare and evaluate our multi-scale dense optical flow scheme. SIFT [6] and Harris-affine features [7] were used to assess the accuracy of the multi-scale sparse optical flow, because of their wide use in tracking applications; the FOE-constrained egomotion estimation was compared with collinear [2], image deformation [10], and image derivative [4] based egomotion estimation methods, to understand the stability of our tracking system. Two virtual colonoscopy (VC) image sequences were used in the study, since the exact camera parameters (for each frame) were known. Dense optical flow results indicated that Brox's method was superior to multi-scale dense optical flow in estimating camera rotational velocities, but the final tracking errors were comparable, viz., 6 mm vs. 8 mm after the VC camera traveled 110 mm. Our approach was computationally more efficient, averaging 7.2 sec. vs. 38 sec. per frame. SIFT and Harris-affine features resulted in tracking errors of up to 70 mm, while our sparse optical flow error was 6 mm. The comparison among egomotion estimation algorithms showed that our FOE-constrained egomotion estimation method achieved the best balance between tracking accuracy and robustness. The comparative study demonstrated that our optical-flow-based colonoscopy tracking algorithm maintains good accuracy and stability for routine use in clinical practice.
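To make the FOE-constrained egomotion idea concrete, here is a hedged sketch, assuming OpenCV's Farneback method as a stand-in for the paper's multi-scale dense flow and a pure-translation model in which every flow vector radiates from the FOE:

```python
# Illustrative sketch: dense flow + least-squares FOE estimation.
import cv2
import numpy as np

def estimate_foe(prev_gray, next_gray):
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_gray.shape
    ys, xs = np.mgrid[0:h, 0:w]
    u, v = flow[..., 0].ravel(), flow[..., 1].ravel()
    x, y = xs.ravel().astype(float), ys.ravel().astype(float)
    # keep only the most confident (largest) flow vectors
    mag = np.hypot(u, v)
    keep = mag > np.percentile(mag, 75)
    # a vector radiating from (x0, y0) satisfies v*x0 - u*y0 = v*x - u*y
    A = np.column_stack([v[keep], -u[keep]])
    b = v[keep] * x[keep] - u[keep] * y[keep]
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe  # (x0, y0) in pixels
```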
This paper describes a navigation system for flexible endoscopes equipped with ultrasound scan heads. In contrast to similar systems, additional abdominal 3D ultrasound (US) images are used to achieve the required image fusion. An abdominal 3D US image is acquired preoperatively, before the CT scan. The CT is calibrated by means of the optical tracking system (OTS), so the transformation between CT and the calibrated 3D US can be calculated without image registration. Immediately before the intervention takes place, a pre-interventional 3D US, tracked with an electromagnetic tracking system (EMTS), is acquired and registered intra-modally to the preoperative US. We can therefore replace a direct 2D/3D registration from the endoscopic US to the preoperative CT with an intra-modal US-US registration and tracker calibrations. To account for tissue deformation, we implemented an approach using leading points. First, the US images are pre-processed by calculating importance images. The image information is then reduced to a set of expressive leading points computed from the importance image. Once the vector field of corresponding leading points is found, a deformation field can be calculated. We found a target registration error for the whole transformation chain, from a US pixel to a CT voxel, of 4.34 ± 2.56 mm (ten targets) on a phantom without deformation.
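The replacement of a direct 2D/3D registration by a chain of calibrated transforms can be sketched as matrix composition; all matrix names below are hypothetical placeholders for the calibrations and the US-US registration described above.

```python
# Illustrative sketch: mapping a US pixel to a CT voxel by composing
# 4x4 homogeneous transforms (names are hypothetical placeholders).
import numpy as np

def chain(*transforms):
    """Compose 4x4 homogeneous transforms, applied right to left."""
    T = np.eye(4)
    for t in transforms:
        T = T @ t
    return T

# T_ct_ots: OTS -> CT (CT calibration); T_ots_us3d: preop 3D US -> OTS;
# T_us3d_preint: pre-interventional US -> preop US (intra-modal US-US
# registration); T_preint_emts, T_emts_pixel: EMTS tracker calibrations.
# T_ct_pixel = chain(T_ct_ots, T_ots_us3d, T_us3d_preint,
#                    T_preint_emts, T_emts_pixel)
```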
Image-guided surgery (IGS) has led to significant advances in surgical procedures and outcomes. Endoscopic IGS is hindered, however, by the lack of suitable intraoperative scanning technology for registration with preoperative tomographic image data. This paper describes the implementation of an endoscopic laser range scanner (eLRS) system for accurate intraoperative mapping of the kidney surface, registration of the measured kidney surface with preoperative tomographic images, and interactive image-based surgical guidance for subsurface lesion targeting. The eLRS comprises a standard stereo endoscope coupled to a steerable laser, which sweeps a laser fan beam across the kidney surface, and a high-speed color camera, which records the laser-illuminated pixel locations on the kidney. Through calibrated triangulation, a dense set of 3-D surface coordinates is determined. At maximum resolution, the eLRS acquires over 300,000 surface points in less than 15 seconds. Lower-resolution scans of 27,500 points are acquired in one second. Measurement accuracy of the eLRS, determined by scanning reference planar and spherical phantoms, is estimated to be 0.38 ± 0.27 mm at a range of 2 to 6 cm. Registration of the scanned kidney surface with preoperative image data is achieved using a modified iterative closest point algorithm. Surgical guidance is provided through graphical overlay of the boundaries of subsurface lesions, vasculature, ducts, and other renal structures labeled in the CT or MR images onto the eLRS camera image. Depth to these subsurface targets is also displayed. Proof of clinical feasibility has been established in an explanted perfused porcine kidney experiment.
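The paper uses a modified iterative closest point (ICP) algorithm; as background, here is a minimal textbook ICP iteration (nearest-neighbour matching plus a closed-form SVD/Kabsch alignment), illustrative rather than the authors' modified variant:

```python
# Minimal rigid ICP sketch: align a scanned surface to a reference surface.
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=50):
    """source, target: (N,3) and (M,3) point clouds."""
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iters):
        _, idx = tree.query(src)          # closest target point per source point
        tgt = target[idx]
        mu_s, mu_t = src.mean(0), tgt.mean(0)
        H = (src - mu_s).T @ (tgt - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t               # apply the incremental rigid update
    return src
```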
Image-Guided Radiation Therapy (IGRT) aims at increasing the precision of radiation dose delivery. In the context of prostate cancer, a planning Computed Tomography (CT) image with manually defined prostate and organs-at-risk (OAR) delineations is usually associated with daily Cone Beam Computed Tomography (CBCT) follow-up images. The CBCT images make it possible to visualize the prostate position and to reposition the patient accordingly. They should also be used to evaluate the dose received by the organs at each fraction of the treatment. To do so, the first step is prostate and OAR segmentation on the daily CBCTs, which is very time-consuming. To simplify this task, CT-to-CBCT non-rigid registration can be used to propagate the original CT delineations to the CBCT images. To this end, we compared several non-rigid registration methods. They are all based on the Mutual Information (MI) similarity measure and use a B-spline transformation model, but we add different constraints to this global scheme in order to evaluate their impact on the final results. These algorithms are investigated on two real datasets, representing a total of 70 CBCTs on which reference delineations have been drawn. The evaluation uses the Dice Similarity Coefficient (DSC) as the quality criterion. The experiments show that a rigid penalty term on the bones improves the final registration result, providing high-quality propagated delineations.
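A minimal sketch of the shared MI + B-spline core of the compared methods, written against the SimpleITK API (the bone rigidity penalty is not shown; such penalties are available in, e.g., elastix):

```python
# Hedged sketch: Mattes MI + B-spline registration core (SimpleITK).
import SimpleITK as sitk

def bspline_mi_register(fixed, moving, grid=(8, 8, 8)):
    """fixed: planning CT, moving: daily CBCT, both sitk.Image (float32)."""
    tx = sitk.BSplineTransformInitializer(fixed, list(grid))
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsLBFGSB(gradientConvergenceTolerance=1e-5,
                             numberOfIterations=100)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetInitialTransform(tx, inPlace=True)
    return reg.Execute(fixed, moving)  # optimized B-spline transform
```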
Multimodal registration of intraoperative ultrasound and preoperative contrast-enhanced computed tomography (CT) imaging is the basis for image-guided percutaneous hepatic interventions. Currently, the surgeon manually performs a rigid registration using vessel structures and other anatomical landmarks for visual guidance. We have previously presented our approach for automating this intraoperative registration step based on the definition of bijective correspondences between the vessel structures using automatic graph matching [1]. This paper describes our method for refinement and expansion of the matched vessel graphs, resulting in a high number of bijective correspondences. Based on these landmarks, we could extend our method to a fully deformable registration. Our system was applied successfully to CT and ultrasound data of nine patients, which are studied in this paper. The number of corresponding vessel points could be raised from a mean of 9.6 points after the graph matching to 70.2 points using the presented refinement method. This allows for the computation of a smooth deformation field. Furthermore, we show that our deformation calculation raises the registration accuracy for 3 of the 4 chosen target vessels in pre-/postoperative CT, with a mean accuracy improvement of 44%.
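Given such dense point correspondences, a smooth deformation field can be obtained, for example, with a thin-plate-spline interpolant; the following is a generic SciPy sketch, not the authors' implementation.

```python
# Illustrative sketch: smooth deformation field from matched vessel landmarks.
import numpy as np
from scipy.interpolate import RBFInterpolator

def dense_deformation(us_pts, ct_pts, query_pts):
    """us_pts, ct_pts: (N,3) corresponding landmarks; query_pts: (M,3)."""
    tps = RBFInterpolator(us_pts, ct_pts - us_pts,
                          kernel='thin_plate_spline', smoothing=1e-3)
    return tps(query_pts)  # displacement vector at each query point
```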
A fast knowledge-based radioactive seed localization method for brachytherapy was developed to automatically localize radioactive seeds in an intraoperative volumetric cone beam CT (CBCT) so that corrections, if needed, can be made during prostate implant surgery. A transrectal ultrasound (TRUS) scan is acquired for intraoperative treatment planning. Planned seed positions are transferred to the intraoperative CBCT following TRUS-to-CBCT registration, using a reference CBCT scan of the TRUS probe as a template in which the probe and its external fiducial markers are pre-segmented and their positions in TRUS are known. The transferred planned seeds and probe serve as an atlas to reduce the search space in CBCT. Candidate seed voxels are identified based on image intensity. Regions are grown from candidate voxels and overlapping regions are merged. Region volume and intensity variance are checked against the known seed volume and intensity profile. Regions meeting the above criteria are flagged as detected seeds; otherwise they are flagged as likely seeds and sorted by a score based on volume, intensity profile, and distance to the closest planned seed. A graphical interface allows users to review and accept or reject likely seeds. Likely seeds with approximately twice the seed volume are automatically split. Five clinical cases were tested. Without any manual correction in seed detection, the method performed the localization in 5 seconds (excluding registration time) for a CBCT scan with 512×512×192 voxels. The average precision rate per case is 99% and the recall rate is 96% for a total of 416 seeds. All false-negative seeds were recovered: 15 among the likely seeds and 1 included in a detected seed.
With the new method, the dose distribution can be recalculated during the procedure, facilitating evaluation and improvement of treatment quality.
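A minimal sketch of the candidate-detection step (the intensity threshold and volume tolerance are illustrative assumptions; the atlas-based search-space reduction, scoring, and splitting are omitted):

```python
# Hedged sketch: threshold, label regions, and check seed-like criteria.
import numpy as np
from scipy import ndimage

def detect_seed_candidates(cbct, intensity_thresh, seed_vol_mm3,
                           voxel_vol_mm3, vol_tol=0.5):
    mask = cbct > intensity_thresh
    labels, n = ndimage.label(mask)       # connected regions (merged overlaps)
    seeds, likely = [], []
    for i in range(1, n + 1):
        vox = np.argwhere(labels == i)
        vol = len(vox) * voxel_vol_mm3
        centroid = vox.mean(axis=0)
        if abs(vol - seed_vol_mm3) <= vol_tol * seed_vol_mm3:
            seeds.append(centroid)        # matches known seed volume
        else:
            likely.append(centroid)       # scored / reviewed downstream
    return seeds, likely
```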
The lack of dynamic dosimetry tools for permanent prostate brachytherapy causes otherwise avoidable problems in prostate cancer patient care. The goal of this work is to satisfy this need in a readily adoptable manner. Using the ubiquitous ultrasound scanner and a mobile non-isocentric C-arm, we show that dynamic dosimetry is now possible with only the addition of an arbitrarily configured marker-based fiducial. Not only is the system easily configured from accessible hardware, but it is also simple and convenient, requiring little training of technicians. Furthermore, the proposed system is built upon robust algorithms for seed segmentation, fiducial detection, seed reconstruction, and image registration. All individual steps of the pipeline have been thoroughly tested, and the system as a whole has been validated in a study of 25 patients. The system computes dose accurately and with minimal manual intervention, and therefore shows promise for widespread adoption of dynamic dosimetry.
In this work, we present a novel, automated, registration method to fuse magnetic resonance imaging (MRI) and transrectal ultrasound (TRUS) images of the prostate. Our methodology consists of: (1) delineating the prostate on MRI, (2) building a probabilistic model of prostate location on TRUS, and (3) aligning the MRI prostate
segmentation to the TRUS probabilistic model. TRUS-guided needle biopsy is the current gold standard for prostate cancer (CaP) diagnosis. Up to 40% of CaP lesions appear isoechoic on TRUS, hence TRUS-guided biopsy cannot reliably target CaP lesions and is associated with a high false negative rate. MRI is better able to distinguish CaP from benign prostatic tissue, but requires special equipment and training. MRI-TRUS fusion, whereby MRI is acquired pre-operatively and aligned to TRUS during the biopsy procedure, allows for information from both modalities to be used to help guide the biopsy. The use of MRI and TRUS in combination to guide biopsy at least doubles the yield of positive biopsies. Previous work on MRI-TRUS fusion has involved aligning manually determined fiducials or prostate surfaces to achieve image registration. The accuracy of these methods is dependent on the reader’s ability to determine fiducials or prostate surfaces with minimal error, which is a difficult and time-consuming task. Our novel, fully automated MRI-TRUS fusion method represents a significant advance over the current state-of-the-art because it does not require manual intervention after TRUS acquisition. All necessary preprocessing steps (i.e. delineation of the prostate on MRI) can be performed offline prior to the biopsy procedure. We evaluated our method on seven patient studies, with B-mode TRUS and a 1.5 T surface coil MRI. Our method has a root mean square error (RMSE) for expertly selected fiducials (consisting of the urethra, calcifications, and the centroids of CaP nodules) of 3.39 ± 0.85 mm.
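As an illustration of step (3), a simplified stand-in for the alignment (not the authors' optimizer) is to slide the binary MRI prostate mask over the TRUS probability map and keep the translation that maximizes the expected overlap; FFT-based correlation makes the search cheap.

```python
# Illustrative sketch: translation that best aligns a mask to a probability map.
import numpy as np
from scipy.signal import fftconvolve

def best_translation(mask, prob):
    """mask: binary MRI prostate mask; prob: TRUS probability map
    (same shape, mask roughly centered in its volume)."""
    # correlation of prob with mask == convolution with the flipped mask
    corr = fftconvolve(prob, mask[::-1, ::-1, ::-1].astype(float), mode='same')
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    centre = tuple(s // 2 for s in mask.shape)
    return tuple(p - c for p, c in zip(peak, centre))  # voxel offset
```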
In the context of open abdominal image-guided liver surgery, the efficacy of an image-guidance system relies on its ability to (1) accurately depict tool locations with respect to the anatomy, and (2) maintain the workflow of the surgical team. Laser-range scanned (LRS) partial surface measurements can be taken intraoperatively with relatively little impact on the surgical workflow, as opposed to other intraoperative imaging modalities. Previous research has demonstrated that this kind of partial surface data may be (1) used to drive a rigid registration of the preoperative CT image volume to intraoperative patient space, and (2) extrapolated and combined with a tissue-mechanics-based organ model to drive a non-rigid registration, thus compensating for organ deformations. In this paper we present a novel approach for intraoperative non-rigid liver registration which iteratively reconstructs a displacement field on the posterior side of the organ in order to minimize the error between the deformed model and the intraoperative surface data. Experimental results with a phantom liver undergoing large deformations demonstrate that this method achieves target registration errors (TRE) with a mean of 4.0 mm in the prediction of a set of 58 locations inside the phantom, which represents a 50% improvement over rigid registration alone, and a 44% improvement over the prior non-iterative single-solve method of extrapolating boundary conditions via a surface Laplacian.
Minimally invasive catheter ablation has become the preferred treatment option for atrial fibrillation. Although the standard ablation procedure involves ablation points set by radio-frequency catheters, cryo-balloon catheters have been reported to be even more advantageous in certain cases. As electro-anatomical mapping systems do not support cryo-balloon ablation procedures, X-ray guidance is needed. However, current methods to provide support for cryo-balloon catheters in fluoroscopically guided ablation procedures rely heavily on manual user interaction. To improve this, we propose a first method for automatic cryo-balloon catheter localization in fluoroscopic images based on a blob detection algorithm. Our method is evaluated on 24 clinical images from 17 patients. The method successfully detected the cryo-balloon in 22 out of 24 images, yielding a success rate of 91.6%. The successful localizations achieved an accuracy of 1.00 mm ± 0.44 mm. Even though our method currently fails in 8.4% of the available images, it still offers a significant improvement over manual methods. Furthermore, detecting a landmark point along the cryo-balloon catheter can be an important step for additional post-processing operations.
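A hedged sketch of the localization idea using a Laplacian-of-Gaussian blob detector from scikit-image (the radius range, inversion, and threshold below are illustrative assumptions, not the paper's parameters):

```python
# Illustrative sketch: find the roughly circular balloon in a fluoro frame.
import numpy as np
from skimage.feature import blob_log

def locate_balloon(fluoro, min_r_px=10, max_r_px=40):
    """fluoro: 2D grayscale fluoroscopic frame."""
    inv = fluoro.max() - fluoro.astype(float)   # balloon appears dark
    inv /= inv.max() + 1e-12                    # normalize to [0, 1]
    blobs = blob_log(inv, min_sigma=min_r_px / np.sqrt(2),
                     max_sigma=max_r_px / np.sqrt(2),
                     num_sigma=10, threshold=0.1)
    if len(blobs) == 0:
        return None
    y, x, sigma = max(blobs, key=lambda b: b[2])  # largest detected blob
    return x, y, sigma * np.sqrt(2)               # centre and radius (px)
```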
Minimally invasive catheter ablation is a common treatment option for atrial fibrillation. A common treatment strategy is pulmonary vein isolation. In this case, individual ablation points need to be placed around the ostia of the pulmonary veins attached to the left atrium to generate transmural lesions and thereby block electric signals. To achieve a durable transmural lesion, the tip of the catheter has to be stable, with sufficient tissue contact, during radio-frequency ablation. Besides the steerable interface operated by the physician, the movement of the catheter is also influenced by heart and breathing motion, particularly during ablation. In this paper, we investigate the influence of breathing motion on different areas of the endocardium during radio-frequency ablation. To this end, we analyze the frequency spectrum of the continuous catheter contact force to identify areas with increased breathing motion using a classification method. This approach has been applied to clinical patient data acquired during three pulmonary vein isolation procedures. Initial findings show that motion due to respiration is more pronounced at the roof and around the right pulmonary veins.
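One plausible reading of the frequency-spectrum feature, sketched below with assumed band edges (roughly 0.1-0.5 Hz for respiration and 0.8-3 Hz for cardiac motion; the paper's exact classifier is not reproduced here):

```python
# Hedged sketch: compare respiratory vs cardiac band power of contact force.
import numpy as np

def breathing_dominated(force, fs, resp_band=(0.1, 0.5),
                        cardiac_band=(0.8, 3.0)):
    """force: 1D contact-force samples; fs: sampling rate in Hz."""
    f = np.fft.rfftfreq(len(force), d=1.0 / fs)
    p = np.abs(np.fft.rfft(force - np.mean(force))) ** 2
    def band_power(lo, hi):
        return p[(f >= lo) & (f < hi)].sum()
    return band_power(*resp_band) > band_power(*cardiac_band)
```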
In spite of significant efforts to enhance guidance for catheter navigation, limited research has considered the changes that occur in the tissue during ablation as a means to provide useful feedback on the progression of therapy delivery. We propose a technique to visualize lesion progression and monitor the effects of RF energy delivery using a surrogate thermal ablation model. The model incorporates both physical and physiological tissue parameters, and uses heat-transfer principles to estimate the temperature distribution in the tissue and the geometry of the generated lesion in near real time. The ablation model has been calibrated and evaluated using ex vivo beef muscle tissue in a clinically relevant ablation protocol. To validate the model, the predicted temperature distribution was assessed against that measured directly using fiber-optic temperature probes inserted in the tissue. Moreover, the model-predicted lesions were compared to the lesions observed in post-ablation digital images. Results showed agreement within 5°C between the model-predicted and experimentally measured tissue temperatures, as well as comparable predicted and observed lesion characteristics and geometry. These results suggest that the proposed technique is capable of providing reasonably accurate and sufficiently fast representations of the created RF ablation lesions to generate lesion maps in near real time. These maps can be used to guide the placement of successive lesions to ensure continuous and enduring suppression of the arrhythmic pathway.
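For intuition, here is a drastically simplified 2D stand-in for such a surrogate model (the paper's model is 3D, calibrated, and includes physiological parameters; this sketch is only explicit finite-difference heat diffusion with a constant source at the catheter tip and an assumed 50°C lesion threshold):

```python
# Toy sketch: 2D heat diffusion with a point source, lesion by thresholding.
import numpy as np

def simulate_lesion(n=101, dx=1e-3, alpha=1.4e-7, t_end=60.0,
                    source_k_per_s=3.0, lesion_temp=50.0):
    T = np.full((n, n), 37.0)              # body temperature, deg C
    dt = 0.2 * dx**2 / alpha               # stable: dt <= dx^2 / (4*alpha)
    tip = (n // 2, n // 2)
    for _ in range(int(t_end / dt)):
        lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
               np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4 * T) / dx**2
        T += dt * alpha * lap              # diffusion step
        T[tip] += dt * source_k_per_s      # RF energy deposited at the tip
    return T, T >= lesion_temp             # temperature map and lesion mask
```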
In this work we investigated the benefit of using two lateral camera units in addition to a central camera unit for 3D surface imaging for image guidance in deep-inspiration breath-hold (DIBH) radiotherapy, by comparison with cone-beam computed tomography (CBCT). Ten patients who received DIBH radiotherapy after breast-conserving surgery were included. The performance of surface imaging using one and three camera units was compared to using CBCT for setup verification. Breast-surface registrations were performed for CBCT, as well as for 3D surfaces captured concurrently with CBCT, to the planning CT. The resulting setup errors were compared with linear regression analysis. For the differences between setup errors, an assessment of the group mean, systematic error, random error, and 95% limits of agreement was made. Correlations between derived surface-imaging [one camera unit; three camera units] and CBCT setup errors were: R2 = [0.67; 0.75], [0.76; 0.87], [0.88; 0.91] in the left-right, cranio-caudal, and anterior-posterior directions, respectively. Group mean, systematic and random errors were slightly smaller (sub-millimeter differences) and the limits of agreement were 0.10 to 0.25 cm tighter when using three camera units compared with one. For the majority of the data, the use of three camera units compared with one resulted in setup errors more similar to the CBCT-derived setup errors for the cranio-caudal and anterior-posterior directions (p<0.01, Wilcoxon signed-rank test). This study shows a better correlation and agreement between 3D surface imaging and CBCT when three camera units are used instead of one, and further outlines the conditions under which the benefit of using three camera units is significant.
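The agreement statistics follow standard radiotherapy conventions; a sketch of how they can be computed from per-patient difference arrays (definitions assumed: systematic error as the SD of per-patient means, random error as the RMS of per-patient SDs):

```python
# Illustrative sketch: group mean, systematic/random error, 95% limits
# of agreement for paired setup-error differences.
import numpy as np

def agreement_stats(diffs_per_patient):
    """diffs_per_patient: list of 1D arrays (surface minus CBCT setup
    errors), one array per patient."""
    per_mean = np.array([d.mean() for d in diffs_per_patient])
    per_sd = np.array([d.std(ddof=1) for d in diffs_per_patient])
    group_mean = per_mean.mean()
    systematic = per_mean.std(ddof=1)            # SD of per-patient means
    random_err = np.sqrt((per_sd ** 2).mean())   # RMS of per-patient SDs
    all_d = np.concatenate(diffs_per_patient)
    loa = (all_d.mean() - 1.96 * all_d.std(ddof=1),
           all_d.mean() + 1.96 * all_d.std(ddof=1))
    return group_mean, systematic, random_err, loa
```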
Purpose: To evaluate the variability in heart position in deep-inspiration breath-hold (DIBH) radiotherapy for breast cancer if 3D surface imaging were used for monitoring the depth of the breath hold during treatment. Materials and Methods: Ten patients who received DIBH radiotherapy after breast-conserving surgery (BCS) were included. Retrospectively, heart-based registrations were performed for cone-beam computed tomography (CBCT) to planning CT, and breast-surface registrations were performed for a 3D surface (two different regions of interest [ROIs]), captured concurrently with CBCT, to planning CT. The resulting setup errors were compared with linear regression analysis, and receiver operating characteristic (ROC) analysis was performed to investigate the prediction quality of 3D surface imaging for 3D heart displacement. Further, the residual setup errors (systematic [Σ] and random [σ]) of the heart were estimated relative to the surface registrations. Results: If surface imaging [ROIleft-side; ROIboth-sides] were used for monitoring, the residual errors of the heart position would be, in left-right: Σ = [0.36; 0.12], σ = [0.16; 0.14]; cranio-caudal: Σ = [0.54; 0.54], σ = [0.28; 0.31]; and anterior-posterior: Σ = [0.18; 0.14], σ = [0.20; 0.19] cm. Correlations between setup errors were: R2 = [0.23; 0.73], [0.67; 0.65], [0.65; 0.73] in the left-right, cranio-caudal, and anterior-posterior directions, respectively. ROC analysis resulted in an area under the ROC curve of [0.82; 0.78]. Conclusion: The use of ROIboth-sides provided promising results. However, considerable variability in the heart position, particularly in the cranio-caudal direction, is observed when 3D surface imaging is used for guidance in DIBH radiotherapy after BCS. Planning organ-at-risk volume margins should be used to account for the heart-position variability.
We present a novel robotic approach for the rapid, minimally invasive treatment of Intracerebral Hemorrhage (ICH), in which a hematoma or blood clot arises in the brain parenchyma. We present a custom image-guided robot system that delivers a steerable cannula into the lesion and aspirates it from the inside. The steerable cannula consists of an initial straight tube delivered in a manner similar to image-guided biopsy (and which uses a commercial image guidance system), followed by the sequential deployment of multiple individual precurved elastic tubes. Rather than deploying the tubes simultaneously, as has been done in nearly all prior studies, we deploy the tubes one at a time, using a compilation of their individual workspaces to reach desired points inside the lesion. This represents a new paradigm in active cannula research, defining a novel procedure-planning problem. A design that solves this problem can potentially save many lives by enabling brain decompression both more rapidly and less invasively than is possible through the traditional open surgery approach. Experimental results include a comparison of the simulated and actual workspaces of the prototype robot, and an accuracy evaluation of the system.
A cochlear implant (CI) is a neural prosthetic device that restores hearing by directly stimulating the auditory nerve with an electrode array. In CI surgery, the surgeon threads the electrode array into the cochlea, blind to internal structures. We have recently developed algorithms for determining the position of CI electrodes relative to intra-cochlear anatomy using pre- and post-implantation CT. We are currently using this approach to develop a CI programming assistance system that uses knowledge of electrode position to determine a patient-customized CI sound processing strategy. However, this approach cannot be used for the majority of CI users because the cochlea is obscured by image artifacts produced by CI electrodes and acquisition of pre-implantation CT is not universal. In this study we propose an approach that extends our techniques so that intra-cochlear anatomy can be segmented for CI users for whom a pre-implantation CT was not acquired. The approach achieves automatic segmentation of intra-cochlear anatomy in post-implantation CT by exploiting intra-subject symmetry in cochlear anatomy across ears. We validated our approach on a dataset of 10 ears for which both pre- and post-implantation CTs were available. Our approach results in mean and maximum segmentation errors of 0.27 and 0.62 mm, respectively. This result suggests that our automatic segmentation approach is accurate enough for developing customized CI sound processing strategies for unilateral CI patients based solely on post-implantation CT scans.
Purpose: An increasingly popular minimally invasive approach to resection of oropharyngeal / base-of-tongue cancer is made possible by a transoral technique conducted with the assistance of a surgical robot, i.e., transoral robotic surgery (TORS). However, the highly deformed surgical setup (neck flexed, mouth open, and tongue retracted) compared to the typical patient orientation in preoperative images poses a challenge to guidance and localization of the tumor target and adjacent critical anatomy. Intraoperative cone-beam CT (CBCT) can account for such deformation, but due to the low soft-tissue contrast of CBCT images, direct localization of the target and critical tissues in CBCT images can be difficult. Such structures may be more readily delineated in preoperative CT or MR images, so a method to deformably register such information to intraoperative CBCT could offer significant value. This paper details the initial implementation of a deformable registration framework to align preoperative images with the deformed intraoperative scene and gives a preliminary evaluation of the geometric accuracy of registration in CBCT-guided TORS. Method: The deformable registration aligns preoperative CT or MR to intraoperative CBCT by integrating two established approaches. The volume of interest is first segmented (specifically, the region of the tongue from the tip to the hyoid), and a Gaussian mixture (GM) model of surface point clouds is used for rigid initialization (GMRigid) as well as an initial deformation (GMNonRigid). Next, refinement of the registration is performed using the Demons algorithm applied to distance transformations of the GM-registered and CBCT volumes. The registration accuracy of the framework was quantified in preliminary studies using a cadaver emulating preoperative and intraoperative setups. Geometric accuracy of registration was quantified in terms of target registration error (TRE) and surface distance error. Result: With each step of the registration process, the framework demonstrated improved registration, achieving a mean TRE of 3.0 mm following the GM rigid step, 1.9 mm following the GM nonrigid step, and 1.5 mm at the output of the registration process. Analysis of surface distance demonstrated a corresponding improvement of 2.2, 0.4, and 0.3 mm, respectively. The evaluation of registration error revealed accurate alignment in the region of interest for base-of-tongue robotic surgery, owing to point-set selection in the GM steps and refinement in the deep aspect of the tongue in the Demons step. Conclusions: A promising framework has been developed for CBCT-guided TORS in which intraoperative CBCT provides a basis for registration of preoperative images to the highly deformed intraoperative setup. The registration framework is invariant to imaging modality (accommodating preoperative CT or MR) and is robust against CBCT intensity variations and artifacts, provided a corresponding segmentation of the volume of interest. The approach could facilitate overlay of preoperative planning data directly in stereo-endoscopic video in support of CBCT-guided TORS.
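The refinement stage, Demons applied to distance transforms of the two segmentations, can be sketched directly with SimpleITK (parameter values below are illustrative, not the paper's):

```python
# Hedged sketch: Demons registration on signed distance maps (SimpleITK).
import SimpleITK as sitk

def demons_on_distance_maps(fixed_mask, moving_mask, iters=100, sigma=1.5):
    """fixed_mask, moving_mask: binary segmentations as sitk.Image."""
    def dist(m):
        return sitk.SignedMaurerDistanceMap(
            m, insideIsPositive=False, squaredDistance=False,
            useImageSpacing=True)
    demons = sitk.DemonsRegistrationFilter()
    demons.SetNumberOfIterations(iters)
    demons.SetStandardDeviations(sigma)   # Gaussian smoothing of the field
    field = demons.Execute(dist(fixed_mask), dist(moving_mask))
    return sitk.DisplacementFieldTransform(field)
```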
In image-guided neurosurgery, intraoperative brain shift significantly degrades the accuracy of neuronavigation that is based solely on preoperative magnetic resonance images (pMR). To compensate for brain deformation and to maintain the accuracy in image guidance achieved at the start of surgery, biomechanical models have been developed to simulate brain deformation and to produce model-updated MR images (uMR) that compensate for brain shift. To date, most studies have focused on shift compensation at early stages of surgery (i.e., updated images are only produced after craniotomy and durotomy). Simulating surgical events at later stages, such as retraction and tissue resection, is perhaps clinically more relevant because of the typically much larger magnitudes of brain deformation. However, these surgical events are substantially more complex in nature, thereby posing significant challenges to model-based brain shift compensation strategies. In this study, we present results from an initial investigation to simulate retractor-induced brain deformation through a biomechanical finite element (FE) model in which whole-brain deformation assimilated from intraoperative data was used to produce uMR for improved accuracy in image guidance. Specifically, intensity-encoded 3D surface profiles at the exposed cortical area were reconstructed from intraoperative stereovision (iSV) images before and after tissue retraction. Retractor-induced surface displacements were then derived by coregistering the surfaces and served as sparse displacement data to drive the FE model. With one patient case, we show that our technique is able to produce uMR that agrees well with the reconstructed iSV surface after retraction. The computational cost to simulate retractor-induced brain deformation was approximately 10 min. In addition, our approach introduces minimal interruption to the surgical workflow, suggesting the potential for its clinical application.
Motivation: The existing visualization of the Camera Augmented Mobile C-arm (CamC) system does not provide enough depth cues and presents the anatomical information in a confusing way to surgeons. Methods: We propose a method that segments anatomical information from the X-ray image and then augments it onto the video images. To provide depth cues, pixels in the video images are classified into skin and object classes. The augmentation of anatomical information from the X-ray is performed only where pixels have a larger probability of belonging to the skin class. Results: We tested our algorithm by displaying the new visualization to 2 expert surgeons and 1 medical student during three surgical workflow sequences of the interlocking of intramedullary nailing procedure, namely: skin incision, center punching, and drilling. Via a survey questionnaire, they were asked to assess the new visualization compared to the current alpha-blending overlay image displayed by CamC. The participants all agreed (100%) that occlusion and instrument tip position detection were immediately improved with our technique. When asked if our visualization has the potential to replace the existing alpha-blending overlay during interlocking procedures, all participants did not hesitate to suggest an immediate integration of the visualization for correct navigation and guidance of the procedure. Conclusion: Current alpha-blending visualizations lack proper depth cues and can be a source of confusion for surgeons when performing surgery. Our visualization concept shows great potential in alleviating occlusion and facilitating clinician understanding during specific workflow steps of the intramedullary nailing procedure.
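The core of the proposed visualization can be sketched as a conditional alpha blend: the X-ray is composited onto the video only where the skin-class probability is high (the threshold and blending weight below are illustrative assumptions):

```python
# Illustrative sketch: overlay X-ray only on pixels classified as skin.
import numpy as np

def conditional_overlay(video_rgb, xray_gray, skin_prob,
                        alpha=0.5, thresh=0.6):
    """video_rgb: (H,W,3) uint8; xray_gray: (H,W); skin_prob: (H,W) in [0,1]."""
    out = video_rgb.astype(float).copy()
    xray_rgb = np.repeat(xray_gray[..., None], 3, axis=2).astype(float)
    blend = (1 - alpha) * out + alpha * xray_rgb
    mask = skin_prob > thresh          # augment only over skin pixels
    out[mask] = blend[mask]
    return out.astype(np.uint8)
```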
Abdominal aortic aneurysms are a common disease of the aorta, treated minimally invasively in about 33% of cases. Treatment is done by placing a stent graft in the aorta to prevent the aneurysm from growing. Guidance during the procedure is facilitated by fluoroscopic imaging. Unfortunately, due to the low soft-tissue contrast of X-ray images, the aorta itself is not visible without the application of contrast agent. To overcome this issue, advanced techniques allow the aorta to be segmented from pre-operative data, such as CT or MRI. Overlay images are then rendered from a mesh representation of the segmentation and fused with the live fluoroscopic images, with the aim of improving the visibility of the aorta during the procedure. Current overlay images typically use forward projections of the mesh representation. This fusion technique shows deficiencies in both the 3-D information of the overlay and the visibility of the fluoroscopic image underneath. We present a novel approach to improve the visualization of the overlay images using non-photorealistic rendering techniques. Our method preserves the visibility of the devices in the fluoroscopic images while, at the same time, providing 3-D information of the fused volume. An evaluation by clinical experts shows that our method is preferred over current state-of-the-art overlay techniques: we compared three visualization techniques to the standard visualization, and our silhouette approach was chosen by the clinical experts in 67% of cases, clearly showing the superiority of the new approach.
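One simple way to realize a silhouette-style overlay of this kind (a generic sketch, not the authors' renderer) is to draw only the outline of the projected aorta mask over the fluoroscopic frame, which preserves visibility of the devices underneath:

```python
# Illustrative sketch: silhouette (outline-only) overlay of a projected mask.
import cv2
import numpy as np

def silhouette_overlay(fluoro_gray, aorta_mask, color=(0, 255, 255)):
    """fluoro_gray: (H,W) uint8 frame; aorta_mask: (H,W) bool projection."""
    edges = cv2.Canny(aorta_mask.astype(np.uint8) * 255, 100, 200)
    out = cv2.cvtColor(fluoro_gray, cv2.COLOR_GRAY2BGR)
    out[edges > 0] = color             # draw only the silhouette pixels
    return out
```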
3D segmentation of the prostate in medical images is useful for prostate cancer diagnosis and therapy guidance, but is time-consuming to perform manually. Clinical translation of computer-assisted segmentation algorithms for this purpose requires a comprehensive and complementary set of evaluation metrics that are informative to the clinical end user. We have developed an interactive 3D prostate segmentation method for 1.5T and 3.0T T2-weighted magnetic resonance imaging (T2W MRI) acquired using an endorectal coil. We evaluated our method against manual segmentations of 36 3D images using complementary boundary-based (mean absolute distance; MAD), regional-overlap (Dice similarity coefficient; DSC), and volume-difference (ΔV) metrics. Our technique is based on inter-subject prostate shape and local boundary appearance similarity. In the training phase, we calculated a point distribution model (PDM) and a set of local mean intensity patches centered on the prostate border to capture shape and appearance variability. To segment an unseen image, we defined a set of rays – one corresponding to each of the mean intensity patches computed in training – emanating from the prostate centre. We used a radial search strategy and translated each mean intensity patch along its corresponding ray, selecting as a candidate the boundary point with the highest normalized cross correlation along each ray. These boundary points were then regularized using the PDM. For the whole gland, we measured a mean±std MAD of 2.5±0.7 mm, DSC of 80±4%, and ΔV of 1.1±8.8 cc. We also provide an anatomic breakdown of these metrics for the prostatic base, mid-gland, and apex.
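A 2D sketch of the radial search described above (illustrative, with hypothetical names; the PDM regularization step is omitted): translate a mean intensity patch along its ray from the prostate centre and keep the position with maximal normalized cross correlation.

```python
# Illustrative sketch: NCC-based boundary search along one ray.
import numpy as np

def ncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-12
    return float((a * b).sum() / denom)

def best_boundary_point(image, centre, direction, mean_patch, radii, half=5):
    """centre: (y, x) array; direction: unit (dy, dx); radii: candidates."""
    best_r, best_score = None, -np.inf
    for r in radii:
        cy, cx = np.round(centre + r * direction).astype(int)
        patch = image[cy - half:cy + half + 1, cx - half:cx + half + 1]
        if patch.shape != mean_patch.shape:
            continue                      # ray left the image
        score = ncc(patch, mean_patch)
        if score > best_score:
            best_r, best_score = r, score
    if best_r is None:
        return None
    return centre + best_r * direction    # candidate point, pre-PDM smoothing
```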
One of the commonly used treatment methods for early-stage prostate cancer is brachytherapy. The standard of care for planning this procedure is segmentation of contours from transrectal ultrasound (TRUS) images, which closely follow the prostate boundary. This process is currently performed either manually or using semi-automatic techniques. This paper introduces a fully automatic segmentation algorithm that uses a priori knowledge of contours in a reference data set of TRUS volumes. A non-parametric deformable registration method is employed to transform the atlas prostate contours to the target image coordinates. All atlas images are sorted based on their registration results, and the highest-ranked registrations are selected for decision fusion. A Simultaneous Truth and Performance Level Estimation (STAPLE) algorithm is utilized to fuse labels from the registered atlases and produce a segmented target volume. In this experiment, 50 patient TRUS volumes were obtained, and a leave-one-out study on these volumes is reported. We also compare our results with a state-of-the-art semi-automatic prostate segmentation method that has been used clinically for planning prostate brachytherapy procedures, and we show comparable accuracy and precision within a clinically acceptable runtime.
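As a simplified stand-in for the STAPLE fusion step, majority voting over the top-ranked warped atlas labels captures the flavor of decision fusion (STAPLE itself additionally estimates per-atlas performance levels):

```python
# Simplified stand-in for STAPLE: majority voting over warped atlas labels.
import numpy as np

def majority_vote(atlas_labels):
    """atlas_labels: list of binary prostate masks warped to the target."""
    stack = np.stack(atlas_labels).astype(np.uint8)
    return (stack.sum(axis=0) >= len(atlas_labels) / 2.0).astype(np.uint8)
```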
Segmentation of the spinal column from CT images is a pre-processing step for a range of image-guided interventions. Current techniques focus on identification and separate segmentation of each vertebra. Recently, statistical multi-object shape models have been introduced to extract common statistical characteristics among several anatomies. These models are also used for segmentation purposes and have been shown to be robust, accurate, and computationally tractable. In this paper, we construct a statistical multi-vertebrae shape+pose model and propose a novel technique to register such a model to CT images. We validate our technique in terms of accuracy of multi-vertebrae segmentation on CT images acquired from 16 subjects. The mean distance error achieved for all vertebrae is 1.17 mm with a standard deviation of 0.38 mm.
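The statistical model underlying such approaches is typically built by principal component analysis of aligned training shapes; the following is a generic sketch (not the authors' shape+pose formulation, which additionally models pose statistics):

```python
# Generic sketch: build a PCA shape model from aligned training shapes.
import numpy as np

def build_shape_model(shapes, var_keep=0.95):
    """shapes: (N, 3K) matrix of aligned training shapes, one row per subject."""
    mean = shapes.mean(axis=0)
    U, s, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
    var = s**2 / (len(shapes) - 1)                     # mode variances
    k = np.searchsorted(np.cumsum(var) / var.sum(), var_keep) + 1
    return mean, Vt[:k], var[:k]   # mean shape, retained modes, variances
```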
Minimally/non-invasive surgery has become increasingly widespread because of its therapeutic benefits, such as less pain, less scarring, and shorter hospital stays. However, it is very difficult to eliminate the target cancer cells selectively without damaging nearby normal tissues and vessels, since tumors inside organs cannot be visually tracked in real time with existing imaging devices while the organs are deformed by respiration and surgical instruments. Real-time 2D US imaging is widely used for monitoring minimally invasive procedures such as radiofrequency ablation; however, it is difficult to detect target tumors unless they are high-echogenic, because of the noisy images and limited field of view. To handle these difficulties, we present a novel framework for estimating organ motion and deformed shape during respiration from the available features of 2D US images, by means of inverse kinematics utilizing 3D CT volumes at the inhale and exhale phases. First, we generate surface meshes of the target organ and tumor, as well as centerlines of vessels, at the two extreme phases, taking surface correspondence into account. Then, corresponding tetrahedral meshes are generated by coupling the internal components for volumetric modeling. Finally, a deformed organ mesh at an arbitrary phase is generated from the 2D US feature points to estimate the organ deformation and tumor position. To show the effectiveness of the proposed method, CT scans from a real patient were used to estimate the motion and deformation of the liver. The experimental results show that the average errors are less than 3 mm in terms of tumor position as well as the whole surface shape.
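At its simplest, the two-extreme-phase idea can be sketched as interpolation between corresponding inhale and exhale meshes; the actual method estimates the phase and deformation from 2D US features via inverse kinematics, which this toy sketch does not capture:

```python
# Toy sketch: blend between corresponding inhale/exhale mesh vertices.
import numpy as np

def interpolate_mesh(inhale_verts, exhale_verts, phase):
    """phase in [0, 1]: 0 = full inhale, 1 = full exhale; (V,3) vertex
    arrays must be in point-to-point correspondence."""
    return (1 - phase) * inhale_verts + phase * exhale_verts
```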
Magnetic resonance imaging is often used as a source for reconstructing vascular anatomy for the purpose of computational fluid dynamics (CFD) analysis. We recently observed large discrepancies in such “image-based” CFD models of the normal common carotid artery (CCA) derived from contrast-enhanced MR angiography (CEMRA) when compared to phase contrast MR imaging (PCMRI) of the same subjects. A novel quantitative comparison of velocity profile shape in N=20 cases revealed an average 25% overestimation of velocities by CFD, attributed to a corresponding underestimation of lumen area in the CEMRA-derived geometries. We hypothesized that this was due to blurring of edges in the images caused by dilution of contrast agent during the relatively long elliptic-centric CEMRA acquisitions, and confirmed this with MRI simulations. Rescaling of the CFD models to account for the lumen underestimation improved agreement with the velocity levels seen in the corresponding PCMRI images, but discrepancies in velocity profile shape remained, with CFD tending to over-predict velocity profile skewing. CFD simulations incorporating realistic inlet velocity profiles and non-Newtonian rheology had a negligible effect on velocity profile skewing, suggesting a role for other sources of error or modeling assumptions. In summary, our findings suggest that caution should be exercised when using elliptic-centric CEMRA data as a basis for image-based CFD modeling, and emphasize the importance of comparing image-based CFD models against in vivo data whenever possible.
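A worked check of the area/velocity relationship implied above: for a fixed volumetric flow rate Q = vA, an underestimated lumen area inflates the mean velocity by the inverse factor (the 20% figure below is illustrative):

```python
# Flow conservation Q = v * A: area underestimation inflates mean velocity.
area_true = 1.00           # normalized true lumen area
area_cemra = 0.80          # illustrative 20% area underestimation
v_ratio = area_true / area_cemra
print(f"velocity overestimation: {100 * (v_ratio - 1):.0f}%")  # -> 25%
```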
Introduction: Image-guided minimally invasive procedures are becoming increasingly popular. Currently, High-Intensity Focused Ultrasound (HIFU) treatment of lesions in mobile organs, such as the liver, is in development. A requirement for such treatment is automatic motion tracking, such that the position of the lesion can be
followed in real time. We propose a 4D liver motion model, which can be used during planning of this procedure. During treatment, the model can serve as a motion predictor. In a similar fashion, this model could be used for radiotherapy treatment of the liver. Method: The model is built by acquiring 2D dynamic sagittal MRI data at six locations in the liver. By registering these dynamics to a 3D MRI liver image, 2D deformation fields are obtained at every location. The 2D fields are ordered according to the position of the liver at that specific time point, such that liver motion during an average breathing period can be simulated. This way, a sparse deformation field is created over time. This deformation field is finally interpolated over the entire volume, yielding a 4D motion model. Results: The accuracy of the model is evaluated by comparing unseen slices to the slice predicted by the model at that specific location and phase in the breathing cycle. The mean Dice coefficient of the liver regions was
0.90. The mean misalignment of the vessels was 1.9 mm. Conclusion: The model is able to predict patient-specific deformations of the liver and can accurately predict regular motion.
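The reported overlap measure is the standard Dice coefficient; for reference, a minimal implementation over binary liver masks:

```python
# Reference implementation of the Dice similarity coefficient.
import numpy as np

def dice(a, b):
    """a, b: binary masks of the same shape."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```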
The use of biomechanical models to correct the misregistration due to deformation in image-guided neurosurgical systems has been a growing area of investigation. In previous work, an atlas-based inverse model was developed to account for soft-tissue deformations during image-guided surgery. Central to that methodology is a considerable amount of pre-computation and planning. The goal of this work is to evaluate techniques that could potentially reduce that burden. Distinct from previous manual techniques, an automated segmentation technique is described for the cerebrum and dural septa. The shift-correction results using this automated segmentation method were compared to those using the manual methods. In addition, the extent and distribution of the surgical parameters associated with the deformation atlas were investigated by a sensitivity analysis using simulation experiments and clinical data. The shift-correction results did not change significantly using the automated method (correction of 73±13%) as compared to the semi-automated method from previous work (correction of 76±13%). The results of the sensitivity analysis show that the atlas could be constructed with coarser sampling (a six-fold reduction) without substantial degradation in the shift reconstruction, decreasing preoperative computation time from 13.1±3.5 hours to 2.2±0.6 hours. The automated segmentation technique and the findings of the sensitivity study significantly reduce pre-operative computation time, improving the utility of the atlas-based method. The work in this paper suggests that the atlas-based technique can become a ‘time of surgery’ setup procedure rather than a pre-operative computing strategy.
It has been demonstrated that the acceleration signal has potential to monitor heart function and adaptively optimize Cardiac Resynchronization Therapy (CRT) systems. In this paper, we propose a non-invasive method for computing myocardial acceleration from 3D echocardiographic sequences. Displacement of the myocardium was estimated using a two-step approach: (1) 3D automatic segmentation of the myocardium at end-diastole using 3D Active Shape Models (ASM); (2) propagation of this segmentation along the sequence using non-rigid 3D+t image registration (temporal diffeomorphic free-form deformation, TDFFD). Acceleration was obtained locally at each point of the myocardium from the local displacement. The framework was tested on images from a realistic physical heart phantom (DHP-01, Shelley Medical Imaging Technologies, London, ON, CA) in which the displacement of some control regions was known. Good correlation was demonstrated between the displacement estimated by the algorithms and the phantom setup. Due to the limited temporal resolution, the acceleration signals are sparse and highly noisy. The study suggests a non-invasive technique to measure cardiac acceleration that may be used to improve the monitoring of cardiac mechanics and the optimization of CRT.
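A minimal sketch of the final differentiation step, assuming regularly sampled frames: displacement is differentiated twice in time with central differences. The names and the smoothing note are illustrative; the paper's actual numerical scheme is not specified.

import numpy as np

def acceleration_from_displacement(disp, dt):
    # disp: (T, N, 3) displacements of N myocardial points over T frames;
    # dt: frame interval in seconds.
    vel = np.gradient(disp, dt, axis=0)   # first temporal derivative
    return np.gradient(vel, dt, axis=0)   # second temporal derivative

# With the low temporal resolution noted above, the raw signal is noisy,
# so some temporal smoothing of disp would usually be applied first.
t = np.arange(0, 1, 1 / 30)
disp = np.sin(2 * np.pi * t)[:, None, None] * np.ones((1, 5, 3))
acc = acceleration_from_displacement(disp, dt=1 / 30)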
Accurate measurement of soft tissue material properties is critical for characterizing their biomechanical behavior but can be challenging, especially for the human brain. Recently, we have applied stereovision to track motion of the exposed cortical surface noninvasively for patients undergoing open skull neurosurgical operations. In this paper, we conduct a proof-of-concept study to evaluate the feasibility of the technique in measuring material properties of soft tissue in vivo using a tofu phantom. A block of soft tofu was prepared with black pepper randomly sprinkled on the top surface to provide texture to facilitate image-based displacement mapping. A disk-shaped indenter made of high-density tungsten was placed on the top surface to induce deformation through its weight. Stereoscopic images were acquired before and after indentation using a pair of stereovision cameras mounted on a surgical microscope with its optical path perpendicular to the imaging surface. Rectified left camera images obtained from stereovision reconstructions were then co-registered using optical flow motion tracking, from which a 2D surface displacement field around the indenter disk was derived. A corresponding finite element model of the tofu, subjected to the indenter weight, was created, and a hyperelastic material model was chosen to account for large deformation around the indenter edges. By successively assigning different shear stiffness constants, computed tofu surface deformations were obtained, and the optimal shear stiffness was identified as the one that matched the model-derived surface displacements to those measured from the images. The resulting quasi-static, long-term shear stiffness for the tofu was 1.04 kPa, similar to values reported in the literature. We show that the stereovision and free-weight indentation techniques coupled with an FE model are feasible for in vivo measurement of human brain material properties, and the approach may also be feasible for other soft tissues.
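The stiffness search can be sketched as a one-dimensional optimization over the shear modulus. In the hedged example below, a toy analytic displacement model stands in for the hyperelastic FE solve, and all data are synthetic; a real study would call an FE solver inside model_displacement.

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
radii = rng.uniform(8.0, 30.0, size=200)     # mm, sample points around the disk

def model_displacement(mu, load=0.5):
    # Stand-in for the FE solve: an indentation-style surface displacement
    # that decays with distance and scales inversely with shear stiffness
    # mu (Pa). A real study would run the hyperelastic FE model here.
    return load * 1e3 / (mu * radii)

measured = model_displacement(1040.0) + rng.normal(0, 0.01, size=radii.size)

def mismatch(mu):
    return np.sum((model_displacement(mu) - measured) ** 2)

best = minimize_scalar(mismatch, bounds=(100.0, 10_000.0), method='bounded')
print(f"estimated shear stiffness: {best.x:.0f} Pa")   # ~1040 Pa (1.04 kPa)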
In orthopedic and trauma surgery, metallic plates are used for reduction and fixation of bone fractures. In clinical practice, intra-operative planning for screw fixation is usually based on fluoroscopic images. Screw fixation is then performed on a free-hand basis. As such, multiple attempts may be required in order to achieve optimal positioning of the fixing screws. To help the physician insert the screws in accordance with the planned position, we propose a method for screw insertion guidance. Our approach uses a small video camera, rigidly mounted on the drill, and a set of small markers that are rigidly fixed on a variable angle drill sleeve. In order to investigate the achievable accuracy of our setup, we simulate the estimation of the drill bit position under two different marker arrangements, planar and 3D, and different noise levels. Furthermore, we motivate our choices for marker design and position given the limited space available for marker placement, the requirement for accurate position estimation of the drill bit, and the illumination changes that could affect the surgical site. We also describe our proposed marker detection and tracking pipeline. Our simulation results let us conclude that we can achieve an accuracy of 1° in angular orientation and 1 mm in tip position of the drill bit, provided that we have accurate marker detection.
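The camera-to-marker pose estimation described here is commonly solved as a Perspective-n-Point problem. The hedged OpenCV sketch below uses a planar marker arrangement with made-up geometry and intrinsics; it is not the paper's actual pipeline.

import numpy as np
import cv2

# Marker corners in the drill-sleeve frame (mm) and their detected image
# locations (px); all values here are illustrative.
object_pts = np.array([[0, 0, 0], [30, 0, 0], [30, 30, 0], [0, 30, 0]], np.float64)
image_pts = np.array([[300, 220], [420, 225], [415, 345], [295, 340]], np.float64)
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # pinhole intrinsics

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)
tip_in_sleeve = np.array([0.0, 0.0, 150.0])   # assumed drill-bit tip offset (mm)
tip_in_camera = R @ tip_in_sleeve + tvec.ravel()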
Purpose: Accurate assessment of the size and location of osteolytic regions is essential in minimally invasive hip revision surgery. Moreover, image-guided robotic intervention for osteolysis treatment requires precise localization of implant components. However, high-density metallic implants in proximity to the hip make assessment by either 2D or 3D x-ray imaging difficult. This paper details the initial implementation and evaluation of an advanced model-based cone-beam CT (CBCT) reconstruction algorithm to improve guidance and assessment of hip osteolysis treatment. Methods: A model-based reconstruction approach called Known Component Reconstruction (KCR) was employed to obtain high-quality reconstruction of regions neighboring metallic implants. KCR incorporates knowledge about the implant shape and material to precisely reconstruct surrounding anatomy while simultaneously estimating implant position. A simulation study involving a phantom generated from a CBCT scan of a cadaveric hip was performed. Registration accuracy in KCR iterations was evaluated as translational and rotational error from the true registration. Improvement in image quality was evaluated using normalized cross correlation (NCC) in two regions of interest (ROIs) about the femoral and acetabular components. Results: The study showed significant improvement in image quality over conventional filtered backprojection (FBP) and penalized-likelihood (PL) reconstruction. The NCC in the two ROIs improved from 0.74 and 0.81 (FBP) to 0.98 and 0.86 (PL), and to >0.99 with KCR. The registration error was 0.01 mm in translation (0.02° in rotation) for the acetabular component and 0.01 mm (0.01° rotation) for the femoral component. Conclusions: Application of KCR to imaging hip osteolysis in the presence of the implant offers a promising step toward quantitative assessment in minimally invasive image-guided osteolysis treatment. The method improves image quality (metal artifact reduction), yields a precise registration estimate of the implant, and offers a means for reducing radiation dose in intraoperative CBCT.
Recently, a minimally invasive surgery (MIS) called fetoscopic tracheal occlusion (FETO) was developed to treat severe congenital diaphragmatic hernia (CDH) via fetoscopy, in which a detachable balloon is placed into the fetal trachea to prevent pulmonary hypoplasia by increasing the pressure of the chest cavity. Because the procedure carries high risk, a navigation support system is deemed necessary. In this paper, to guide the insertion of a surgical tool into the fetal trachea, an automatic approach is proposed to detect and track the fetal face and mouth in fetoscopic video sequences. More specifically, the AdaBoost algorithm is utilized as a classifier to detect the fetal face based on Haar-like features, which encode differences between the sums of pixel intensities in adjacent regions at specific locations within a detection window. Then, the CamShift algorithm, based on an iterative search of a color histogram, is applied to track the fetal face, and the fetal mouth is fitted by an ellipse detected via an improved iterative randomized Hough transform. The experimental results demonstrate that the proposed approach can accurately detect and track the fetal face and mouth in real time in fetoscopic video, and can provide effective and timely feedback to the robot control system of the surgical tool for FETO surgeries.
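A hedged sketch of the detect-then-track pattern described above, using OpenCV's standard cascade and CamShift APIs. The cascade file and video name are hypothetical (no fetal-face cascade ships with OpenCV), and the single-detection assumption is for brevity.

import cv2
import numpy as np

# Hypothetical cascade trained on fetal faces.
face_cascade = cv2.CascadeClassifier('fetal_face_cascade.xml')
cap = cv2.VideoCapture('fetoscopy.avi')             # illustrative input
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

ok, frame = cap.read()
faces = face_cascade.detectMultiScale(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
x, y, w, h = map(int, faces[0])                     # AdaBoost/Haar detection
roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([roi], [0], None, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

while True:                                         # CamShift tracking loop
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    prob = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    rot_box, (x, y, w, h) = cv2.CamShift(prob, (x, y, w, h), term_crit)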
In this work, we have developed a novel knowledge-driven quasi-global method for fast and robust registration of thoracic-abdominal CT and cone-beam CT (CBCT) scans. While the use of CBCT in operating rooms has become common practice, there is an increasing demand for registering CBCT with pre-operative scans, in many cases CT scans. One of the major challenges of thoracic-abdominal CT/CBCT registration stems from the differing fields of view (FOVs) of the two imaging modalities. The proposed approach utilizes a priori knowledge of anatomy to generate 2D anatomy-targeted projection (ATP) images that act as surrogates for the original volumes. The use of lower-dimensional surrogate images significantly reduces the computational cost of similarity evaluation during optimization and makes global-optimization-based registration practically feasible for image-guided interventional procedures. A priori knowledge about the distribution of local optima on the energy curves is further used to effectively select multiple starting points for the registration optimization. Twenty clinical data sets were used to validate the method, and the target registration error (TRE) and maximum registration error (MRE) were calculated to compare the knowledge-driven quasi-global registration against a typical local-search-based registration. The local-search-based registration failed on 60% of the cases, with an average TRE of 22.9 mm and MRE of 28.1 mm; the knowledge-driven quasi-global registration achieved satisfactory results on all 20 data sets, with an average TRE of 3.5 mm and MRE of 2.6 mm. The average computation time for the knowledge-driven quasi-global registration is 8.7 seconds.
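The multi-start strategy can be sketched as follows: a local optimizer is launched from several knowledge-driven starting points and the best result is kept. The cost function below is a stand-in for the ATP-based similarity, and the starting points are invented.

import numpy as np
from scipy.optimize import minimize

def similarity_cost(params):
    # Stand-in for the ATP-based dissimilarity between CT and CBCT under a
    # rigid transform (tx, ty, tz, rx, ry, rz); a real system would render
    # and compare the 2D projection surrogates here.
    return np.sum((params - np.array([5.0, -3.0, 12.0, 0.1, 0.0, 0.05])) ** 2)

# Starting points selected from prior knowledge of where local optima lie.
starts = [np.zeros(6),
          np.array([0.0, 0.0, 20.0, 0.0, 0.0, 0.0]),
          np.array([10.0, 0.0, 0.0, 0.0, 0.0, 0.0])]
best = min((minimize(similarity_cost, s, method='Powell') for s in starts),
           key=lambda r: r.fun)
print(best.x)   # rigid parameters from the best of the local searches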
For transcatheter-based minimally invasive procedures in structural heart disease, ultrasound and X-ray are the two enabling imaging modalities. A live fusion of both real-time modalities can potentially improve the workflow and the catheter navigation by combining the excellent instrument imaging of X-ray with the high-quality soft tissue imaging of ultrasound. A recently published approach to fuse X-ray fluoroscopy with trans-esophageal echo (TEE) registers the ultrasound probe to X-ray images by a 2D-3D registration method, which inherently provides a registration of ultrasound images to X-ray images. In this paper, we significantly accelerate the 2D-3D registration method in this context. The main novelty is to generate the projection images (DRRs) of the 3D object not via volume ray-casting but via fast rendering of triangular meshes. This is possible because, in the setting of TEE/X-ray fusion, the 3D geometry of the ultrasound probe is known in advance and its main components can be described by triangular meshes. We show that the new approach can achieve a speedup factor of up to 65 and does not affect the registration accuracy when used in conjunction with the gradient correlation similarity measure. The improvement is independent of the underlying registration optimizer. Based on these results, TEE/X-ray fusion could be performed with a higher frame rate and a shorter time lag, approaching real-time registration performance. The approach could potentially accelerate other applications of 2D-3D registration, e.g. the registration of implant models with X-ray images.
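One common formulation of the gradient correlation measure mentioned here is the mean normalized cross-correlation of the two images' gradient components; this exact variant is an assumption, since the abstract does not spell it out.

import numpy as np

def gradient_correlation(drr, xray):
    # Mean normalized cross-correlation of the row- and column-gradient images.
    def ncc(a, b):
        a = a - a.mean()
        b = b - b.mean()
        return np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b))
    gy1, gx1 = np.gradient(drr)
    gy2, gx2 = np.gradient(xray)
    return 0.5 * (ncc(gx1, gx2) + ncc(gy1, gy2))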
Robust feature point matching for images with large view angle changes in Minimally Invasive Surgery (MIS) is a challenging task due to low texture and specular reflections in these images. This paper presents a new approach that improves feature matching performance by exploiting the inherent geometric properties of organ surfaces. Recently, intensity-based template image tracking using a Thin Plate Spline (TPS) model has been extended to 3D surface tracking with stereo cameras. Intensity-based tracking is also used here for 3D reconstruction of internal organ surfaces. First, to overcome the small-displacement requirement of intensity-based tracking, feature point correspondences are used to properly initialize the nonlinear optimization in the intensity-based method. Second, we generate simulated images of the reconstructed 3D surfaces under all potential view positions and orientations, and then extract feature points from these simulated images. The obtained feature points are then filtered and re-projected to the common reference image. The descriptors of the feature points under different view angles are stored to ensure that the proposed method can tolerate a large range of view angles. We evaluate the proposed method with silicone phantoms and in vivo images. The experimental results show that our method is much more robust with respect to view angle changes than other state-of-the-art methods.
It is often necessary to register partial objects in medical imaging. Due to limited field of view (FOV), the entirety of an object cannot always be imaged. This study presents a novel application of an existing registration algorithm to this problem. The spin-image algorithm [1] creates pose-invariant representations of global shape with respect to individual mesh vertices. These 'spin-images' are then compared for two different poses of the same object to establish correspondences and subsequently determine the relative orientation of the poses. In this study, the spin-image algorithm is applied to 4DCT-derived capitate bone surfaces to assess the relative accuracy of registration with various amounts of geometry excluded.
The limited longitudinal coverage of the 4DCT technique (38.4 mm [2]) results in partial views of the capitate when imaging wrist motions. This study assesses the ability of the spin-image algorithm to register partial bone surfaces by artificially restricting the capitate geometry available for registration. Under IRB approval, standard static CT and 4DCT scans were obtained from a patient. The capitate was segmented from the static CT and from one phase of the 4DCT in which the whole bone was visible. Spin-image registration was performed between the static CT and 4DCT surfaces. Distal portions of the 4DCT capitate (10-70%) were then progressively removed and the registration was repeated. Registration accuracy was evaluated by angular errors and the percentage of sub-resolution fitting. It was determined that 60% of the distal capitate could be omitted without appreciable effect on registration accuracy using the spin-image algorithm (angular error < 1.5°, sub-resolution fitting > 98.4%).
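A minimal sketch of computing one spin-image in Johnson's formulation, which the cited algorithm follows: each surface point is reduced to cylindrical coordinates (alpha, beta) about the vertex normal and binned into a 2D histogram. Bin counts and support size below are arbitrary.

import numpy as np

def spin_image(p, n, points, bins=32, size=40.0):
    # p: (3,) vertex; n: (3,) unit normal at p; points: (N, 3) surface points.
    d = points - p
    beta = d @ n                                    # signed distance along normal
    alpha = np.sqrt(np.maximum(np.sum(d * d, axis=1) - beta ** 2, 0.0))
    img, _, _ = np.histogram2d(alpha, beta, bins=bins,
                               range=[[0, size], [-size, size]])
    return img

# Correspondences are then established by comparing spin-images between the
# two poses, e.g. via normalized correlation of the histograms.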
We describe a framework for measuring target registration error (TRE) at the tip of an optically tracked pointing stylus. Our approach relied on a robotic manipulator equipped with a spherical wrist to collect large amounts of tracking data from well-defined paths. Fitting the tracking data to planes, circles, and spheres allowed us to derive estimates of fiducial localization error (FLE) and to precisely localize target locations. A preliminary analysis of our data suggested a bias in the registered pointer tip location that depended on the tilt angle of the coordinate reference frame with respect to the tracking system.
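A hedged sketch of the sphere-fitting step: an algebraic least-squares fit recovers a center and radius from tracked positions, one way to localize a target from wrist-rotation data (the authors' exact fitting procedure is not given).

import numpy as np

def fit_sphere(pts):
    # Algebraic least-squares sphere fit: ||x||^2 = 2*c.x + (r^2 - ||c||^2),
    # solved as a linear system for [cx, cy, cz, r^2 - ||c||^2].
    A = np.column_stack([2 * pts, np.ones(len(pts))])
    b = np.sum(pts ** 2, axis=1)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = w[:3]
    radius = np.sqrt(w[3] + center @ center)
    return center, radius

# Stylus positions sampled while the spherical wrist pivots about a fixed
# target should lie on a sphere centered near that target.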
In interventional radiology, various navigation technologies have emerged that aim to improve the accuracy of device deployment and potentially the clinical outcomes of minimally invasive procedures. While the performance of these technologies has been explored extensively, their impact on daily clinical practice remains undetermined due to additional cost and complexity, the modification of standard devices (e.g. electromagnetic tracking), and differing levels of experience among physicians. Taking these factors into consideration, a robotic laser guidance system for percutaneous needle placement was developed. The laser guidance system projects a laser guide line onto the skin entry point of the patient, helping the physician align the needle with the path planned on the preoperative CT scan. To minimize changes to the standard workflow, the robot is integrated with the CT scanner via optical tracking; as a result, no registration between the robot and CT is needed. The robot can compensate for the motion of the equipment and keep the laser guide line aligned with the biopsy path in real time. Phantom experiments showed that the guidance system can benefit physicians at different skill levels, while clinical studies showed improved accuracy over conventional freehand needle insertion. The technology is safe, easy to use, and does not involve additional disposable costs. It is our expectation that this technology can be accepted by interventional radiologists for CT-guided needle placement procedures.
PURPOSE: Needle guidance software using augmented reality image overlay was translated from the experimental phase to support preclinical and clinical studies. Major functional and structural changes were needed to meet clinical requirements. We present the process applied to fulfill these requirements, and selected features that may be applied in the translational phase of other image-guided surgical navigation systems. METHODS: We used an agile software development process for rapid adaptation to unforeseen clinical requests. The process is based on iterations of operating room test sessions, feedback discussions, and software development sprints. The open-source application framework of 3D Slicer and the NA-MIC kit provided sufficient flexibility and stable software foundations for this work. RESULTS: All requirements were addressed in a process with 19 operating room test iterations. Most features developed in this phase were related to workflow simplification and operator feedback. CONCLUSION: Efficient and affordable modifications were facilitated by an open-source application framework and frequent clinical feedback sessions. Results of cadaver experiments show that the software requirements were successfully addressed after a limited number of operating room tests.
Intra-operative medical imaging enables incorporation of human experience and intelligence in a controlled, closed-loop fashion. Magnetic resonance imaging (MRI) is an ideal modality for surgical guidance of diagnostic and therapeutic procedures, with its ability to perform high-resolution, real-time, high soft tissue contrast imaging without ionizing radiation. However, most current image-guided approaches can access only static pre-operative images for guidance, which are unable to provide updated information during a surgical procedure. The high magnetic field, electrical interference, and limited access of closed-bore MRI pose great challenges to developing robotic systems that can perform inside a diagnostic high-field MRI while obtaining interactively updated MR images. To overcome these limitations, we are developing a piezoelectrically actuated robotic assistant for actuated percutaneous prostate interventions under real-time MRI guidance. Utilizing a modular design, the system enables a coherent and straightforward workflow for various percutaneous interventions, including prostate biopsy sampling and brachytherapy seed placement, using various needle driver configurations. The unified workflow comprises: 1) system hardware and software initialization, 2) fiducial frame registration, 3) target selection and motion planning, 4) moving to the target and performing the intervention (e.g. taking a biopsy sample) under live imaging, and 5) visualization and verification. Phantom experiments of prostate biopsy and brachytherapy were executed under MRI guidance to evaluate the feasibility of the workflow. The robot successfully performed fully actuated biopsy sampling and delivery of simulated brachytherapy seeds under live MR imaging, as well as precise delivery of a prostate brachytherapy seed distribution with an RMS accuracy of 0.98 mm.
During prostate needle insertion, the gland rotates and displaces, resulting in needle placement inaccuracy. To compensate for this error, we proposed master-slave needle steering under real-time MRI in a previous study. For MRI compatibility and accurate motion control, the master (and the slave) robot uses piezo actuators. These actuators, however, are non-backdrivable, so a force sensor is required. A force sensor is also required at the slave side to reflect the insertion force to the clinician's hand through the master robot. Currently, no MRI-compatible force sensor is commercially available. Because needle steering requires a combination of linear and rotary motions, this study seeks to develop an MRI-compatible 2-degree-of-freedom (DOF) force/torque sensor. Fiber Bragg Grating (FBG) strain sensors, which are proven to be MRI-compatible, are used. The active element is made of phosphor bronze and the other parts are made of brass. The force and torque measurements are designed to be entirely decoupled. The sensor measures -20 to 20 N axial force with 0.1 N resolution, and -200 to 200 Nmm axial torque with 1 Nmm resolution. Analytical and Finite Element (FE) analyses were performed to ensure the strains are within the measurable range of the FBG sensors. The sensor is designed to be compact (diameter = 15 mm, height = 20 mm) and easy to handle and install. The proposed sensor was fabricated and calibrated using a commercial force/torque sensor.
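Calibration against the commercial force/torque sensor can be sketched as a linear least-squares fit from FBG wavelength shifts to reference readings. The grating count, matrix values, and noise level below are invented for illustration.

import numpy as np

# Solve for the matrix C mapping FBG wavelength shifts (N samples x 3
# gratings) to reference force/torque readings (N x 2); data are synthetic.
rng = np.random.default_rng(0)
shifts = rng.random((100, 3))
true_C = np.array([[5.0, 0.1], [4.8, -0.2], [0.1, 30.0]])
reference = shifts @ true_C + rng.normal(0, 0.01, (100, 2))
C, *_ = np.linalg.lstsq(shifts, reference, rcond=None)

def force_torque(new_shifts):
    # Axial force (N) and torque (Nmm) from a new shift measurement.
    return new_shifts @ C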
Keynote and Digital Operating Room and Knowledge Integration in the OR: Joint Session Conferences 8671 and 8674
Dartmouth and Medtronic have established an academic-industrial partnership to develop, validate, and evaluate a multimodality neurosurgical image-guidance platform for brain tumor resection surgery that is capable of updating the spatial relationships between preoperative images and the current surgical field. Previous studies have shown that brain shift compensation through a modeling framework using intraoperative ultrasound and/or visible light stereovision to update preoperative MRI appears to result in improved accuracy in navigation. However, image updates have thus far only been produced retrospective to surgery in large part because of gaps in the software integration and information flow between the co-registration and tracking, image acquisition and processing, and image warping tasks which are required during a case. This paper reports the first demonstration of integration of a deformation-based image updating process for brain shift modeling with an industry-standard image guided surgery platform. Specifically, we have completed the first and most critical data transfer operation to transmit volumetric image data generated by the Dartmouth brain shift modeling process to the Medtronic StealthStation® system. StealthStation® comparison views, which allow the surgeon to verify the correspondence of the received updated image volume relative to the preoperative MRI, are presented, along with other displays of image data such as the intraoperative 3D ultrasound used to update the model. These views and data represent the first time that externally acquired and manipulated image data has been imported into the StealthStation® system through the StealthLink® portal and visualized on the StealthStation® display.
Context-aware technologies have great potential to help surgeons during laparoscopic interventions. Their underlying idea is to create systems that can adapt their assistance functions automatically to the situation in the OR, thus relieving surgeons from the burden of managing computer-assisted surgery devices manually. For this purpose, a certain understanding of the current situation in the OR is essential. Beyond that, anticipatory knowledge of incoming events is beneficial, e.g. for early warnings of imminent risk situations. To achieve the goal of predicting surgical events based on previously observed ones, we developed a language to describe surgeries and surgical events using Description Logics and integrated it with methods from computational linguistics. Using n-grams to compute the probabilities of follow-up events, we are able to make sensible predictions of upcoming events in real time. The system was evaluated on professionally recorded and labeled surgeries and showed an average prediction rate of 80%.
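A minimal sketch of n-gram next-event prediction, assuming surgeries are available as labeled event sequences; the event names are invented for illustration.

from collections import Counter, defaultdict

def train_ngram(sequences, n=3):
    # Count follow-up events given the previous (n-1) observed events.
    model = defaultdict(Counter)
    for seq in sequences:
        for i in range(len(seq) - n + 1):
            model[tuple(seq[i:i + n - 1])][seq[i + n - 1]] += 1
    return model

def predict(model, history):
    # Top candidate follow-up events with their conditional probabilities.
    counts = model[tuple(history)]
    total = sum(counts.values())
    return [(ev, c / total) for ev, c in counts.most_common(3)] if total else []

model = train_ngram([["incision", "dissection", "clipping", "cutting"],
                     ["incision", "dissection", "coagulation", "cutting"]])
print(predict(model, ["incision", "dissection"]))   # clipping/coagulation, 0.5 each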
Many important applications in clinical medicine can benefit from the fusion of spectroscopy data with anatomical
images. For example, the correlation of metabolite profiles with specific regions of interest in anatomical tumor images
can be useful in characterizing and treating heterogeneous tumors that appear structurally homogeneous. Such
applications can build on the correlation of data from in-vivo Proton Magnetic Resonance Spectroscopy Imaging (1H-MRSI) with data from genetic and ex-vivo Nuclear Magnetic Resonance spectroscopy. To establish that correlation, tissue samples must be neurosurgically extracted from specifically identified locations with high accuracy. Toward that end, this paper presents new neuronavigation technology that enhances current clinical capabilities in the context of neurosurgical planning and execution. The proposed methods improve upon the current state-of-the-art in neuronavigation through the use of detailed three-dimensional (3D) 1H-MRSI data. MRSI spectra are processed and analyzed, and specific voxels are selected based on their chemical contents. 3D neuronavigation overlays are then generated and applied to anatomical image data in the operating room. Without such technology, neurosurgeons must rely on memory and other qualitative resources alone for guidance in accessing specific MRSI-identified voxels. In contrast, MRSI-based overlays provide quantitative visual cues and location information during neurosurgery. The proposed methods enable a progressive new form of online MRSI-guided neuronavigation that we demonstrate in this study through phantom validation and clinical application.
Ultrasound Image Guidance: Joint Session with Conferences 8671 and 8675
PURPOSE: Ultrasound-guided tracked navigation requires spatial calibration between the ultrasound beam and the tracker. We examined the reproducibility and accuracy of two popular open-source calibration methods1 with a handheld linear ultrasound transducer. METHODS: A total of 10 calibrations were performed using (1) a double N-wire phantom with automatic image segmentation and registration, and (2) registration of landmark points collected with a tracked pointer. Reproducibility and accuracy were characterized by comparing the resulting transformation matrices and by comparing ground truth landmark points. RESULTS: Transformation matrices calculated with the N-wire phantom showed a variance of X: 0.02 mm (in the direction of sound propagation), Y: 0.03 mm (in the direction of the transducer elements) and Z: 0.21 mm (in the elevation direction). Transformation matrices obtained with the tracked pointer showed a variance of X: 0.10 mm, Y: 0.10 mm and Z: 0.43 mm. Calibration accuracy was tested with ground truth cross-wire points. The N-wire phantom provided a calibration with a distance from ground truth of X: 2.44 ± 1.44 mm, Y: 1.21 ± 0.88 mm, and Z: 1.12 ± 0.82 mm. Tracked pointer calibration had a distance from ground truth of X: 0.23 ± 0.16 mm, Y: 0.62 ± 0.31 mm, and Z: 0.45 ± 0.33 mm. Distance from ground truth was significantly less (p<0.01) with the tracked pointer method in all directions. CONCLUSION: Calibration using a tracked pointer had a slightly greater variance; however, it showed better accuracy than calibrations obtained with the N-wire phantom.
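The tracked-pointer method reduces to a paired-point rigid registration. A standard SVD-based solution (Arun's method, assumed here since the abstract does not name its solver) looks like this:

import numpy as np

def rigid_register(src, dst):
    # Least-squares rigid transform mapping paired (N, 3) source landmarks
    # onto destination landmarks: dst ~ R @ src + t.
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs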
Two-dimensional ultrasound (2D US) imaging is commonly used for diagnostic and intraoperative guidance of
interventional abdominal procedures, including percutaneous thermal ablation of focal liver tumors with radiofrequency (RF) or microwave (MW) induced energy. However, in many situations 2D US may not provide enough anatomical detail and guidance information. Therefore, intra-procedural CT or MR imaging is used in many centers for guidance purposes. These modalities are costly and are mainly utilized to confirm tool placement rather than to guide the insertion. Three-dimensional ultrasound (3D US) has been introduced to address these issues. In this paper, we present our integrated solution to provide 3D US images using a newly developed mechanical transducer with a large field-of-view and without the need for external tracking devices, combining diagnostic and planning information of different modalities for intraoperative guidance. The system provides tools to segment the target(s), plan the treatment, and detect the ablation applicators during the procedure for guidance purposes. We present experimental results used to ensure that our system generates accurate measurements, along with our early clinical evaluation results. The results suggest that 3D US used for focal liver ablation can provide a more reliable planning and guidance tool than 2D US alone, and in many cases offers measurements comparable to other alternatives at significantly lower cost, in less time, and with no ionizing radiation.
The accurate identification of tumor margins during neurosurgery is a primary concern for the surgeon in order to maximize resection of malignant tissue while preserving normal function. The use of preoperative imaging for guidance is standard of care, but tumor margins are not always clear even when contrast agents are used, and so margins are often determined intraoperatively by visual and tactile feedback. Ultrasound strain imaging creates a quantitative representation of tissue stiffness which can be used in real time. The information offered by strain imaging can be placed within a conventional image-guidance workflow by tracking the ultrasound probe and calibrating the image plane, which facilitates interpretation of the data by placing it within a common coordinate space with preoperative imaging. Tumor geometry in strain imaging is then directly comparable to the geometry in preoperative imaging. This paper presents a tracked ultrasound strain imaging system capable of co-registering with preoperative tomograms and of reconstructing a 3D surface from the border of the strain lesion. In a preliminary study using four phantoms with subsurface tumors, tracked strain imaging was registered to preoperative image volumes and tumor surfaces were reconstructed using contours extracted from strain image slices. The volumes of the phantom tumors reconstructed from tracked strain imaging were approximately 1.5 to 2.4 cm3, similar to the CT volumes of 1.0 to 2.3 cm3. Future work will robustly characterize the reconstruction accuracy of the system.
Laser interstitial thermal therapy (LITT) has recently shown great promise as a treatment strategy for localized, focal, low-grade, organ-confined prostate cancer (CaP). Additionally, LITT is compatible with multi-parametric magnetic resonance imaging (MP-MRI), which in turn enables (1) high resolution, accurate localization of ablation zones on in vivo MP-MRI prior to LITT, and (2) real-time monitoring of temperature changes in vivo via MR thermometry during LITT. In spite of rapidly increasing interest in the use of LITT for treating low-grade, focal CaP, very little is known about treatment-related changes following LITT. There is thus a clear need for studying post-LITT changes via MP-MRI and consequently attempting to (1) quantitatively identify MP-MRI markers predictive of favorable treatment response and longer-term patient outcome, and (2) identify which MP-MRI markers are most sensitive to post-LITT changes in the prostate. In this work, we present the first attempt at examining focal treatment-related changes on a per-voxel basis (high resolution) via quantitative evaluation of MR parameters pre- and post-LITT. A retrospective cohort of MP-MRI data comprising both pre- and post-LITT T2-weighted (T2w) and diffusion-weighted (DWI) acquisitions was considered, where DWI MRI yielded an Apparent Diffusion Coefficient (ADC) map. A spatially constrained affine registration scheme was implemented to first bring T2w and ADC images into alignment within each of the pre- and post-LITT acquisitions, following which the pre- and post-LITT acquisitions were aligned. Pre- and post-LITT MR parameters (T2w intensity, ADC value) were then standardized to a uniform scale (to correct for intensity drift) and quantified via the raw intensity values as well as via texture features derived from T2w MRI. In order to quantify imaging changes as a result of LITT, absolute differences were calculated between the normalized pre- and post-LITT MRI parameters. Quantitatively combining the ADC and T2w MRI parameters enabled construction of an integrated MP-MRI difference map that was highly indicative of changes specific to the LITT ablation zone. Preliminary quantitative comparison of the changes in different MR parameters indicated that T2w texture may be highly sensitive as well as specific in identifying changes within the ablation zone pre- and post-LITT. Visual evaluation of the differences in T2w texture features pre- and post-LITT also appeared to provide an indication of LITT-related effects such as edema. Our preliminary results thus indicate great potential for non-invasive MP-MRI imaging markers for determining focal treatment-related changes, and hence long- and short-term patient outcome.
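A hedged sketch of the standardize-then-difference step: z-scoring stands in for the paper's intensity standardization scheme (which is not detailed here), and the maps are assumed to be co-registered.

import numpy as np

def standardized_difference(pre, post):
    # Standardize each co-registered parameter map to correct intensity
    # drift, then take the per-voxel absolute difference.
    z = lambda img: (img - img.mean()) / img.std()
    return np.abs(z(post) - z(pre))

# A combined multi-parametric difference map could, for instance, weight the
# per-parameter maps equally:
# combined = 0.5 * (standardized_difference(t2_pre, t2_post) +
#                   standardized_difference(adc_pre, adc_post))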
We present a novel flexure-based wrist design intended for use with needle-sized robotic manipulators. It is designed to be mounted at the tip of a traditional surgical needle, deployed through an endoscope working channel, or attached to the tip of a concentric tube robot. In all these applications, the wrist enables dexterity in small spaces. The wrist consists of two stacked flexure joints that are actuated by thin pull wires. In this paper we present the design of the wrist, its kinematics, and an experimental evaluation of the relationship between actuation force and tip displacement conducted using a scale model.
Vascular and microvascular anastomosis is considered to be the foundation of plastic and reconstructive surgery, hand surgery, transplant surgery, vascular surgery and cardiac surgery. In the last two decades, innovative techniques such as vascular coupling devices, thermo-reversible poloxamers and suture-less cuffs have been introduced. Intra-operative surgical guidance using an imaging modality that provides in-depth views and 3D imaging can improve outcomes following both conventional and innovative anastomosis techniques. Optical coherence tomography (OCT) is a noninvasive, high-resolution (micron level), high-speed 3D imaging modality that has been adopted widely in biomedical and clinical applications. In this work we performed a proof-of-concept evaluation of OCT as an assistive intraoperative and post-operative imaging modality for microvascular anastomosis of rodent femoral vessels. The OCT system provided 12 μm lateral and 3.0 μm axial resolution in air at an imaging speed of 0.27 volumes/s, giving the surgeon a clear view of the vessel lumen wall and of the suture needle position relative to the vessel during intraoperative imaging. Graphics processing unit (GPU) accelerated phase-resolved Doppler OCT (PRDOCT) imaging of the surgical site was performed as a post-operative evaluation of the anastomosed vessels and to visualize blood flow and thrombus formation. This information could help surgeons improve surgical precision in this highly challenging anastomosis of rodent vessels with diameters less than 0.5 mm. Our imaging modality could not only detect accidental suturing through the back wall of the lumen but also promptly diagnose and predict thrombosis immediately after reperfusion. Hence, real-time OCT can assist the decision-making process intra-operatively and help avoid post-operative complications.
Monitoring temperature, and thereby the final treatment zone achieved, during cone-beam CT (CBCT) guided ablation can prevent overtreatment and undertreatment. A novel method is proposed to detect changes in consecutive CBCT images obtained from projection reconstructions during an ablation procedure. We explore the possibility of using this method to generate thermometry maps from CBCT images, which can be used as an input function for ablation treatment planning. The method uses a baseline and an intermittent CBCT scan, which are routinely acquired to confirm the needle position and monitor progress of the ablation. Accurate registration is required and is assumed in vitro and ex vivo. A Wronskian change detector algorithm is applied to the compensated images to obtain a difference image between the intermittent and baseline scans. Finally, a thermal map created by applying an experimentally determined calibration is used to obtain the corresponding temperature at each pixel or voxel. We applied the Wronskian change detector to detect differences between two CBCT images, which have low signal-to-noise ratio, and calibrated the Wronskian change model to temperature data using a gel phantom. We tested the temperature mapping with water and gel phantoms as well as a pig shoulder. The experimental results show that this method can detect temperature change within 5°C for a voxel size of 1mm3 (within clinical relevance) and, by consequence, delineate the ablation zone. The preliminary experimental results show that CBCT thermometry is possible and promising, but may require pre-processing, such as registration for motion compensation between the baseline and intermittent scans. Furthermore, quantitative evaluations have to be conducted for validation prior to clinical assessment and translation. CBCT is a widely available technology that could make thermometry clinically practical as an enabling component of iterative ablation treatment planning.
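A hedged, per-pixel sketch of a Wronskian-style change measure; one published formulation uses w = mean(r**2) - mean(r) with r the current-to-baseline intensity ratio over a neighborhood, and whether this abstract's detector matches that exact form is an assumption.

import numpy as np

def wronskian_change(baseline, current, eps=1e-6):
    # Per-pixel simplification of the Wronskian change measure:
    # zero wherever the two registered scans agree (r = 1).
    r = (current.astype(float) + eps) / (baseline.astype(float) + eps)
    return r ** 2 - r

# An experimentally determined calibration then maps the change values to
# temperature, e.g. temperature = a * wronskian_change(b0, b1) + c.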
In noninvasive high intensity focused ultrasound (HIFU) treatment, we often need to register MR images acquired with different patient positioning or at different respiratory instances. In these scenarios, abdominal organs such as the liver exhibit large motion between images. In our previous work, we proposed a fast neuro-fuzzy technique for deformable registration with small motion. In this study, based on elastic solid mechanics, we extend our previous results to deformation with large motion, which is often the case for soft tissues in HIFU treatment. The proposed method minimizes the strain energy of soft tissues, constrained by 3D curves of blood vessels and point marks. It provides fast and robust deformable matching of internal structures such as blood vessels and eliminates local minima. Furthermore, the strain energy constraint provides good generalization properties, prevents overfitting, and leads to physically consistent deformable registration results. We have demonstrated the effectiveness of our deformable technique in registering MR liver images. Validation shows a target registration error of 2.31 mm and an average centerline distance error of 2.30 mm. This technique has the potential to significantly improve registration capability and the quality of intra-operative image guidance in HIFU procedures.
Inspection of the urinary bladder with an endoscope (cystoscope) is the usual procedure for early detection of bladder cancer. The very limited field of view provided by the endoscope makes it challenging to ensure that the interior bladder wall has been examined completely. Panorama imaging techniques can be used to assist the surgeon and provide a larger field of view. Different approaches have been proposed, but generating a panorama image of the entire bladder from real patient data is still a challenging research topic. We propose a graph-based and hierarchical approach to address this problem: several local panorama images are generated first, followed by a global textured three-dimensional reconstruction of the organ. In this contribution, we address details of the first level of the approach, including a graph-based algorithm to deal with the challenging conditions of in-vivo data. This graph strategy gives rise to a robust relocalization strategy in case of tracking failure, an effective keyframe selection process, and the concept of building locally optimized sub-maps, which lay the groundwork for a global optimization process. Our results show the successful application of the method to four in-vivo data sets.
A framework has been investigated to enable a variety of comparative studies in the context of needle-based gynaecological brachytherapy. Our aim was to create an anthropomorphic phantom-based platform. The three main elements of the platform are the organ model, needle guide, and needle drive. These have been studied and designed to replicate the close environment of brachytherapy treatment for cervical cancer. Key features were created with the help of collaborating interventional radio-oncologists and the observations made in the operating room. A phantom box, representing the uterus model, has been developed considering available surgical analogies and operational limitations, such as organs at risk. A modular phantom-based platform has been designed and prototyped with the capability of providing various boundary conditions for the target organ. By mimicking the female pelvic floor, this framework has been used to compare a variety of needle insertion techniques and configurations for cervical and uterine interventions. The results showed that the proposed methodology is useful for the investigation of quantifiable experiments in the intra-abdominal and pelvic regions.
Laparoscopic surgery is a minimally invasive surgical approach in which abdominal surgical procedures are performed through trocars via small incisions. Patients benefit from reduced postoperative pain, shortened hospital stays, improved cosmetic results, and faster recovery times. Optimal port placement can improve surgeon dexterity and avoid the need to move the trocars, which would cause unnecessary trauma to the patient. We are building an intuitive open-source visualization system to help surgeons identify ports. Our methodology is based on an intuitive port placement visualization module and an atlas-based registration algorithm to transfer port locations to individual patients. The methodology follows three steps: 1) use the port placement visualization module to manually place ports in an abdominal organ atlas, generating a port-augmented abdominal atlas (this is done only once for a given patient population); 2) register the atlas data with the patient CT data to transfer the prescribed ports to the individual patient; 3) review and adjust the transferred port locations using the port placement visualization module. Tool maneuverability and target reachability can be tested using the visualization system. Our methodology decreases the amount of physician input necessary to optimize port placement for each patient case. In follow-up work, we plan to use the transferred ports as a starting point for further optimization of the port locations by formulating a cost function that takes into account factors such as tool dexterity and the likelihood of collision between instruments.
We present a new method for patient-specific liver deformation modeling for tumor tracking. Our method focuses on deforming the two main blood vessels of the liver, the hepatic and portal veins, to utilize them as features. A novel centerline editing algorithm based on ellipse fitting is introduced for vessel deformation. A centerline-based blood vessel model and various interpolation methods are often used for generating a deformed model at a specific time t. However, this may introduce artifacts when the models used in the interpolation are not consistent. One of the main reasons for this inconsistency is that the location of the bifurcation points differs from image to image. To solve this problem, our method generates a base model from one of the patient's CT images. Next, we apply a rigid iterative closest point (ICP) method to the base model with the centerlines of the other images. Because the transformation is rigid, the length of each vessel's centerline is preserved, while parts of the centerline deviate slightly from the centerlines of the other images. We resolve this mismatch using our centerline editing algorithm. Finally, we interpolate the three deformed models of the liver, blood vessels, and tumor using quadratic Bézier curves. We demonstrate the effectiveness of the proposed approach with real patient data.
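A minimal sketch of the quadratic Bézier interpolation step, under the assumption that the three deformed model states serve as the curve's control points (the abstract does not state the parameterization explicitly):

import numpy as np

def quadratic_bezier(p0, p1, p2, t):
    # Quadratic Bezier blend of three (N, 3) vertex arrays at scalar t in [0, 1].
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2

# Example: interpolate a mesh between three deformed model states.
states = [np.random.rand(100, 3) for _ in range(3)]
mesh_at_t = quadratic_bezier(*states, t=0.25)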
This paper presents a novel method of using 2D ultrasound (US) cine images during image-guided therapy to accurately track the 3D position of a tumor even when the organ of interest is moving due to patient respiration. Tracking is possible thanks to a 3D deformable organ model we have developed. The method consists of three processes in succession. The first process is organ modeling, where we generate a personalized 3D organ model from high-quality 3D CT or MR data sets captured during three different respiratory phases. The model includes the organ surface, vessels and tumor, which can all deform and move in accord with patient respiration. The second process is registration of the organ model to 3D US images. From 133 respiratory phase candidates generated from the deformable organ model, we select the candidate that best matches the 3D US images according to the vessel centerlines and surface; as a result, we can determine the position of the US probe. The final process is real-time tracking using 2D US cine images captured by the US probe. We determine the respiratory phase by tracking the diaphragm on the image. The 3D model is then deformed according to the respiratory phase and fitted to the image by considering the positions of the vessels. The tumor's 3D position is then inferred from the respiratory phase. Testing our method on real patient data, we found that the 3D position error is within 3.79 mm and the processing time during tracking is 5.4 ms.
Tumor tracking is very important for treating cancer in a moving organ in clinical applications such as radiotherapy and HIFU. Respiratory monitoring systems are widely used to locate cancers in such organs because the respiratory signal is highly correlated with the movement of organs such as the lungs and liver. However, conventional respiratory monitoring systems are not accurate enough to track the location of a tumor, and they require additional effort or devices. In this paper, we propose a novel method to track a liver tumor in real time by extracting respiratory signals directly from B-mode images and using a deformed liver model generated from CT images of the patient. Our method has several advantages. 1) Because it uses an ultrasound device, it adds no radiation dose and is cost effective. 2) A high-quality respiratory signal can be extracted directly from 2D images of the diaphragm. 3) Using a deformed liver model to track the tumor's 3D position, our method achieves a tracking error of 3.79 mm.
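One plausible way to extract such a respiratory signal is to track the diaphragm as the brightest interface within a fixed column band of each B-mode frame; the band location and the brightness heuristic below are assumptions for illustration only.

    import numpy as np

    def respiratory_signal(frames, col_range=(100, 200)):
        # frames: (T, H, W) grayscale cine sequence. Returns a length-T
        # signal of diaphragm row positions (a surrogate for phase).
        signal = []
        for frame in frames:
            band = frame[:, col_range[0]:col_range[1]].mean(axis=1)
            signal.append(np.argmax(band))  # brightest row ~ diaphragm
        return np.asarray(signal, dtype=float)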
Adding MR-derived information to standard transrectal ultrasound (TRUS) images for guiding prostate biopsy is of substantial clinical interest. A tumor visible on MR images can be projected onto ultrasound by using MR-US registration. A common approach is to use surface-based registration. We hypothesize that biomechanical modeling will better control deformation inside the prostate than a regular surface-based registration method. We developed a novel method that extends surface-based registration with finite element (FE) simulation to better predict internal deformation of the prostate. For each of six patients, a tetrahedral mesh was constructed from the manual prostate segmentation. Next, the internal prostate deformation was simulated using the derived radial surface displacement as the boundary condition. The deformation field within the gland was calculated using the predicted FE node displacements and thin-plate spline interpolation. We tested our method on MR-guided MR biopsy imaging data, as landmarks can easily be identified on MR images. To evaluate registration accuracy we used 45 anatomical landmarks located in all regions of the prostate. Our results show that the median target registration error of surface-based registration with biomechanical regularization is 1.88 mm, significantly lower than the 2.61 mm obtained without biomechanical regularization. We conclude that biomechanical FE modeling has the potential to improve the accuracy of multimodal prostate registration compared to regular surface-based registration.
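The thin-plate spline interpolation step can be sketched with SciPy's RBFInterpolator; the node positions and displacements below are hypothetical stand-ins for the FE output.

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    nodes = np.random.rand(500, 3)               # FE node coordinates (mm)
    node_disp = 0.01 * np.random.randn(500, 3)   # FE-predicted displacements

    # Interpolate the displacement field to arbitrary points in the gland.
    tps = RBFInterpolator(nodes, node_disp, kernel='thin_plate_spline')
    query_points = np.random.rand(10, 3)
    displacements = tps(query_points)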
Current pharmacological therapies for the treatment of chronic optic neuropathies such as glaucoma are often inadequate due to their inability to directly affect the optic nerve and prevent neuron death. While drugs that target the neurons have been developed, existing methods of administration are not capable of delivering an effective dose of medication along the entire length of the nerve. We have developed an image-guided system that utilizes a magnetically tracked flexible endoscope to navigate to the back of the eye and administer therapy directly to the optic nerve. We demonstrate the capabilities of this system with a series of targeted surgical interventions in the orbits of live pigs. Target objects consisted of NMR microspherical bulbs with a volume of 18 μL filled with either water or diluted gadolinium-based contrast, and prepared with either the presence or absence of a visible coloring agent. A total of 6 pigs were placed under general anesthesia and two microspheres of differing color and contrast content were blindly implanted in the fat tissue of each orbit. The pigs were scanned with T1-weighted MRI, image volumes were registered, and the microsphere containing gadolinium contrast was designated as the target. The surgeon was required to navigate the flexible endoscope to the target and identify it by color. For the last three pigs, a 2D/3D registration was performed such that the target's coordinates in the image volume were noted and its location on the video stream was displayed with a crosshair to aid in navigation. The surgeon was able to correctly identify the target by color, with an average intervention time of 20 minutes for the first three pigs and 3 minutes for the last three.
It is widely believed that major factors in achieving atraumatic insertion of the electrode array into the cochlea in cochlear implant (CI) surgery include the amount of tissue resection, the selection of the entry point, and the angle of insertion. Our group is interested in developing an image guidance (IG) system for electrode insertion, provided IG can be shown to improve outcomes. Thus, in this work we conducted the first study evaluating whether IG could aid atraumatic electrode insertion. To do this, we measured the performance of experienced surgeons when tasked to perform cochleostomy resection and to select CI insertion trajectories in virtual 3D surgical field-of-view simulation software. This software, which simulates views through the surgical microscope, was designed to allow a user to manually perform cochleostomy resection and to select a preferred insertion trajectory in one of two modes: (a) where the traditional approach is simulated and sub-surface anatomy is not visible; and (b) where an IG approach is simulated and the surgical view is augmented with a rendering of subsurface intra-cochlear structures. We used this software to compare two surgeons' performance in selecting insertion trajectories with and without IG. Our results show that when using virtual IG, both surgeons could choose insertion trajectories with less variability, select higher-quality insertion trajectories, and create the cochleostomy with substantially less tissue resection. These results suggest that IG could indeed aid performance of atraumatic cochlear implantation techniques.
Stereotactic frames are a standard tool for neurosurgical targeting, but are uncomfortable for patients and obstruct the surgical field. Microstereotactic frames are more comfortable for patients, provide better access to the surgical site, and have grown in popularity as an alternative to traditional stereotactic devices. However, clinically available microstereotactic frames require either lengthy manufacturing delays or expensive image guidance systems. We introduce a robotically adjusted, disposable microstereotactic frame for deep brain stimulation surgery that eliminates the drawbacks of existing microstereotactic frames. Our frame can be automatically adjusted in the operating room using a preoperative plan in less than five minutes. A validation study on phantoms shows that our approach provides a target positioning error of 0.14 mm, which is well within the accuracy required for deep brain stimulation surgery.
C-arm fluoroscopy is used for guidance during several clinical exams, e.g. in bronchoscopy to locate the bronchoscope inside the airways. Unfortunately, these images provide only 2D information. However, if the C-arm pose is known, it can be used to overlay the intra-interventional fluoroscopy images with 3D visualizations of the airways, acquired from pre-interventional CT images. Thus, the physician's view is enhanced and localization of the instrument at the correct position inside the bronchial tree is facilitated. We present a novel method for C-arm pose estimation that introduces a marker-based pattern placed on the patient table. The steel markers form a pattern that allows the C-arm pose to be deduced using the projective-invariant cross-ratio. Simulations show that the C-arm pose estimation is reliable and accurate for translations inside an imaging area of 30 cm x 50 cm and rotations up to 30°. Mean error values are 0.33 mm in 3D space and 0.48 px in the 2D imaging plane. First tests on C-arm images yielded similarly compelling accuracy values and high reliability in an imaging area of 30 cm x 42.5 cm. Even in the presence of interfering structures, tested with both anatomy phantoms and a turkey cadaver, success rates over 90% and execution times below 4 sec for 1024 px × 1024 px images were achieved.
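For reference, the cross-ratio of four collinear points, the projective invariant that makes the marker pattern identifiable under perspective projection, can be computed as in this sketch (the points are assumed ordered along their common line).

    import numpy as np

    def cross_ratio(a, b, c, d):
        # (AC * BD) / (BC * AD) on distances along the common line;
        # this value is preserved by perspective projection, so it can
        # be matched between the known pattern and its image.
        ac = np.linalg.norm(c - a)
        bd = np.linalg.norm(d - b)
        bc = np.linalg.norm(c - b)
        ad = np.linalg.norm(d - a)
        return (ac * bd) / (bc * ad)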
In trauma and orthopedic surgery, screw assessment and trajectory prediction from two-dimensional X-ray images is difficult because 3D information is lost in projection. However, screw assessment can be performed using multiple X-ray images. If an X-ray image contains the projected implant geometry, the implant can serve as a global coordinate reference; multiple independent X-ray images can then be synchronized by estimating the implant pose in each image. Consequently, highly accurate pose estimation is fundamental. To measure outcome quality, an evaluation process has been designed. In its first step, the evaluation process analyzes several clinical intra-operative anterior-posterior (AP) and medio-lateral (ML) X-ray images using a manual pose estimation method, which estimates the six 3D parameters of the implant pose. These parameters also define the camera pose relative to the implant. Based on the pose parameters of all clinical cases, the capturing range for typical AP and ML images is statistically defined. The implant was attached to a phantom with 16 steel balls, which allows the ground-truth pose to be calculated. Several X-ray images of the phantom were then taken within the statistically defined capturing range. With the known ground truth, different pose estimation methods can be compared and the estimation quality of each method calculated. In addition, this error calculation can be used to adjust the initial, manually determined capturing range. This paper explains the error evaluation process and describes how to validate pose estimation methods for clinical applications.
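A hedged sketch of the error calculation between an estimated and a ground-truth implant pose; the rotation-angle and translation-norm metrics are common choices, not necessarily the exact measures used in this evaluation.

    import numpy as np

    def pose_error(R_est, t_est, R_gt, t_gt):
        # Rotation error: angle (degrees) of the relative rotation;
        # translation error: Euclidean distance (e.g., mm).
        dR = R_est @ R_gt.T
        angle = np.degrees(np.arccos(np.clip((np.trace(dR) - 1) / 2, -1, 1)))
        return angle, np.linalg.norm(np.asarray(t_est) - np.asarray(t_gt))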
Intraoperative imaging could improve patient safety and quality assurance (QA) via the detection of subtle complications that might otherwise only be found hours after surgery. Such capability could therefore reduce morbidity and the need for additional intervention. Among the severe adverse events that could be more quickly detected by high-quality intraoperative imaging is acute intracranial hemorrhage (ICH), conventionally assessed using post-operative CT. A mobile C-arm capable of high-quality cone-beam CT (CBCT) in combination with advanced image reconstruction techniques is reported as a means of detecting ICH in the operating room. The system employs an isocentric C-arm with a flat-panel detector in dual-gain mode, correction of x-ray scatter and beam-hardening, and a penalized likelihood (PL) iterative reconstruction method. Performance in ICH detection was investigated using a quantitative phantom focusing on (non-contrast-enhanced) blood-brain contrast, an anthropomorphic head phantom, and a porcine model with injection of fresh blood bolus. The visibility of ICH was characterized in terms of contrast-to-noise ratio (CNR) and qualitative evaluation of images by a neurosurgeon. Across a range of ICH size and contrast as well as radiation dose from the CBCT scan, the CNR was found to increase from ~2.2-3.7 for conventional filtered backprojection (FBP) to ~3.9-5.4 for PL at equivalent spatial resolution. The porcine model demonstrated superior ICH detectability for PL. The results support the role of high-quality mobile C-arm CBCT employing advanced reconstruction algorithms for detecting subtle complications in the operating room at lower radiation dose and lower cost than intraoperative CT scanners and/or fixed-room C-arms. Such capability could present a potentially valuable aid to patient safety and QA.
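The contrast-to-noise ratio used to quantify ICH visibility can be computed roughly as below; the exact noise definition in the study may differ (e.g., noise pooled over both regions).

    import numpy as np

    def contrast_to_noise_ratio(image, roi_mask, background_mask):
        # CNR = |mean_ROI - mean_background| / std_background
        roi, bg = image[roi_mask], image[background_mask]
        return abs(roi.mean() - bg.mean()) / bg.std()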
Laser-induced interstitial thermal therapy (LITT) has recently emerged as a new, less invasive alternative to craniotomy for treating epilepsy; it allows focused delivery of laser energy, monitored in real time by MRI, for precise removal of the epileptogenic foci. Despite being minimally invasive, the effects of laser ablation on the epileptogenic foci (reflected by changes in MR imaging markers post-LITT) are currently unknown. In this work, we present a quantitative framework for evaluating LITT-related changes by quantifying per-voxel changes in MR imaging markers, which may be more reflective of local treatment-related changes (TRC) that occur post-LITT than the standard volumetric analysis, which monitors a more global volume change across pre- and post-LITT MRI. Our framework focuses on three objectives: (a) development of temporal MRI signatures that characterize TRC corresponding to patients with seizure freedom, by comparing differences in MR imaging markers and monitoring them over time; (b) identification of the optimal time point when early LITT-induced effects (such as edema and mass effect) subside, by monitoring TRC at subsequent time points post-LITT; and (c) identification of the contributions of individual MRI protocols towards characterizing LITT TRC for epilepsy, by identifying the MR markers that change most dramatically over time and employing their individual contributions to create a weighted MP-MRI temporal profile that can characterize TRC better than any individual imaging marker. A cohort of patients was monitored at different time points post-LITT via MP-MRI involving T1-w, T2-w, T2-GRE, T2-FLAIR, and apparent diffusion coefficient (ADC) protocols. After affine registration of the individual MRI protocols to a reference MRI protocol pre-LITT, differences in individual MR markers were computed on a per-voxel basis at different time points, with respect to the baseline (pre-LITT) MRI as well as across subsequent time points. A time-dependent MRI profile corresponding to successful (seizure-free) outcomes was then created that captures changes in individual MR imaging markers over time. Our preliminary analysis of two patient studies suggests that (a) LITT-related changes (attributed to swelling and edema) appear to subside within 4 weeks post-LITT, and (b) ADC may be more sensitive for evaluating early TRC (up to 3 months) and T1-w for evaluating early delayed TRC (1 month, 3 months), while T2-w and T2-FLAIR appeared more sensitive in identifying late TRC (around 6 months post-LITT) compared to the other MRI protocols under evaluation. T2-GRE was found to be only nominally sensitive in identifying TRC at any follow-up time point post-LITT. The framework presented in this work thus serves as an important precursor to a comprehensive treatment evaluation framework that can be used to identify sensitive MR markers corresponding to patient response (seizure freedom or seizure recurrence), with the ultimate objective of making prognostic predictions about patient outcome post-LITT.
Micro-beam radiation therapy (MRT) uses parallel planes of high-dose, narrow (10-100 μm in width) radiation beams separated by a fraction of a millimeter to treat cancerous tumors. This experimental therapy method, based on synchrotron radiation, has been shown to spare normal tissue at up to 1000 Gy of entrance dose while still being effective in tumor eradication and extending the lifetime of tumor-bearing small animal models. Motion during treatment can significantly shift the micro-beam positions, resulting in broader beam width and a lower peak-to-valley dose ratio (PVDR), and thus can reduce the effectiveness of MRT. Recently we developed the first bench-top image-guided MRT system for small animal treatment using a high-power carbon nanotube (CNT) x-ray source array. The CNT field emission x-ray source can be electronically synchronized to an external triggering signal to enable physiologically gated firing of x-ray radiation, minimizing motion blurring. Here we report the results of a phantom study of respiratory-gated MRT. Mouse breathing was simulated using a servo motor. Preliminary results show that without gating the micro-beam full width at tenth maximum (FWTM) can increase by 70% and the PVDR can decrease by up to 50%; with proper gating, changes in both beam width and PVDR are negligible. Future experiments will involve irradiation of mouse models and comparison of histology stains between controls and gated irradiation.
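As a rough sketch, PVDR and FWTM can be estimated from a sampled 1-D dose profile as follows; the threshold-counting FWTM estimate assumes a single beam within the profile window.

    import numpy as np

    def pvdr(profile):
        # Peak-to-valley dose ratio of a micro-beam dose profile.
        return profile.max() / profile.min()

    def fwtm(profile, spacing):
        # Full width at tenth maximum, approximated by counting samples
        # above 10% of the peak and multiplying by the sample spacing.
        return (profile >= 0.1 * profile.max()).sum() * spacing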
A method for the construction of a patient-specific model of a scoliotic torso for surgical planning via inter-patient registration is presented. Magnetic Resonance Images (MRI) of a generic model are registered to surface topography (TP) and X-ray data of a test patient. A partial model is first obtained via thin-plate spline registration between TP and X-ray data of the test patient. The MRIs from the generic model are then fit into the test patient using articulated model registration between the vertebrae of the generic model's MRIs in prone position and the test patient's X-rays in standing position. A non-rigid deformation of the soft tissues is performed using a modified thin-plate spline constrained to maintain bone rigidity and to fit in the space between the vertebrae and the surface of the torso. Results show average Dice values of 0.975 ± 0.012 between the MRIs following inter-patient registration and the surface topography of the test patient, which is comparable to the average value of 0.976 ± 0.009 previously obtained following intra-patient registration. The results also show a significant improvement compared to rigid inter-patient registration. Future work includes validating the method on a larger cohort of patients and incorporating soft tissue stiffness constraints. The method developed can be used to obtain a geometric model of a patient including bone structures, soft tissues and the surface of the torso which can be incorporated in a surgical simulator in order to better predict the outcome of scoliosis surgery, even if MRI data cannot be acquired for the patient.
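For reference, the Dice values reported above can be computed between two binary volumes as in this short sketch.

    import numpy as np

    def dice(mask_a, mask_b):
        # Dice similarity coefficient between two binary volumes.
        a, b = mask_a.astype(bool), mask_b.astype(bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())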
In SPECT imaging, motion from patient respiration and body motion can introduce image artifacts that may reduce the diagnostic quality of the images. Simulation studies using numerical phantoms with precisely known motion can help to develop and evaluate motion correction algorithms. Previous methods for evaluating motion correction algorithms used either manual or semi-automated segmentation of MRI studies to produce patient models in the form of XCAT Phantoms, from which one calculates the transformation and deformation between MRI study and patient model. Both manual and semi-automated methods of XCAT Phantom generation require expertise in human anatomy, with the semi-automated method requiring up to 30 minutes and the manual method requiring up to eight hours. Although faster than manual segmentation, the semi-automated method still requires a significant amount of time, is not replicable, and is subject to errors due to the difficulty of aligning and deforming anatomical shapes in 3D. We propose a new method for matching patient models to MRI that extends the previous semi-automated method by eliminating the manual non-rigid transformation. Our method requires no user supervision and therefore does not require expert knowledge of human anatomy to align the NURBS to anatomical structures in the MR image. Our contribution is employing the SIMRI MRI simulator to convert the XCAT NURBS to a voxel-based representation that is amenable to automatic non-rigid registration. Registration is then used to transform and deform the NURBS to match the anatomy in the MR image. We show that our automated method generates XCAT Phantoms more robustly and significantly faster than the previous semi-automated method.
A novel technique is proposed to characterize the variation of lung tissue incompressibility during respiration. This variation stems from the significant change in the air content of the tissue throughout respiration. Estimating lung tissue incompressibility and its variation is critical for computer-assisted tumor motion tracking. Continuous tumor motion during respiration is a major challenge in lung cancer treatment by external beam radiotherapy; if not accounted for, this motion leads to areas of radiation overdose in normal lung tissue. Since no effective imaging modality is available for real-time lung tumor tracking, computer-based modeling capable of accurate tissue deformation estimation is a good alternative. Lung tissue deformation can be estimated using a finite element (FE) model of the lung, whose accuracy depends on the input tissue biomechanical properties, including the incompressibility parameter. In this research, an optimization algorithm is proposed to estimate the incompressibility parameter as a function of respiration cycle time. In this algorithm, the incompressibility parameter and lung pressure values are varied systematically until optimal values, which yield maximum similarity between acquired and simulated 4D CT images of the lung, are achieved for each respiration time point. The simulated images are constructed using a reference image in conjunction with the deformation field obtained from the lung's FE model at each respiration time increment. We demonstrate that utilizing the calculated function along with FE modeling of the respiratory system leads to accurate tumor targeting, hence potentially improving lung radiotherapy outcome.
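A minimal sketch of the proposed parameter search; run_fe_model and warp_image are hypothetical stand-ins for the FE solver and image-warping steps, and negative correlation serves as the dissimilarity to minimize.

    import numpy as np
    from scipy.optimize import minimize

    def run_fe_model(incompressibility, pressure):
        # Stand-in for the lung FE solver; a real implementation would
        # return the displacement field for these parameter values.
        return np.zeros((16, 16, 16, 3))

    def warp_image(image, displacement_field):
        # Stand-in for warping the reference image with the FE field.
        return image

    def dissimilarity(params, reference_image, acquired_image):
        incompressibility, pressure = params
        simulated = warp_image(reference_image,
                               run_fe_model(incompressibility, pressure))
        r = np.corrcoef(simulated.ravel(), acquired_image.ravel())[0, 1]
        return -r  # maximizing similarity = minimizing -correlation

    reference_image = np.random.rand(16, 16, 16)  # stand-in images
    acquired_image = np.random.rand(16, 16, 16)
    result = minimize(dissimilarity, x0=[0.49, 1.0],
                      args=(reference_image, acquired_image),
                      method='Nelder-Mead')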
Purpose: Electromagnetic (EM)-tracked ultrasound (US)-guided needle navigation systems have potential use in spinal interventions; however, an assessment of the accuracy of these systems is required. Analysis of these systems involves examining the overall error of the system and the error of its components. The purpose of this study is to estimate the error components in an EM-tracked US-guided needle navigation system, and to determine the relationships between them, specifically for evaluation of US probe calibration. Methods: The main parts of the experimental setup are the US probe, the tracker, and the needle. The system error is examined by imaging the tracked needle with the US probe. The positional tracking error is tested for multiple needle, probe and reference sensors using a 7×9 grid with 4 cm spacing between points. Needle calibration error is evaluated by pivot calibration. An upper bound for the probe calibration error is then estimated using a series of transformations between the tracker and the needle tip position. Results: For all experiments, the mean error and its standard deviation increase as a function of distance from the tracker. The upper bound of the US probe calibration error is estimated to be 1.81 mm. Conclusion: Operating distance has a significant impact on component error, and the optimal operating distance for the presented setup was identified. Although US probe calibration error cannot be measured directly, its upper bound has been estimated by assessing the errors in the other components of the system.
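Needle (pivot) calibration by least squares can be sketched as follows; the formulation R_i t_tip + p_i = p_pivot is the standard one, though the exact implementation used in this study is not specified.

    import numpy as np

    def pivot_calibration(rotations, translations):
        # rotations: (N, 3, 3) sensor rotations; translations: (N, 3).
        # Stacks R_i @ t_tip - p_pivot = -p_i for all poses and solves
        # for the tip offset (sensor frame) and the fixed pivot point.
        n = len(rotations)
        A = np.zeros((3 * n, 6))
        b = np.zeros(3 * n)
        for i, (R, p) in enumerate(zip(rotations, translations)):
            A[3 * i:3 * i + 3, :3] = R
            A[3 * i:3 * i + 3, 3:] = -np.eye(3)
            b[3 * i:3 * i + 3] = -p
        x = np.linalg.lstsq(A, b, rcond=None)[0]
        return x[:3], x[3:]  # t_tip, p_pivot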
Using computational models, images acquired pre-operatively can be updated to account for intraoperative brain shift in image-guided surgical (IGS) systems. An optically tracked textured laser range scanner (tLRS) furnishes the 3D coordinates of cortical surface points (3D point clouds) over the surgical field of view and provides a correspondence between these and the pre-operative MR image. However, integrating the acquired tLRS data into a clinically acceptable system compatible with the workflow of tumor resection has been challenging, because acquiring tLRS data requires moving the scanner in and out of the surgical field, which limits the number of acquisitions. Large differences between acquisitions caused by tumor resection and tissue manipulation make it difficult to establish correspondence and estimate brain motion. An alternative to the tLRS is to use the temporally dense, feature-rich stereo video data provided by the operating microscope. This allows for quick digitization of the cortical surface in 3D and can help continuously update the IGS system. To understand the tradeoffs between these approaches as input to an IGS system, we compare in this paper the accuracy of 3D point clouds extracted from the stereo video system of the surgical microscope and from the tLRS for phantom objects. We show that the stereo vision system of the surgical microscope achieves accuracy in the 0.46-1.5 mm range on our phantom objects and is a viable alternative to the tLRS for neurosurgical applications.
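One way to score such point clouds against a reference (assuming phantom ground-truth points are available) is the mean closest-point distance, as in this sketch.

    import numpy as np
    from scipy.spatial import cKDTree

    def cloud_to_cloud_error(reconstructed, reference):
        # Mean distance from each reconstructed 3-D point (e.g., from
        # stereo vision) to its nearest neighbor in the reference cloud.
        distances, _ = cKDTree(reference).query(reconstructed)
        return distances.mean()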
Visual marker-based tracking is one of the most widely used tracking techniques in Augmented Reality (AR) applications. Generally, multiple square markers are needed to perform robust and accurate tracking. Various marker-based methods for calibrating relative marker poses have been proposed. However, the calibration accuracy of these methods relies on the order of the image sequence and on pre-evaluation of pose-estimation errors, making them offline. Several studies have shown that the accuracy of pose estimation for an individual square marker depends on camera distance and viewing angle. We propose an online method, based on the Scaled Unscented Transform (SUT), to accurately model the error in the camera pose and translation estimated from a single marker. Thus, the pose of each marker can be estimated with highly accurate calibration results, independent of the order of the image sequence, compared to cases where this knowledge is not used. This removes the need for multiple markers and for an offline estimation system to calculate camera pose in an AR application.
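For orientation, the sigma points of the scaled unscented transform, through which an input uncertainty (e.g., on detected marker corners) would be propagated, can be generated as below; each point is pushed through the pose estimator and the output mean and covariance are re-estimated from the weighted results. The parameter values are conventional defaults, not necessarily those used here.

    import numpy as np

    def sigma_points(mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
        # Standard scaled-unscented-transform sigma points and weights
        # for a Gaussian with the given mean and covariance.
        n = len(mean)
        lam = alpha ** 2 * (n + kappa) - n
        L = np.linalg.cholesky((n + lam) * cov)  # matrix square root
        pts = ([mean] + [mean + L[:, i] for i in range(n)]
               + [mean - L[:, i] for i in range(n)])
        wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
        wc = wm.copy()
        wm[0] = lam / (n + lam)
        wc[0] = wm[0] + (1 - alpha ** 2 + beta)
        return np.array(pts), wm, wc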
This study performs 3D to 2D rigid registration of segmented pre-operative CTA coronary arteries with a single segmented intra-operative X-ray angiography frame, in both the frequency and spatial domains, for real-time angiography interventions under C-arm fluoroscopy. Most prior work on rigid registration required a close initialization of poses and/or positions because of the abundance of local minima and the high complexity that search algorithms face. This study avoids such setbacks by transforming the projections into the translation-invariant Fourier domain to estimate the 3D pose. First, template DRRs representing candidate poses of the 3D vessels segmented from CTA are produced by rotating the camera (image intensifier) over a wide range around the DICOM angle values, as in a C-arm setup. We compared the 3D poses of the template DRRs with the real X-ray, after equalizing the scales (to account for disparities in focal length), in three domains, namely Fourier magnitude, Fourier phase and Fourier polar. The best pose candidate was chosen as the one with the highest similarity measure returned by the methods in these domains. It has been noted in the literature that these methods are robust to noise and occlusion, which our results also confirmed. Translation of the volume was then recovered by distance-map-based BFGS optimization, well suited to the convex structure of our objective function, which is free of local minima thanks to the distance maps. Final results were evaluated in 2D projection space rather than with actual values in 3D, owing to the lack of ground truth and the ill-posedness of the problem, which we intend to address in future work.
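A sketch of a translation-invariant comparison in the Fourier magnitude domain, one of the three domains mentioned; normalized correlation of magnitude spectra is assumed here as the similarity measure.

    import numpy as np

    def fourier_magnitude_similarity(template_drr, xray):
        # |FFT| discards phase, so in-plane translation of the vessel
        # projection leaves this descriptor (nearly) unchanged; pose
        # candidates can thus be ranked before recovering translation.
        ma = np.abs(np.fft.fft2(template_drr)).ravel()
        mb = np.abs(np.fft.fft2(xray)).ravel()
        ma = (ma - ma.mean()) / ma.std()
        mb = (mb - mb.mean()) / mb.std()
        return float(np.mean(ma * mb))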
All 2D/3D anatomy-based rigid registration algorithms are iterative, requiring an initial estimate of the 3D data pose. Current initialization methods have limited applicability in the operating room setting, due to the constraints imposed by this environment or due to insufficient accuracy. In this work we use the Microsoft Kinect device to allow the surgeon to interactively initialize the registration process. A Kinect sensor is used to simulate the mouse-based operations in a conventional manual initialization approach, obviating the need for physical contact with an input device. Different gestures from both arms are detected by the sensor in order to set or switch the required working contexts. 3D hand motion provides the six degree-of-freedom controls for manipulating the pre-operative data in 3D space. We evaluated our method for both X-ray/CT and X-ray/MR initialization using three publicly available reference data sets. Results show that, with initial target registration errors of 117.7 ± 28.9 mm, a user is able to achieve final errors of 5.9 ± 2.6 mm within 158 ± 65 sec using the Kinect-based approach, compared to 4.8 ± 2.0 mm and 88 ± 60 sec when using the mouse for interaction. Based on these results we conclude that this method is sufficiently accurate for initialization of X-ray/CT and X-ray/MR registration in the OR.
Medical imaging is an essential component of a wide range of surgical procedures1. For image-guided surgical (IGS) procedures, medical images are the main source of information2. IGS procedures rely largely on the acquired image data, so the data need to provide differentiation between normal and abnormal tissues, especially when other surgical guidance devices are used in the procedures. The image data also need to provide an accurate spatial representation of the patient3. This research has concentrated on accuracy assessment of IGS devices to meet the needs of quality assurance in the hospital environment. For this purpose, two precision-engineered accuracy assessment phantoms have been developed as advanced materials and methods for the community. The phantoms were designed to mimic the volume of a human head as the common region of surgical interest (ROSI). This paper introduces the use of the phantoms in spatial accuracy assessment of a commercial surgical 3D CT scanner, the O-Arm. The study presents methods and results for detecting possible geometric distortions in the region of surgical interest. The results show that, in the pre-determined ROSI, clear image distortion and artefacts appear when the objects are scanned with excessively high imaging parameters; when using optimal parameters, the O-Arm causes minimal error in IGS accuracy. The detected spatial inaccuracy of the O-Arm with the parameters used was less than 1.00 mm.
Fast, accurate, deformable image registration is an important aspect of image-guided interventions. Among the factors that can confound registration is the presence of additional material in the intraoperative image (e.g., contrast bolus or a surgical implant) that was not present in the prior image. Existing deformable registration methods generally fail to account for tissue excised between image acquisitions and typically simply "move" voxels within the images, with no ability to account for tissue that is removed or introduced between scans. We present a variant of the Demons algorithm to accommodate such content mismatch. The approach combines segmentation of mismatched content with deformable registration featuring an extra pseudo-spatial dimension representing a reservoir from which material can be drawn into the registered image. Previous work tested the registration method in the presence of tissue excision ("missing tissue"). The current paper tests the method in the presence of additional material in the target image and presents a general method by which either missing or additional material can be accommodated. The method was tested in phantom studies, simulations, and cadaver models in the context of intraoperative cone-beam CT with three examples of content mismatch: a variable-diameter bolus (contrast injection), a surgical device (rod), and additional material (bone cement). Registration accuracy was assessed in terms of difference images and normalized cross correlation (NCC). We identify the difficulties that traditional registration algorithms encounter when faced with content mismatch and evaluate the ability of the proposed method to overcome these challenges.
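The NCC metric used for assessment can be computed as in this short sketch.

    import numpy as np

    def normalized_cross_correlation(image_a, image_b):
        # NCC of two registered volumes: 1.0 means a perfect match.
        a = image_a.ravel() - image_a.mean()
        b = image_b.ravel() - image_b.mean()
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))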
To make Quantitative Radiology a reality in routine radiological practice, computerized automatic anatomy recognition (AAR) becomes essential. Previously, we presented a fuzzy object modeling strategy for AAR. This paper presents several advances in this project including streamlined definition of open-ended anatomic objects, extension to multiple imaging modalities, and demonstration of the same AAR approach on multiple body regions. The AAR approach consists of the following steps: (a) Collecting image data for each population group G and body region B. (b) Delineating in these images the objects in B to be modeled. (c) Building Fuzzy Object Models (FOMs) for B. (d) Recognizing individual objects in a given image of B by using the models. (e) Delineating the recognized objects. (f) Implementing the computationally intensive steps in a graphics processing unit (GPU). Image data are collected for B and G from our existing patient image database. Fuzzy models for the individual objects are built and assembled into a model of B as per a chosen hierarchy of the objects in B. A global recognition strategy is used to determine the pose of the objects within a given image I following the hierarchy. The recognized pose is utilized to delineate the objects, also hierarchically. Based on three body regions tested utilizing both computed tomography (CT) and magnetic resonance (MR) imagery, recognition accuracy for non-sparse objects has been found to be generally sufficient (3 to 11 mm, or 2-3 voxels) to yield delineation false positive (FP) and true positive (TP) values of < 5% and ≥ 90%, respectively. The sparse objects require further work to improve their recognition accuracy.
In radiologic clinical practice, the analysis underlying image examinations is qualitative, descriptive, and to some extent subjective. Quantitative radiology (QR) would be valuable in clinical radiology, and computerized automatic anatomy recognition (AAR) is an essential step toward that goal. AAR is a body-wide organ recognition strategy. The AAR framework is based on fuzzy object models (FOMs), wherein the models for the different objects are encoded in a hierarchy. We investigated ways of optimally designing the hierarchy tree while building the models. The hierarchy among the objects is a core concept of AAR. The parent-offspring relationships have two main purposes in this context: (i) to bring into AAR more understanding and knowledge about the form, geography, and relationships among objects, and (ii) to provide guidance for object recognition and object delineation. In this approach, the relationships among objects are represented by a graph, where the vertices are the objects (organs) and the edges connect all pairs of vertices into a complete graph. Each pair of objects is assigned a weight based on the spatial distance between them, their intensity profile differences, and their correlation in size, all estimated over a population. The optimal hierarchy tree is obtained by the shortest-path algorithm as an optimal spanning tree of this weighted graph. To evaluate the optimal hierarchies, we performed preliminary tests involving the subsequent recognition step. The body region used for the initial investigation was the thorax.
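As a hedged illustration, an optimal spanning tree of the weighted complete object graph can be extracted with a minimum-spanning-tree routine (the exact algorithm used in the paper may differ); the weight matrix below is hypothetical.

    import numpy as np
    from scipy.sparse.csgraph import minimum_spanning_tree

    # Hypothetical symmetric pairwise weights combining spatial distance,
    # intensity-profile difference, and size correlation for 4 objects.
    weights = np.array([[0.0, 1.2, 3.4, 2.0],
                        [1.2, 0.0, 0.8, 2.5],
                        [3.4, 0.8, 0.0, 1.1],
                        [2.0, 2.5, 1.1, 0.0]])

    tree = minimum_spanning_tree(weights)
    print(tree.toarray())  # nonzero entries are parent-offspring edges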
Intracardiac echocardiography (ICE), a technique in which structures of the heart are imaged using a catheter navigated inside the cardiac chambers, is an important imaging technique for guidance in cardiac ablation therapy. Automatic segmentation of these images is valuable for guidance and targeting of treatment sites. In this paper, we describe an approach to segment ICE images by generating an empirical model of blood pool and tissue intensities. Normal, Weibull, Gamma, and Generalized Extreme Value (GEV) distributions are fit to histograms of tissue and blood pool pixels from a series of ICE scans. A total of 40 images from 4 separate studies were evaluated. The model was trained and tested using two approaches. In the first approach, the model was trained on all images from 3 studies and subsequently tested on the 40 images from the 4th study. This procedure was repeated 4 times using a leave-one-out strategy. This is termed the between-subjects approach. In the second approach, the model was trained on 10 randomly selected images from a single study and tested on the remaining 30 images in that study. This is termed the within-subjects approach. For both approaches, the model was used to automatically segment ICE images into blood and tissue regions. Each pixel is classified using the Generalized Likelihood Ratio Test across neighborhood sizes ranging from 1 to 49. Automatic segmentation results were compared against manual segmentations for all images. In the between-subjects approach, the GEV distribution using a neighborhood size of 17 was found to be the most accurate, with a misclassification rate of approximately 17%. In the within-subjects approach, the GEV distribution using a neighborhood size of 19 was found to be the most accurate, with a misclassification rate of approximately 15%. As expected, the majority of misclassified pixels were located near the boundaries between tissue and blood pool regions for both methods.
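A minimal sketch of the modeling and classification idea, with stand-in training data; the neighborhood handling and the test statistic are simplified relative to the study.

    import numpy as np
    from scipy.stats import genextreme

    blood_pixels = np.random.gamma(2.0, 10.0, 5000)   # stand-in samples
    tissue_pixels = np.random.gamma(6.0, 12.0, 5000)  # stand-in samples

    blood_params = genextreme.fit(blood_pixels)
    tissue_params = genextreme.fit(tissue_pixels)

    def classify_neighborhood(values):
        # Compare summed log-likelihoods of a pixel neighborhood under
        # the two fitted GEV models (a likelihood-ratio decision).
        ll_blood = genextreme.logpdf(values, *blood_params).sum()
        ll_tissue = genextreme.logpdf(values, *tissue_params).sum()
        return 'blood' if ll_blood > ll_tissue else 'tissue'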
Probe or needle artifact detection in 3D scans gives an approximate location for the tools inserted, and is thus crucial in assisting many image-guided procedures. Conventional needle localization algorithms often start with cropped images, where unwanted parts of raw scans are cropped either manually or by applying pre-defined masks. In cryoablation, however, the number of probes used, the placement and direction of probe insertion, and the portions of the abdomen scanned differ significantly from case to case, and probes are often being adjusted throughout the Probe Placement Phase. These features greatly reduce the practicality of approaches based on image cropping. In this work, we present a fully Automatic Probe Artifact Detection method, APAD, that works directly on uncropped raw MRI images taken during the Probe Placement Phase in 3-Tesla MRI-guided cryoablation. The key idea of our method is to first locate an initial 2D line strip within a slice of the MR image, which approximates the position and direction of the 3D probe bundle, noting that cryoprobes or biopsy needles create a signal-void (black) artifact in MRI with a bright cylindrical border. With the initial 2D line, standard approaches to detecting line structures, such as the 3D Hough Transform, can be applied to quickly detect each probe's axis. By comparing with manually labeled probes, the analysis of 5 patient treatment cases of kidney cryoablation with varying probe placements demonstrates that our algorithm, combined with standard 3D line detection, is an accurate and robust method to detect probe artifacts.
Minimally invasive interventions often involve tools of curvilinear shape, like catheters and guide-wires. If the camera parameters of a fluoroscopic system or a stereoscopic endoscope are known, a 3-D reconstruction of corresponding points can be computed by triangulation. Manual identification of point correspondences is time consuming, but methods exist that automatically select corresponding points along curvilinear structures. The focus here is on the evaluation of a recently published method for catheter reconstruction from two views. A previous evaluation of this method using clinical data yielded promising results; however, no 3-D ground truth data was available, so the error could only be estimated using the forward-projection of the reconstruction. In this paper, we present a more extensive evaluation of this method based on both clinical and phantom data. For the evaluation using clinical images, 36 data sets and two different catheters were available. The mean error found when reconstructing both catheters was 0.1 mm ± 0.1 mm. To evaluate the error in 3-D, images of a phantom were acquired from 13 different angulations; a 3-D C-arm CT voxel data set of the phantom was also available. A reconstruction error was calculated by comparing the triangulated 3-D reconstruction to the 3-D voxel data set. The evaluation yielded an average error of 1.2 mm ± 1.2 mm for the circumferential mapping catheter and 1.3 mm ± 1.0 mm for the ablation catheter.
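Triangulation of one point correspondence from two calibrated views can be sketched with the standard linear (DLT) method; the projection matrices are assumed known from system calibration.

    import numpy as np

    def triangulate(P1, P2, x1, x2):
        # P1, P2: (3, 4) projection matrices; x1, x2: (2,) pixel
        # coordinates of a correspondence, e.g., sampled along the
        # catheter in both views. Returns the 3-D point.
        A = np.vstack([x1[0] * P1[2] - P1[0],
                       x1[1] * P1[2] - P1[1],
                       x2[0] * P2[2] - P2[0],
                       x2[1] * P2[2] - P2[1]])
        X = np.linalg.svd(A)[2][-1]   # null-space vector of A
        return X[:3] / X[3]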
Congenital heart disease occurs in 107.6 out of 10,000 live births, with Atrial Septal Defects (ASD) accounting for 10% of these conditions. Historically, ASDs were treated with open heart surgery using cardiopulmonary bypass, allowing a patch to be sewn over the defect. In 1976, King et al. demonstrated use of a transcatheter occlusion procedure, thus reducing the invasiveness of ASD repair. Localization during these catheter-based procedures has traditionally relied on bi-plane fluoroscopy; more recently, trans-esophageal echocardiography (TEE) and intra-cardiac echocardiography (ICE) have been used to navigate these procedures. Although there is a high success rate using the transcatheter occlusion procedure, fluoroscopy poses a radiation dose risk to both patient and clinician. The impact of this dose on patients is important, as many of those undergoing this procedure are children, who have an increased risk associated with radiation exposure. Their longer life expectancy than adults provides a larger window of opportunity for expressing the damaging effects of ionizing radiation. In addition, epidemiologic studies of exposed populations have demonstrated that children are considerably more sensitive to the carcinogenic effects of radiation. Image-guided surgery (IGS) uses pre-operative and intra-operative images to guide surgery or an interventional procedure. Central to every IGS system is a software application capable of processing and displaying patient images, registering between multiple coordinate systems, and interfacing with a tool tracking system. We have developed a novel image-guided surgery framework called Kit for Navigation by Image Focused Exploration (KNIFE). In this work we assess the efficacy of this image-guided navigation system for ASD repair using a series of mock clinical experiments designed to simulate ASD repair device deployment.
Early detection of prostate cancer is critical in maximizing the probability of successful treatment. The current systematic biopsy approach takes 12 or more randomly distributed core tissue samples within the prostate and, especially with early disease, has a high potential for a false-negative diagnosis. The purpose of this study is to determine the accuracy of a 3D ultrasound-guided biopsy system. Testing was conducted on prostate phantoms created from an agar mixture with embedded markers. The phantoms were scanned and the 3D ultrasound system was used to direct the biopsy. Each phantom was analyzed with a CT scan to obtain needle deflection measurements. The deflection experienced throughout the biopsy process was dependent on the depth of the biopsy target. The results for markers at a depth of less than 20 mm, 20-30 mm, and greater than 30 mm were 3.3 mm, 4.7 mm, and 6.2 mm, respectively. This measurement encapsulates the entire biopsy process, from the scanning of the phantom to the firing of the biopsy needle. Increased depth of the biopsy target caused a greater deflection from the intended path in most cases, due to an angular incidence of the biopsy needle. Although some deflection was present, this system exhibits a clear advantage in the targeted biopsy of prostate cancer and has the potential to reduce the number of false-negative biopsies for large lesions.
With the introduction of 3D US imaging devices, the demand for accurate and fast 3D calibration methods arose. We implemented three different calibration methods and compared the calibration results in terms of fiducial registration error (FRE) and target registration error (TRE). The three calibration methods were based on a multi-point phantom (MP), a feature-based model (FM) and a membrane model (MM). For the MP method, a simple point-to-point registration was applied. For the feature-based model we employed a phantom consisting of spheres, pyramids and cones; these objects were imaged from different angles and a 3D-3D registration was applied for all possible image combinations. The last method was accomplished by imaging a simple membrane, which allows calculation of the calibration matrix. For a first evaluation we computed the FRE for each method. To assess calibration success on real patient data we used ten 3D-3D registrations between images of the prostate. The FRE amounted to 1.40 mm for the MP method, 1.05 mm for the FM and 1.12 mm for the MM. The deviations arising from the ten 3D-3D patient registrations were 3.44 mm (MP), 2.93 mm (FM) and 2.84 mm (MM). The MM proved to be the most accurate of the evaluated procedures, while the MP showed significantly higher errors. The results for the FM were close to those of the MM and also significantly better than those of the MP; between FM and MM no significant difference was detected.
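For reference, point-to-point rigid registration and its FRE can be computed as in this sketch (a generic Kabsch/SVD solution, not necessarily the implementation evaluated here).

    import numpy as np

    def rigid_register(src, dst):
        # Least-squares rotation R and translation t with dst ~ R src + t,
        # plus the RMS fiducial registration error after alignment.
        ca, cb = src.mean(axis=0), dst.mean(axis=0)
        H = (src - ca).T @ (dst - cb)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = cb - R @ ca
        residual = dst - (src @ R.T + t)
        fre = np.sqrt((residual ** 2).sum(axis=1).mean())
        return R, t, fre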
This work extends the multi-histogram volume rendering framework proposed by Kniss et al. [1] to provide rendering results based on the impression of overlaid triangles on a graph of image intensity versus gradient magnitude. The developed method of volume rendering allows greater emphasis on boundary visualization while avoiding issues common in medical image acquisition. For example, partial voluming effects in computed tomography and intensity inhomogeneity of similar tissue types in magnetic resonance imaging introduce pixel values that will not reflect differing tissue types when a standard transfer function is applied to an intensity histogram. This new framework improves upon the Kniss multi-histogram framework by using Java, the GPU, and MIPAV, an open-source medical image processing application, to allow multi-histogram techniques to be widely disseminated; the original OpenGL view-aligned texture rendering approach suffered from performance setbacks, inaccessibility, and usability problems. Rendering results can now be interactively compared with other rendering frameworks, surfaces can be extracted for use in other programs, and file formats that are widely used in the field of biomedical imaging can be visualized using this multi-histogram approach. OpenCL and GLSL are used to produce this new multi-histogram approach, leveraging texture memory on the graphics processing unit of desktops to provide a new interactive method for visualizing biomedical images. Performance results for this method are generated and qualitative rendering results are compared. The resulting framework provides the opportunity for further applications in medical imaging, both in volume rendering and in generic image processing.
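The intensity-versus-gradient-magnitude domain on which the triangle widgets operate can be built as a joint histogram; a minimal sketch follows.

    import numpy as np

    def intensity_gradient_histogram(volume, bins=256):
        # Joint histogram over which 2-D transfer-function widgets
        # (e.g., overlaid triangles) select boundary voxels.
        gx, gy, gz = np.gradient(volume.astype(float))
        grad_mag = np.sqrt(gx ** 2 + gy ** 2 + gz ** 2)
        return np.histogram2d(volume.ravel(), grad_mag.ravel(), bins=bins)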
We developed a novel, powerful segmentation algorithm and an intuitive 3-D visualization tool for the examination of root fractures with minimum user intervention. The application computes and displays a suitable oblique orientation on a selected tooth by placing at least two splines (inside and outside of the tooth) in just one slice of the volume. Next, it allows the user to scroll through the volume, slice by slice in parallel to the plane, or to examine the tooth by changing the orientation of a 3-D object plane (called a virtual bitewing), which is placed, at the same time, in a volume rendition. Both the root canal and the root fracture are highlighted during the examination phase. Doctors (end users) remain in control and can quickly and confidently examine root fractures in 3-D, at any given oblique orientation, without worrying about missing a selected tooth. We designed and implemented these algorithms using the image foresting transform (IFT) technique for interactive tooth segmentation and used a multi-scale parameter search for automatic oblique orientation estimation.
This PDF file contains the front matter associated with SPIE Proceedings Volume 8671, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and Conference Committee listing.