Intestinal anastomosis is a surgical procedure that restores bowel continuity after resection to treat intestinal malignancy, inflammation, or obstruction. Despite the routine nature of the procedure, the complication rate is high. Standard visual inspection cannot reveal subsurface tissue structures or small changes in the spectral characteristics of tissue, so existing anastomosis techniques that rely on human vision to guide suturing can lead to problems such as bleeding and leakage at suture sites. We present a proof-of-concept study of a portable multispectral imaging (MSI) platform for tissue characterization and preoperative surgical planning in intestinal anastomosis. The platform is composed of a fiber ring light-guided MSI system coupled with polarizers and image analysis software. The system is tested on ex vivo porcine intestine, and we demonstrate the feasibility of identifying optimal regions for suture placement.
The observation and 3D quantification of arbitrary scenes using optical imaging systems is challenging, but increasingly necessary in many fields. This paper provides a technical basis for the use of plenoptic cameras in medical and medical robotics applications, and rigorously evaluates camera integration and performance in the clinical setting. It discusses plenoptic camera calibration and setup, and assesses plenoptic imaging in a clinically relevant context and against other quantitative imaging technologies. We report the camera calibration methods, precision and accuracy results in both ideal and simulated surgical settings, and performance during a surgical task. Test results showed the average precision of the plenoptic camera to be 0.90 mm, increasing to 1.37 mm for tissue across the calibrated FOV. The ideal accuracy was 1.14 mm. The camera showed submillimeter error during a simulated surgical task.
A novel imaging system that recommends potential suture placements for anastomosis to surgeons is developed. This is achieved with a multispectral imaging system coupled with polarizers and image analysis software. We performed preliminary imaging of ex vivo porcine intestine to evaluate the system. Vulnerable tissue regions, including blood vessels, were successfully identified and segmented, and the thickness of different tissue areas was visualized. Strategies for selecting optimal suture placement points are discussed. Preliminary data suggest that our imaging platform and analysis algorithm may help avoid blood vessels and identify optimal regions for suture placement, enabling safer operations in potentially less time.
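The paper's segmentation pipeline is not reproduced here, but one simple way a multispectral analysis can flag vessel-like regions is band-ratio thresholding: hemoglobin absorbs strongly in certain visible bands, so vessels appear dark there relative to a reference band. The sketch below is illustrative only; the band indices, threshold, and function names are assumptions, not the system's actual algorithm.

```python
import numpy as np

def vessel_mask(cube, absorb_band, reference_band, ratio_thresh=0.8):
    """Flag pixels whose reflectance in a hemoglobin absorption band is
    low relative to a reference band -- a crude vessel indicator.

    cube: (H, W, B) multispectral reflectance stack
    absorb_band, reference_band: band indices (hypothetical choices)
    """
    absorb = cube[:, :, absorb_band].astype(float)
    ref = cube[:, :, reference_band].astype(float) + 1e-9  # avoid divide-by-zero
    ratio = absorb / ref          # vessels absorb strongly -> low ratio
    return ratio < ratio_thresh   # boolean mask of candidate vessel pixels
```

A suture-planning step could then score candidate suture sites by their distance from the resulting mask.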
Accurate optical characterization of different tissue types is an important tool for potentially guiding surgeons and enabling automated robotic surgery. Multispectral imaging and analysis have been used in the literature to detect spectral variations in tissue reflectance that may not be visible to the naked eye. Using this technique, hidden structures can be visualized and analyzed for effective tissue classification. Here, we investigated the feasibility of automated tissue classification using multispectral tissue analysis. Broadband reflectance spectra (200-1050 nm) were collected from nine different ex vivo porcine tissue types using an optical fiber-probe based spectrometer system. We created a mathematical model to train on and distinguish different tissue types based on analysis of the observed spectra using total principal component regression (TPCR). Compared to other reported methods, our technique is computationally inexpensive and suitable for real-time implementation. Each of the 92 spectra was cross-referenced against the nine tissue types. Preliminary results show a mean detection rate of 91.3%, with detection rates of 100% and 70.0% (inner and outer kidney), 100% and 100% (inner and outer liver), 100% (outer stomach), and 90.9%, 100%, 70.0%, and 85.7% (four different inner stomach areas, respectively). We conclude that automated tissue differentiation using our multispectral tissue analysis method is feasible in multiple ex vivo tissue specimens. Although measurements were performed using ex vivo tissues, these results suggest that real-time, in vivo tissue identification during surgery may be feasible.
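The paper's TPCR model is not reproduced here, but the general pattern it follows, projecting each reflectance spectrum onto a low-dimensional principal subspace and classifying against per-class references, can be sketched as below. A nearest-centroid rule in PCA space stands in for the regression step; all names and the component count are illustrative assumptions.

```python
import numpy as np

def fit_pca(spectra, n_components=5):
    """PCA via SVD of mean-centered training spectra (rows = samples)."""
    mean = spectra.mean(axis=0)
    _, _, vt = np.linalg.svd(spectra - mean, full_matrices=False)
    return mean, vt[:n_components]        # basis rows span the subspace

def classify(spectrum, mean, basis, centroids):
    """Project one spectrum and return the index of the nearest class centroid."""
    score = (spectrum - mean) @ basis.T
    dists = [np.linalg.norm(score - c) for c in centroids]
    return int(np.argmin(dists))
```

In use, one centroid per tissue class would be computed from the projected training spectra, and leave-one-out cross-validation over the 92 spectra would yield per-class detection rates.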
Automating surgery using robots requires robust visual tracking. The surgical environment often has poor lighting conditions, and several organs have similar visual appearances. In addition, the field of view may be occluded by blood or tissue. In this paper, the feasibility of near-infrared (NIR) fluorescent marking and imaging for vision-based robot control is studied. The NIR region of the spectrum has several useful properties, including deep tissue penetration. We study the optical properties of a clinically approved NIR fluorescent dye, indocyanine green (ICG), at different concentrations, and quantify the image positioning error of an ICG marker when obstructed by tissue.
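One common way to quantify the positioning error of a fluorescent marker, of the kind measured in this study, is to threshold the NIR image, take the intensity-weighted centroid of the marker blob, and compare it to the unoccluded reference. The sketch below is a generic illustration; the threshold value and function names are assumptions, not the paper's protocol.

```python
import numpy as np

def marker_centroid(nir, thresh):
    """Intensity-weighted centroid (row, col) of pixels above thresh."""
    mask = nir > thresh
    w = nir * mask
    total = w.sum()
    rows, cols = np.indices(nir.shape)
    return np.array([(rows * w).sum() / total,
                     (cols * w).sum() / total])

def positioning_error(nir_occluded, nir_reference, thresh=0.5):
    """Euclidean distance (pixels) between occluded and reference centroids."""
    return np.linalg.norm(marker_centroid(nir_occluded, thresh)
                          - marker_centroid(nir_reference, thresh))
```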
The design of objects used in vision-guided robotic systems crucially affects overall system performance. In this paper, we target the problem of optimal feature point design for a given camera motion profile in robotic eye-in-hand systems. Given the intrinsic camera calibration parameters, the motion profile, and the image Jacobian matrix, a new directional relative-motion resolvability measure is introduced. For each known point on the camera trajectory with known camera-to-object relative pose, the proposed measure is evaluated as a separate objective, resulting in a multi-objective problem. A bounded multi-objective optimization approach is successfully used to solve the underconstrained feature design problem. Simulation results show that camera motion is better resolved for the optimally designed object.
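The paper's exact resolvability measure is not reproduced here. As a rough illustration of the idea, the classical point-feature image Jacobian (interaction matrix) maps a camera velocity to image motion, and the magnitude of image motion induced along a given motion direction indicates how well that direction can be resolved from the features. The sketch below uses the standard interaction matrix for a normalized image point; the scalar measure shown is an illustrative assumption, not the paper's definition.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Standard 2x6 image Jacobian for a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1/Z,    0, x/Z,      x*y, -(1 + x**2),  y],
        [   0, -1/Z, y/Z, 1 + y**2,       -x*y,  -x],
    ])

def directional_resolvability(points_depths, v):
    """Image-motion magnitude induced by unit camera velocity v (6-vector),
    stacked over all feature points -- larger means better-resolved motion."""
    v = np.asarray(v, float)
    v = v / np.linalg.norm(v)
    L = np.vstack([interaction_matrix(x, y, Z) for x, y, Z in points_depths])
    return float(np.linalg.norm(L @ v))
```

For example, a single point at the optical axis resolves lateral translation well but leaves translation along the optical axis unobservable, which such a measure makes explicit.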
In this paper, we focus on robust feature selection and investigate the application of the scale-invariant feature transform (SIFT) in robotic visual servoing (RVS). We consider a camera mounted on the endpoint of an anthropomorphic manipulator (eye-in-hand configuration).
The objective of such an RVS system is to control the pose of the camera so that a desired relative pose between the camera and the object of interest is maintained. Because SIFT feature point correspondences are not always unique, feature points with more than one match are disregarded. When the endpoint moves along a trajectory, the robust SIFT feature points are found; for a similar trajectory, the same selected feature points are then used to keep track of the object in the current view. The point correspondences of the remaining robust feature points provide the epipolar geometry of the two scenes, from which, given the camera calibration, the motion of the camera is retrieved. The robot joint angle vector is then determined by solving the inverse kinematics of the manipulator. We show how to select a set of robust features appropriate for the task of visual servoing. Robust SIFT feature points are scale- and rotation-invariant, and remain effective when the current endpoint position is far from, and rotated with respect to, the desired position.
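The rejection of non-unique SIFT correspondences described above is commonly implemented with a nearest/second-nearest distance ratio test on the descriptors. A minimal sketch over raw descriptor arrays is given below; the 0.8 ratio is the conventional choice from the SIFT literature, not necessarily the value used in this paper.

```python
import numpy as np

def unique_matches(desc_a, desc_b, ratio=0.8):
    """Return (i, j) index pairs where descriptor i in desc_a has a clearly
    unique nearest neighbour j in desc_b (Lowe-style ratio test)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)  # distance to every candidate
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:     # ambiguous matches rejected
            matches.append((i, int(best)))
    return matches
```

The surviving correspondences are the ones that would feed the epipolar-geometry estimation and camera motion recovery described above.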
In this paper, some applications of a local version of singular value decomposition (SVD) to texture classification and texture segmentation are studied. We introduce two measures, obtained from the SVD, that capture some of the perceptual and conceptual features of an image. One of the measures classifies textures by their roughness and structure. Experimental results show that these measures are suitable for texture clustering and image segmentation, and that they are robust to changes in lighting.
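A local SVD texture measure of the kind described can be sketched as follows: take the SVD of each image patch and summarize its singular-value distribution, for instance by the energy fraction in the leading singular value, which is high for strongly structured patches and lower for rough ones. This specific measure is an illustrative stand-in for the paper's two measures, and the window size is an assumption.

```python
import numpy as np

def svd_energy_measure(patch):
    """Fraction of patch energy captured by the largest singular value.
    Near 1 for strongly structured patches, lower for rough/noisy ones."""
    s = np.linalg.svd(patch.astype(float), compute_uv=False)
    return float(s[0]**2 / (s**2).sum())

def local_svd_map(image, win=8):
    """Evaluate the measure on non-overlapping win x win patches."""
    h, w = image.shape[0] // win, image.shape[1] // win
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = svd_energy_measure(
                image[i*win:(i+1)*win, j*win:(j+1)*win])
    return out
```

Note that multiplying a patch by a constant scales all singular values equally and leaves this measure unchanged, which is consistent with the robustness to lighting changes claimed above.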
This paper presents an information fusion approach for the automatic detection of mid-brain nuclei (caudate, putamen, globus pallidus, and thalamus) from MRI. The method fuses anatomical information, obtained from brain atlases and expert physicians, with numerical MRI information within a fuzzy framework that models the intrinsic uncertainty of the problem. The first step of the method is segmentation of the brain tissues (gray matter, white matter, and cerebrospinal fluid). Physical landmarks, such as the inter-hemispheric plane, together with numerical information from the segmentation step, are then used to describe the nuclei. Each nucleus is given a unique description in terms of physical and anatomical landmarks, most of which are previously detected nuclei; in addition, a nucleus detected in slice n serves as a key landmark for detecting the same nucleus in slice n+1. These steps construct fuzzy decision maps, and the overall decision is made by fusing all of the individual decisions with a fusion operator. The approach has been implemented to detect the caudate, putamen, and thalamus from a sequence of axial T1-weighted brain MRIs. Our experience shows that the final nuclei detection results depend strongly on the primary tissue segmentation. The method is validated by comparing the resulting nuclei volumes with those obtained from manual segmentation performed by expert physicians.
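The final fusion step described above, combining several fuzzy decision maps with a fusion operator and then taking an overall decision, can be illustrated with standard fuzzy operators such as the minimum (conjunctive fusion), product, or mean. The sketch below is generic; the operator choice and decision level are assumptions, not the paper's specific operator.

```python
import numpy as np

def fuse_decision_maps(maps, operator="min"):
    """Fuse a list of fuzzy membership maps (values in [0, 1]) pixelwise."""
    stack = np.stack(maps)
    if operator == "min":        # conjunctive: all information sources must agree
        return stack.min(axis=0)
    if operator == "product":    # reinforcing conjunction
        return stack.prod(axis=0)
    if operator == "mean":       # compromise fusion
        return stack.mean(axis=0)
    raise ValueError(f"unknown operator: {operator}")

def detect(maps, operator="min", level=0.5):
    """Crisp nucleus mask: fused membership at or above a decision level."""
    return fuse_decision_maps(maps, operator) >= level
```

Here each input map would come from one information source (tissue segmentation, a landmark-based description, or the nucleus detected in the previous slice).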
The k-means algorithm is widely used to design image codecs based on vector quantization (VQ). In this paper, we focus on an adaptive approach that implements VQ using the online version of the k-means algorithm, in which the size of the codebook is continuously adapted to the statistical behavior of the image. Based on a statistical analysis of the feature space, a set of thresholds is designed such that codewords corresponding to low-density clusters are removed from the codebook, resulting in higher bit-rate efficiency. One application of this approach is telemedicine, where sequences of highly correlated medical images, e.g. consecutive brain slices, are transmitted over a low-bit-rate channel. We have applied this algorithm to magnetic resonance (MR) images, and simulation results for a sample sequence are given. The proposed method is compared to the standard k-means algorithm in terms of PSNR, MSE, and elapsed time.
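An online k-means codebook update with low-density pruning of the kind described can be sketched as follows. Each training vector moves its nearest codeword by a running-mean step, and codewords whose clusters attract too few vectors are dropped. The count-based pruning threshold stands in for the paper's statistically designed thresholds and is an illustrative assumption.

```python
import numpy as np

def online_kmeans_vq(vectors, codebook, min_count=2):
    """One pass of online k-means over training vectors, then prune
    codewords whose clusters attracted fewer than min_count vectors."""
    codebook = codebook.astype(float).copy()
    counts = np.zeros(len(codebook), dtype=int)
    for v in vectors:
        j = int(np.argmin(np.linalg.norm(codebook - v, axis=1)))
        counts[j] += 1
        codebook[j] += (v - codebook[j]) / counts[j]   # running-mean update
    keep = counts >= min_count          # drop low-density codewords
    return codebook[keep], counts[keep]
```

Pruning shrinks the codebook between correlated slices, which is what yields the bit-rate savings mentioned above.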