Conventional endoscopic images do not provide quantitative 3-D information. We present an endoscope system that can measure the size and position of an object in real time. Our endoscope contains four laser beam sources and a camera. The procedure for 3-D measurement is as follows. First, to obtain the function that maps the 2-D coordinates of an image point to its coordinates in 3-D space, we observe a standard chart with the endoscope lens and determine the correspondence between image height and object height. In addition to this mapping, the function corrects the barrel distortion of endoscopic images. The system detects laser spots on the object surface automatically using template matching, and converts the 2-D coordinates of the laser spots to 3-D coordinates by triangulation. The system then calculates the magnification ratio on the object plane, which is perpendicular to the optical axis and passes through the laser spot, so that it can superimpose a ruler whose scale fits the 3-D coordinates of the object. Thus, physicians can measure the size and position of objects in real time on undistorted images, much as if they had placed a ruler on the surface of an organ.
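The triangulation step above can be sketched as follows. This is only a minimal illustration, assuming each laser beam is modeled as a known 3-D line from calibration and the detected spot defines a viewing ray through the camera center; the 3-D spot position is recovered as the midpoint of the closest approach between the two lines (the abstract does not specify the exact calibration or intersection method):

```python
import numpy as np

def triangulate(cam_origin, cam_dir, laser_origin, laser_dir):
    """Midpoint of closest approach between a camera viewing ray
    and a laser beam line, both given as origin + direction."""
    cd = cam_dir / np.linalg.norm(cam_dir)
    ld = laser_dir / np.linalg.norm(laser_dir)
    w0 = cam_origin - laser_origin
    # Minimize |(cam_origin + s*cd) - (laser_origin + t*ld)| over s, t
    a, b, c = cd @ cd, cd @ ld, ld @ ld
    d, e = cd @ w0, ld @ w0
    denom = a * c - b * b          # zero only if the lines are parallel
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p_cam = cam_origin + s * cd
    p_laser = laser_origin + t * ld
    return (p_cam + p_laser) / 2.0
```

For rays that actually intersect (the ideal, noise-free case), the midpoint coincides with the intersection point; with detection noise it gives a least-squares compromise between the two lines.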
Because of the very complex structure of the nasal area, which is covered by facial bones, tracking surgical instruments on the preoperative CT image is very important for improved image guidance and for preventing surgical accidents in paranasal sinus surgery. In this contribution, we present our recently developed efficient and compact navigation system for paranasal sinus surgery and its first clinical trial.
In our system, we use an optical 3-D range imaging device intraoperatively to achieve registration and instrument tracking. Before the intervention, a range image of the patient's face is acquired by a 3-D range scanner and registered to the corresponding surface extracted from the preoperative CT images. The surgical instrument, fitted with spherical markers that can also be measured by the range scanning device, is tracked during the procedure. The main advantages of our system are that it is (a) markerless on the patient's body, (b) easy to register semiautomatically, and (c) frameless during surgery; thus, it is feasible to update the registration and restart tracking when the patient moves. In this paper, we summarize the techniques used in our approach, including the benefits and limitations of the system, experimental results using a precise model based on a human paranasal structure, and a first clinical trial in the operating room.
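The abstract does not name the algorithm used to register the range image to the CT-derived surface; a common choice for such surface alignment is ICP, whose core step, a least-squares rigid transform between corresponded point sets (the Kabsch/SVD method), might be sketched as:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst,
    computed with the Kabsch/SVD method. src, dst: (N, 3) arrays of
    corresponding points."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    # Cross-covariance of the centered point sets
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    # Sign correction to avoid a reflection
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

In a full ICP loop this step would alternate with nearest-neighbor correspondence search against the CT surface until the alignment converges; the semiautomatic interaction described in the abstract would supply the initial guess.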
We propose setting a 3-D search volume for efficiently tracking 3-D palm motion using two cameras. If template matching is performed on the right and left images independently, the two matched points do not always correspond to each other, so the correct 3-D position cannot always be tracked. Instead of finding the corresponding point in each image separately, we set the search volume in 3-D space, not in the 2-D image planes, so that only valid 2-D pairs are considered in the search process. The tracking process is as follows. First, we set the search volume. Each 3-D candidate position in the search volume is projected onto the two image planes, and template matching is performed at the projected pixel in each image. The similarity of a 3-D position is computed from the two dissimilarities in the two images. We search for the position with the maximum similarity in the search volume and thereby obtain the correct correspondence. We incorporate this technique into our tracking system and compare the proposed method with a method that tracks palm motion without the epipolar constraint. Our experimental results show that the proposed 3-D search volume makes tracking of the 3-D motion both accurate and efficient.
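The search described above can be sketched as follows. This is a minimal illustration assuming calibrated 3x4 projection matrices and SSD as the dissimilarity measure; the abstract does not specify the matching score, camera model, or how the candidate set is sampled:

```python
import numpy as np

def project(P, X):
    """Project 3-D point X (shape (3,)) through 3x4 camera matrix P
    to pixel coordinates (x, y)."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def ssd(img, center, tmpl):
    """Sum of squared differences between template and the image patch
    centered at pixel center = (x, y)."""
    h, w = tmpl.shape
    r = int(round(center[1])) - h // 2
    c = int(round(center[0])) - w // 2
    patch = img[r:r + h, c:c + w]
    return float(((patch - tmpl) ** 2).sum())

def search_volume(candidates, P_l, P_r, img_l, img_r, tmpl_l, tmpl_r):
    """Return the 3-D candidate whose projections into BOTH images best
    match the templates, so only consistent 2-D pairs are ever scored."""
    best, best_cost = None, np.inf
    for X in candidates:
        cost = (ssd(img_l, project(P_l, X), tmpl_l)
                + ssd(img_r, project(P_r, X), tmpl_r))
        if cost < best_cost:
            best, best_cost = X, cost
    return best
```

Because each candidate is a single 3-D point, the two projected pixels automatically satisfy the epipolar constraint, which is the point of searching in 3-D rather than matching the two images independently.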
The concept of the boundary value problem is introduced into image processing, and an image modification technique is presented that works under the condition that the transformation function for a set of pixels is given a priori. For example, if adjoining pictures are taken separately under different illumination and then put together into a single picture, processing the whole picture uniformly can result in artifacts along the seams, across which image features change abruptly. To resolve this problem, the image features of the border pixels should be transformed to be continuous with the neighboring pictures. Thus, the transformation function for such pixels should be set a priori to meet this condition, and for the remaining pixels it should be adjusted accordingly. The same technique can be applied to a single picture. For example, if a picture is taken under nonuniform illumination, which causes some regions of the picture to be dark and others light, the transformation function for those regions should be given as boundary conditions. The function at any other pixel is then interpolated from the boundary values. An interactive technique is discussed for specifying the boundary conditions and determining the image transformation function.
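One way to realize the interpolation described above is to treat the per-pixel correction as a harmonic function: pixels with given boundary values are held fixed (Dirichlet conditions), and every other pixel is relaxed toward the average of its four neighbors (the discrete Laplace equation, here via Jacobi iteration). The abstract does not prescribe a specific interpolation scheme, so this is only an illustrative sketch:

```python
import numpy as np

def interpolate_correction(boundary_mask, boundary_vals, iters=500):
    """Harmonically interpolate a per-pixel correction from boundary values.

    boundary_mask : bool array, True where the correction is given a priori
    boundary_vals : float array, the given values at those pixels
    Returns a field that equals boundary_vals on the mask and smoothly
    interpolates (discrete Laplace equation) everywhere else.
    """
    f = np.where(boundary_mask, boundary_vals, 0.0).astype(float)
    for _ in range(iters):
        # Average of the four neighbors (np.roll wraps at the image edge,
        # harmless when the boundary mask covers the border pixels)
        avg = 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0)
                      + np.roll(f, 1, 1) + np.roll(f, -1, 1))
        # Re-impose the Dirichlet conditions after each sweep
        f = np.where(boundary_mask, boundary_vals, avg)
    return f
```

With, say, the seam pixels of two adjoining pictures fixed to the brightness offsets that make the seam continuous, the interior of each picture receives a smoothly varying correction rather than a uniform one.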