Quantitative analysis of three-dimensional (3D) blood flow direction and location will benefit and guide the surgical
thinning and dissection process. Toward this goal, this study was performed to reconstruct 3D vascular trees with the
incorporation of temporal information from contrast-agent propagation. A computational technique based on our
previous work was proposed to segment the 3D vascular tree structure from CT scan volume image sets. This
technique utilizes the deformation method, a moving-grid methodology traditionally used to improve
computational accuracy and efficiency in solving differential equations. Compared with our previous work, we
extended the moving-grid deformation method to 3D and incorporated a 3D region growing method for initial
segmentation. Finally, a 3D divergence operator was applied to delineate vascular tree structures from the 3D grid
volume plot. Experimental results show the 3D nature of the vascular structure and the four-dimensional (4D) vascular tree
evolving process. The proposed computational framework demonstrates its effectiveness and improvement in the
modeling of the 3D vascular tree.
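The initial-segmentation step named above, 3D region growing, could be sketched as a 6-connected flood fill that absorbs voxels whose intensity is close to the seed's. This is a minimal illustration, not the paper's implementation; the intensity-difference criterion and threshold are assumptions.

```python
import numpy as np
from collections import deque

def region_grow_3d(volume, seed, threshold):
    """Simple 3D region growing by 6-connected flood fill.

    Voxels whose intensity lies within `threshold` of the seed
    intensity are absorbed into the region (assumed criterion).
    """
    mask = np.zeros(volume.shape, dtype=bool)
    seed_val = volume[seed]
    queue = deque([seed])
    mask[seed] = True
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2]
                    and not mask[nz, ny, nx]
                    and abs(volume[nz, ny, nx] - seed_val) <= threshold):
                mask[nz, ny, nx] = True
                queue.append((nz, ny, nx))
    return mask

# Toy volume: a bright 3-voxel "vessel" inside a dark background.
vol = np.zeros((5, 5, 5))
vol[2, 2, 1:4] = 100.0
region = region_grow_3d(vol, (2, 2, 2), threshold=10.0)
```

In a real CT volume the seed would be placed in a contrast-enhanced vessel, and the grown mask would serve as the starting point for the deformation method.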
Perforator flaps have been increasingly used in the past few years
for trauma and reconstructive surgical cases. With thinned perforator flaps, greater survivability and decreased donor-site
morbidity have been reported. Knowledge of the 3D vascular tree will provide insight into the dissection region, vascular territory, and fascia levels. This paper presents a scheme for shape-based 3D vascular tree reconstruction of perforator flaps for plastic surgery planning, which overcomes the deficiencies of existing shape-based interpolation methods by applying rotation and 3D repairing. The scheme can restore the broken parts of the perforator vascular tree by using a probability-based adaptive connection point search (PACPS) algorithm with minimum human intervention. Experimental results, evaluated on both synthetic data and 39 harvested cadaver perforator flaps, show the promise and potential of the proposed scheme for plastic surgery planning.
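Classical shape-based interpolation, the family of methods the scheme above builds on, reconstructs a missing slice by blending signed distance fields of neighboring binary cross-sections and thresholding at zero. The sketch below illustrates only that baseline idea on toy data (the brute-force distance computation and the blend weight are assumptions; it does not include the paper's rotation, 3D repairing, or PACPS steps).

```python
import numpy as np

def signed_distance(mask):
    """Brute-force signed distance field of a small 2D binary mask
    (negative inside the shape, positive outside). Toy grids only."""
    pts_in = np.argwhere(mask)
    pts_out = np.argwhere(~mask)
    h, w = mask.shape
    sdf = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            if mask[y, x]:
                sdf[y, x] = -np.min(np.hypot(*(pts_out - (y, x)).T))
            else:
                sdf[y, x] = np.min(np.hypot(*(pts_in - (y, x)).T))
    return sdf

def interpolate_slice(mask_a, mask_b, t=0.5):
    """Shape-based interpolation: blend distance fields, threshold at 0."""
    sdf = (1 - t) * signed_distance(mask_a) + t * signed_distance(mask_b)
    return sdf < 0

# Two circular vessel cross-sections of different radii; the
# interpolated slice should have an intermediate size.
yy, xx = np.mgrid[:21, :21]
a = (yy - 10) ** 2 + (xx - 10) ** 2 <= 3 ** 2
b = (yy - 10) ** 2 + (xx - 10) ** 2 <= 7 ** 2
mid = interpolate_slice(a, b)
```

Blending in distance-field space, rather than blending the binary masks directly, is what lets the interpolated cross-section change size and shape smoothly between slices.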
In most object tracking scenarios, the emphasis has been put on achieving robust motion estimation under different situations. In this paper, besides introducing a multiple-frame-based 3D motion
estimation to handle the partial occlusion problem, we present an object shape delineation scheme based on dual region growing. As is well known, the problem of shape extraction for a moving object undergoing self-occlusion is highly ill-posed. Approaches have been proposed that assume the similarity of object pixels in the vicinities of the boundaries between the current frame and the previous one. Such an assumption usually breaks down when occlusion occurs; instead, our implementation is based on a stronger assumption. The system consists of (i) multi-frame motion estimation, (ii) dual region growing, and (iii) boundary point arbitration. This method is shown to recover favorable object shape during the tracking process.
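The dual region growing and arbitration steps named above could be sketched as two competing flood fills, one seeded inside the object and one in the background, with contested pixels arbitrated by intensity similarity to each seed. This is an illustrative 2D sketch under assumed thresholds and an assumed arbitration rule, not the paper's actual scheme.

```python
import numpy as np
from collections import deque

def grow(img, seed, tol):
    """4-connected flood fill of pixels within `tol` of the seed value."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    q = deque([seed])
    ref = img[seed]
    while q:
        y, x = q.popleft()
        for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx] \
                    and abs(img[ny, nx] - ref) <= tol:
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask

def dual_region_grow(img, obj_seed, bg_seed, tol):
    obj = grow(img, obj_seed, tol)       # region grown from the object
    bg = grow(img, bg_seed, tol)         # region grown from the background
    contested = obj & bg                 # pixels claimed by both regions
    # Arbitration (assumed rule): assign contested pixels to whichever
    # seed intensity they are closer to.
    closer_obj = np.abs(img - img[obj_seed]) <= np.abs(img - img[bg_seed])
    return (obj & ~contested) | (contested & closer_obj)

# Bright 3x3 square object on a dark background.
img = np.zeros((8, 8))
img[2:5, 2:5] = 200.0
shape = dual_region_grow(img, obj_seed=(3, 3), bg_seed=(0, 0), tol=50.0)
```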
The validity of feature correspondences plays an important role in feature-correspondence-based motion estimation, which leads to the final goal of object tracking. Though different data association methods have been proposed, the problem of feature correspondence is, in general, ill-posed due to either the presence of multiple candidates within search regions or the absence of candidates because of occlusion or other factors. Our research is inspired by how we evaluate the effectiveness of the feature correspondence and how the evaluation affects motion estimation. The evaluation of template-based feature correspondence is achieved by considering the feedback of the latest motion estimation from the first pass of Kalman filtering. Motion estimation and feature correspondence are then re-processed based on the evaluation result, which constitutes the second and third passes of Kalman filtering. What also distinguishes our work from others is that, instead of restricting semantic object tracking to the 2D domain, our framework is formulated to recover the 3D depth values of selected features during the motion estimation process.
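The feedback loop described above, using the filter's own prediction to judge a correspondence and re-process when it fails, can be illustrated with a minimal 1D constant-velocity Kalman filter that gates candidates by their innovation. All matrices, the gate value, and the rejection rule here are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (pos, vel)
H = np.array([[1.0, 0.0]])               # we observe position only
Q = np.eye(2) * 0.01                     # process noise (assumed)
R = np.array([[0.1]])                    # measurement noise (assumed)
GATE = 9.0                               # ~3-sigma chi-square gate, 1 dof

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def gated_update(x, P, candidates):
    """Update with the first correspondence candidate inside the gate;
    candidates whose innovation exceeds the gate are rejected."""
    x_pred, P_pred = predict(x, P)
    S = H @ P_pred @ H.T + R             # innovation covariance
    for z in candidates:
        nu = z - (H @ x_pred)[0]         # innovation (feedback signal)
        if nu * nu / S[0, 0] <= GATE:    # Mahalanobis test
            K = P_pred @ H.T @ np.linalg.inv(S)
            x_new = x_pred + (K * nu).ravel()
            P_new = (np.eye(2) - K @ H) @ P_pred
            return x_new, P_new, z
    return x_pred, P_pred, None          # no valid match (e.g. occlusion)

x = np.array([0.0, 1.0])                 # position 0, velocity 1
P = np.eye(2)
# Two candidate matches: an outlier far from the prediction, then the
# true match near the predicted position of 1.0.
x, P, accepted = gated_update(x, P, candidates=[25.0, 1.2])
```

When every candidate fails the gate, the filter coasts on its prediction, which is one way an occluded feature can be carried until it reappears.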
One of the difficulties in semantic object tracking is tracing the object precisely as time goes on. In this paper, a system for 3D semantic object motion tracking is proposed. Different from other approaches, which have used regular shapes as the tracked region, our system starts with a specially designed Color Image Segmentation Editor (CISE) to devise shapes that more accurately describe the region of interest (ROI) to be tracked. CISE is an integration of edge and region detection, based on edge-linking, split-and-merge, and energy minimization for active contour detection. An ROI is further segmented into single-motion blobs by considering the constancy of the motion parameters in each blob. The tracking of each blob is based on an extended Kalman filter derived from linearization of a constraint equation satisfied by the pinhole model of a camera. The Kalman filter allows the tracker to project the uncertainties associated with the blob feature points into the next frame. Feature point extraction is done by a similarity test over an optimized semantic search region. Extracted feature points serially update the motion parameters. Experimental results show the different stages of the system.
In this paper, a 3D semantic object motion tracking method based on Kalman filtering is proposed. First, we use a specially designed Color Image Segmentation Editor (CISE) to devise shapes that more accurately describe the object to be tracked. CISE is an integration of edge and region detection, based on edge-linking, split-and-merge, and energy minimization for active contour detection. An ROI is further segmented into single-motion blobs by considering the constancy of the motion parameters in each blob. Over short time intervals, each blob can be tracked separately and, over longer times, the blobs can be allowed to fragment and coalesce into new blobs as the motion evolves. The tracking of each blob is based on a Kalman filter derived from linearization of a constraint equation satisfied by the pinhole model of a camera. The Kalman filter allows the tracker to project the uncertainties associated with a blob center (or with the coordinates of any other features) into the next frame. This projected uncertainty region can then be searched for the pixels belonging to the blob. Future work includes investigation of the effects of illumination changes and simultaneous tracking of multiple targets.
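The uncertainty-projection idea above can be illustrated as follows: linearizing the pinhole projection u = f x/z, v = f y/z around the predicted 3D blob position gives a Jacobian J, and J P Jᵀ maps the 3D state covariance P into a 2D image-plane covariance that defines the elliptical search region in the next frame. The focal length and covariance values below are illustrative assumptions.

```python
import numpy as np

f = 500.0                                 # focal length in pixels (assumed)

def project(X):
    """Pinhole projection of a 3D camera-frame point to pixels."""
    x, y, z = X
    return np.array([f * x / z, f * y / z])

def projection_jacobian(X):
    """Jacobian of the pinhole projection w.r.t. (x, y, z)."""
    x, y, z = X
    return np.array([
        [f / z, 0.0,  -f * x / z**2],
        [0.0,   f / z, -f * y / z**2],
    ])

X = np.array([0.2, -0.1, 4.0])            # predicted 3D blob position
P = np.diag([0.01, 0.01, 0.04])           # predicted 3D covariance (assumed)

J = projection_jacobian(X)
pixel = project(X)                        # predicted image location
C = J @ P @ J.T                           # 2x2 image-plane covariance
# 3-sigma half-widths of an axis-aligned box bounding the search ellipse:
half_widths = 3.0 * np.sqrt(np.diag(C))
```

Searching only this projected region, rather than the whole frame, is what keeps the per-blob pixel search tractable as the uncertainty grows and shrinks over time.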