In this paper, we propose a new method for classifying human actions using Procrustes shape theory. First, we extract a pre-shape configuration vector of landmarks from each frame of an image sequence representing an arbitrary human action, and then derive the Procrustes fit vector for each pre-shape configuration vector. Second, we extract a set of pre-shape vectors from training samples stored in a database and compute a Procrustes mean shape vector for these pre-shape vectors. Third, we extract a sequence of pre-shape vectors from an input video and project this sequence onto the tangent space with respect to the pole, taken as the sequence of mean shape vectors corresponding to a target video. We then calculate the Procrustes distance between the sequence of projected pre-shape vectors on the tangent space and the sequence of mean shape vectors. Finally, we classify the input video into the human action class with the minimum Procrustes distance. We assess the performance of the proposed method on one public dataset, the Weizmann human action dataset. Experimental results show that the proposed method performs very well on this dataset.
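The core Procrustes machinery of the first steps — mapping landmarks to pre-shapes, measuring the full Procrustes distance, and classifying by minimum distance — can be sketched as follows (a minimal illustration; the function names and the landmark extraction are our own assumptions, not the paper's implementation):

```python
import numpy as np

def preshape(landmarks):
    """Map a k x 2 landmark matrix to its pre-shape: translation is
    removed by centering, scale by normalizing to unit Frobenius norm."""
    X = np.asarray(landmarks, dtype=float)
    X = X - X.mean(axis=0)
    return X / np.linalg.norm(X)

def full_procrustes_distance(X, Y):
    """Full Procrustes distance between two pre-shapes X and Y,
    optimized over rotation and scale (reflections allowed in this sketch)."""
    s = np.linalg.svd(X.T @ Y, compute_uv=False)
    c = s.sum()  # best correlation achievable by rotating/scaling X onto Y
    return np.sqrt(max(0.0, 1.0 - c * c))

def classify(sequence, class_means):
    """Assign a sequence of pre-shapes to the action class whose mean
    shape sequence gives the minimum total Procrustes distance."""
    costs = {label: sum(full_procrustes_distance(x, m)
                        for x, m in zip(sequence, means))
             for label, means in class_means.items()}
    return min(costs, key=costs.get)
```

Because pre-shapes are invariant to translation and scale, and the distance optimizes over rotation, two similarity-transformed copies of the same configuration have distance zero.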
Establishing correspondences between two hyper-graphs is a fundamental issue in computer vision, pattern recognition, and machine learning. A hyper-graph is modeled by a feature set in which complex relations are represented by hyper-edges. Hence, a match between two vertex sets determines a hyper-graph matching problem. We propose a new bidirectional probabilistic hyper-graph matching method based on the Bayesian inference principle. First, we formulate the hyper-graph matching problem as the maximization of a matching score function over all permutations of the vertices. Second, we derive an algebraic relation between the hyper-edge weight matrices and, using Bayes' theorem, obtain the desired vertex-to-vertex probabilistic matching algorithm. Third, we apply the well-known convex relaxation procedure to the probabilistic soft matching matrix to obtain a complete hard matching result. Finally, we conduct comparative experiments on synthetic data and real images. The experimental results show that the proposed method clearly outperforms existing algorithms, especially in the presence of noise and outliers.
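The final discretization step — turning the probabilistic soft matching matrix into a hard vertex-to-vertex assignment — amounts to choosing the permutation with maximum total matching probability. A brute-force sketch (our own illustration, adequate only for small graphs; it stands in for, and is not, the paper's convex relaxation procedure):

```python
import itertools

def harden(P):
    """Turn a soft matching matrix P (P[i][j] = probability that vertex i
    of the first hyper-graph matches vertex j of the second) into a hard
    assignment: the permutation maximizing the total matching probability.
    Exhaustive search over all n! permutations; at scale one would use the
    Hungarian algorithm or a relaxation instead."""
    n = len(P)
    return max(itertools.permutations(range(n)),
               key=lambda perm: sum(P[i][j] for i, j in enumerate(perm)))
```

For example, `harden([[0.9, 0.1], [0.2, 0.8]])` picks the identity assignment `(0, 1)`, since 0.9 + 0.8 beats 0.1 + 0.2.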
We present a new segmentation method for medical volume images based on the level set framework. The method is driven by a curve evolution model based on the geometric variational principle and level set theory. The speed function in the level set approach is a hybrid combination of three integral measures derived from the calculus of variations: a robust alignment term, an active region term, and a smoothing term. These measures help to detect the precise location of the target object and prevent the boundary leakage problem. The proposed method has been tested on various medical volume images containing tumor regions to evaluate its performance both visually and quantitatively. The experimental results show that our method is effective and performs favorably compared with traditional approaches.
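The smoothing term in such speed functions is typically a mean curvature flow. A minimal 2D numerical sketch of the curvature computation and one explicit evolution step with a weighted hybrid speed (the weights, time step, and helper names are illustrative assumptions, not the paper's formulation):

```python
import numpy as np

def curvature(phi, h=1.0):
    """Mean curvature div(grad(phi)/|grad(phi)|) of a 2D level set
    function phi sampled on a grid with spacing h (central differences)."""
    gy, gx = np.gradient(phi, h)
    norm = np.sqrt(gx**2 + gy**2) + 1e-12
    return np.gradient(gx / norm, h, axis=1) + np.gradient(gy / norm, h, axis=0)

def evolve(phi, region, align, h=1.0, dt=0.0005, w=(1.0, 1.0, 0.2), steps=1):
    """Explicit evolution of phi under a hybrid speed: alignment term +
    region term + curvature smoothing, weighted by w (illustrative values).
    dt must respect a CFL-type bound for the parabolic curvature term."""
    for _ in range(steps):
        speed = w[0] * align + w[1] * region + w[2] * curvature(phi, h)
        gy, gx = np.gradient(phi, h)
        phi = phi + dt * speed * np.sqrt(gx**2 + gy**2)
    return phi
```

As a sanity check, for a signed distance function of a circle of radius r, the computed curvature on the circle is approximately 1/r, and pure curvature flow shrinks the circle.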
This paper presents a method that can extract and visualize anatomical structures from volumetric medical images by using a 3D level set segmentation method and a hybrid volume rendering technique. First, segmentation using the level set method is conducted through a surface evolution framework based on the geometric variational principle. This approach handles topological changes in the deformable surface by using geometric integral measures and level set theory. These integral measures contain a robust alignment term, an active region term, and a mean curvature term. By using the level set method with a new hybrid speed function derived from these geometric integral measures, an accurate deformable surface can be extracted from a volumetric medical data set. Second, we employ a hybrid volume rendering approach to visualize the extracted deformable structures. Our method combines indirect and direct volume rendering techniques. Segmented objects within the data set are rendered locally by surface rendering on an object-by-object basis. Globally, the data set is rendered by direct volume rendering (DVR). The two rendered results are then combined in a merging step. This is especially useful when inner structures should be visualized together with semi-transparent outer parts. This merging step is similar to the focus-plus-context approach known from information visualization. Finally, we verified the accuracy and robustness of the proposed segmentation method on various medical volume images. The volume rendering results of the segmented 3D objects show that our proposed method can accurately extract and visualize human organs from various multimodality medical volume images.
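The merging step can be illustrated as per-pixel alpha compositing of the locally surface-rendered focus objects over the DVR context image (a sketch under the assumption of straight-alpha RGBA buffers; the paper's actual blending may differ):

```python
import numpy as np

def merge(focus_rgba, context_rgba):
    """Composite a surface-rendered focus image 'over' a DVR context image.
    Both inputs are H x W x 4 float arrays with straight (non-premultiplied)
    alpha; the result is returned with straight alpha as well."""
    af = focus_rgba[..., 3:4]
    ac = context_rgba[..., 3:4]
    a_out = af + ac * (1.0 - af)                       # combined coverage
    rgb = focus_rgba[..., :3] * af + context_rgba[..., :3] * ac * (1.0 - af)
    # un-premultiply where there is any coverage at all
    rgb = np.where(a_out > 0, rgb / np.where(a_out > 0, a_out, 1.0), 0.0)
    return np.concatenate([rgb, a_out], axis=-1)
```

An opaque focus pixel fully hides the context behind it, while a fully transparent focus pixel lets the semi-transparent DVR context show through unchanged — exactly the focus-plus-context behavior described above.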
We propose an image registration technique using spatial and intensity information. The registration is conducted by the use of a measure based on the entropy of conditional probabilities. To achieve the registration, we first define a modified conditional entropy (MCE) computed from the joint histograms of the area intensities of two given images. In order to incorporate spatial information into the traditional registration measure, we use the gradient vector flow field. The MCE is then computed from the gradient vector flow intensity (GVFI), which combines the gradient information and the intensity values of the original images. To evaluate the performance of the proposed registration method, we conduct various experiments with our method as well as an existing method based on the mutual information (MI) criterion. We evaluate the precision of the MI- and MCE-based measures by comparing the registrations obtained from MR images and transformed CT images. The experimental results show that our proposed method is the more accurate technique.
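The entropy-of-conditional-probabilities idea can be sketched from a joint histogram as follows (computed here on raw intensities for illustration; in the paper the MCE is computed on GVFI values, and the exact modification is not reproduced here):

```python
import numpy as np

def conditional_entropy(a, b, bins=32):
    """Estimate H(A|B) = -sum_{a,b} p(a,b) * log(p(a,b) / p(b))
    from the joint intensity histogram of two same-sized images."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p_ab = hist / hist.sum()                    # joint distribution p(a, b)
    p_b = p_ab.sum(axis=0, keepdims=True)       # marginal p(b)
    ratio = p_ab / np.where(p_b > 0, p_b, 1.0)  # p(a | b) where defined
    nz = p_ab > 0
    return float(-np.sum(p_ab[nz] * np.log(ratio[nz])))
```

The measure is zero whenever one image is a deterministic function of the other (perfect alignment of identical structures) and grows as the intensity mapping between the images becomes ambiguous, which is what makes it usable as a registration criterion to minimize.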
We propose a new registration method that aligns two medical images by combining ordinary Procrustes analysis with a maximum likelihood framework based on the EM algorithm. For the initial registration, we first extract feature points representing the shape information from the boundary of the segmented object, and then apply ordinary Procrustes analysis to register the two sets of extracted feature points exactly. For the final registration, we define a new alignment measure with a log-likelihood function derived from Bayes' theorem and the maximum likelihood method with the EM algorithm. In the E-step, we compute the posterior distribution of the label variables by taking the expectation of the log-likelihood function. In the M-step, we derive the estimators for all parameters by maximizing the log-likelihood function. We can then optimize the transformation parameters for the final image registration by iteratively applying this measure. Finally, we conduct various experiments to analyze the accuracy and precision of the proposed method. The experimental results show that our method has great potential for registering images acquired by multimodality instruments.
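The initial registration step — ordinary Procrustes analysis between matched feature point sets — has a closed-form least-squares solution via the SVD. A minimal sketch of the standard OPA (our own illustration, not the authors' code; the EM refinement stage is omitted):

```python
import numpy as np

def ordinary_procrustes(X, Y):
    """Find scale s, rotation R, and translation t minimizing
    || s * X @ R + t - Y ||^2 for matched point sets X, Y (n x d)."""
    mx, my = X.mean(axis=0), Y.mean(axis=0)
    Xc, Yc = X - mx, Y - my                  # remove location
    U, S, Vt = np.linalg.svd(Xc.T @ Yc)
    R = U @ Vt                               # optimal orthogonal map (may reflect)
    s = S.sum() / (Xc ** 2).sum()            # optimal scale
    t = my - s * mx @ R                      # translation matching the centroids
    return s, R, t
```

When the two point sets are related by an exact similarity transform, the recovered parameters reproduce it and the residual is zero, which is why OPA serves well as an initialization before a finer statistical refinement.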