This paper presents a novel texture description approach for image texture classification that is robust to variations in rotation, scale, and illumination. A limitation of traditional methods is that they are, to varying degrees, sensitive to such changes. To overcome this problem, we propose a novel Local Haar Binary Pattern (LHBP)
based framework that is invariant to global rotation, scale, and illumination change. Our method consists of two components:
feature extraction and scale self-adaptive classification. Globally rotation-invariant LHBP histogram features are
extracted that are robust to variations in illumination and global rotation, and a scale self-adaptive strategy is used to
optimize the classification of textures at different scales. Evaluation results on the Outex and Brodatz databases demonstrate the
significant advantages of the proposed approach over existing algorithms.
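The abstract does not spell out the LHBP operator itself, but the rotation-invariant binary-pattern histogram it builds on can be sketched in Python. The function below is an illustrative assumption (a plain 8-neighbour binary pattern mapped to its minimal circular rotation), not the paper's Haar-based implementation; it shows why such histograms are unchanged by global rotation and by monotonic illumination shifts:

```python
import numpy as np

def rotation_invariant_lbp_hist(image, bins=36):
    """Illustrative rotation-invariant local binary pattern histogram.

    Each interior pixel's 8 neighbours are thresholded against the centre;
    the 8-bit code is replaced by its minimum over all circular bit
    rotations, which cancels global image rotations by multiples of 45
    degrees. Thresholding also cancels monotonic illumination changes.
    """
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    # Offsets of the 8 neighbours, listed in circular order.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            bits = [1 if img[y + dy, x + dx] >= img[y, x] else 0
                    for dy, dx in offsets]
            # Minimum over circular shifts -> rotation-invariant code.
            code = min(int("".join(map(str, bits[i:] + bits[:i])), 2)
                       for i in range(8))
            codes.append(code)
    hist, _ = np.histogram(codes, bins=bins, range=(0, 256), density=True)
    return hist
```

Because the codes depend only on grey-level comparisons, the histogram of an image equals that of the same image after a positive affine intensity change, and a 90-degree rotation only circularly shifts each pattern, leaving the minimal code unchanged.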
Robustly tracking moving objects in image sequences is challenging because of occlusions, and previous
methods have not exploited depth information sufficiently. For multiple-camera scenes, we propose a 3D
silhouette tracking framework that resolves occlusions and recovers object appearances in 3D space, which enhances
tracking effectiveness. In the framework, 2D object silhouettes are first obtained with <i>Snake</i>. A <i>Voxel Space
Carving</i> procedure is then introduced to simultaneously generate the occlusion model and the visual hull of the objects. Next,
we adopt a <i>Particle Filter</i> to select the valuable parts of the occlusion model and combine them with the initial object
silhouettes to generate an updated visual hull. Finally, the updated visual hulls of the objects are re-projected to
each view to obtain their final contours. Experiments on the public LAB and SCULPTURE datasets
validate the feasibility and effectiveness of our framework.
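The core carving idea — a voxel belongs to the visual hull only if it projects inside every camera's silhouette — can be illustrated with a toy orthographic version. The paper's procedure uses calibrated camera views and also builds an occlusion model; the orthographic projections and the function name below are simplifying assumptions for illustration only:

```python
import numpy as np

def carve_visual_hull(silhouettes):
    """Illustrative orthographic voxel space carving.

    `silhouettes` maps an axis index (0, 1, or 2) to a 2D boolean mask;
    a voxel survives only if its orthographic projection along every
    given axis falls inside the corresponding silhouette.
    """
    n = next(iter(silhouettes.values())).shape[0]
    hull = np.ones((n, n, n), dtype=bool)
    for axis, mask in silhouettes.items():
        # Broadcasting the 2D mask along the projection axis carves away
        # every voxel whose projection lies outside the silhouette.
        hull &= np.expand_dims(mask, axis=axis)
    return hull
```

With three identical disk silhouettes, for example, the carved hull is the intersection of three orthogonal cylinders: voxels near the centre survive while corner voxels are removed.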
Automatic linguistic annotation is a promising way to bridge the semantic gap in content-based image retrieval.
However, two crucial issues are not well addressed by state-of-the-art annotation algorithms: 1. the <i>Small Sample Size (3S)</i>
problem in keyword classifier/model learning; 2. most annotation algorithms cannot be extended to real-time online use
because of their low computational efficiency. This paper presents a novel <i>Manifold-based Biased Fisher Discriminant
Analysis (MBFDA)</i> algorithm that addresses these two issues through transductive semantic learning and keyword filtering. To
address the <i>3S</i> problem, Co-Training based manifold learning is adopted for keyword model construction. To achieve
real-time annotation, a <i>Biased Fisher Discriminant Analysis (BFDA)</i> based semantic feature reduction algorithm is
presented for keyword confidence discrimination and semantic feature reduction. Unlike all existing annotation
methods, <i>MBFDA</i> views image annotation from a novel <i>Eigen</i> semantic feature (corresponding to keywords)
selection perspective. As demonstrated in the experiments, our <i>manifold-based biased Fisher discriminant analysis</i> annotation
algorithm outperforms classical and state-of-the-art annotation methods (1. K-NN Expansion; 2. One-to-All SVM; 3. PWC-SVM) in both computational time and annotation accuracy by a large margin.
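The discriminant machinery underlying BFDA can be sketched with the standard two-class Fisher discriminant, which projects features onto the direction w = S_w^{-1}(m_pos - m_neg); the paper's biased variant and its manifold extension differ in detail, so the sketch below is an assumed baseline rather than the MBFDA algorithm itself:

```python
import numpy as np

def fisher_direction(X_pos, X_neg, reg=1e-6):
    """Standard two-class Fisher discriminant direction.

    Returns the unit vector w = Sw^{-1} (m_pos - m_neg), where Sw is the
    within-class scatter. Projecting a feature vector onto w gives a
    one-dimensional score that can be thresholded as a keyword
    confidence. A small ridge `reg` stabilises the inversion of Sw.
    """
    m_pos = X_pos.mean(axis=0)
    m_neg = X_neg.mean(axis=0)
    # Within-class scatter, regularised for numerical stability.
    Sw = np.cov(X_pos, rowvar=False) + np.cov(X_neg, rowvar=False)
    Sw += reg * np.eye(Sw.shape[0])
    w = np.linalg.solve(Sw, m_pos - m_neg)
    return w / np.linalg.norm(w)
```

In an annotation setting, images labelled with a keyword would form `X_pos` and the rest `X_neg`; images whose projections score above a chosen threshold receive the keyword, which is the one-dimensional confidence discrimination the reduction step relies on.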