Group-wise registration has recently been proposed for consistent registration of all images in a dataset. Because all images must be registered simultaneously, with a large number of deformation parameters to optimize, the number of images that current group-wise registration methods can handle is limited by the CPU and physical memory capacity of a typical computer. To overcome this limitation, we present a hierarchical group-wise registration method for feasible registration of large image datasets. Our basic idea is to decompose the large-scale group-wise registration problem into a series of small-scale registration problems, each of which can be solved easily. In particular, we use a novel affinity propagation method to hierarchically cluster a group of images into a pyramid of classes. Images in the same class are then registered group-wise to their own center image, and the center images of the different classes are in turn registered group-wise from the lower levels to the upper levels of the pyramid. A final atlas for the whole image dataset is synthesized when the registration process reaches the top of the pyramid. By applying this hierarchical image clustering and atlas synthesis strategy, we can efficiently and effectively perform group-wise registration on a large image dataset and map each image into the atlas space. More importantly, experimental results on both real and simulated data confirm that the proposed method achieves more robust and accurate registration than conventional group-wise registration algorithms.
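To make the hierarchical strategy concrete, the sketch below clusters images level by level with affinity propagation and treats each exemplar as the class center. The similarity function and the register_groupwise() routine are assumed placeholders (any pairwise image similarity and any group-wise registration method), not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

def register_groupwise(members, target):
    """Placeholder for any group-wise registration routine that warps every
    member image toward the shared target (center) image."""
    pass

def build_pyramid(images, similarity, max_level_size=10):
    """Cluster images level by level; each level keeps only the class centers."""
    level = list(images)
    pyramid = [level]
    while len(level) > max_level_size:
        # Affinity propagation on a precomputed similarity matrix picks an
        # exemplar (center image) for every class automatically.
        S = np.array([[similarity(a, b) for b in level] for a in level])
        ap = AffinityPropagation(affinity="precomputed", random_state=0).fit(S)
        centers = []
        for k, exemplar_idx in enumerate(ap.cluster_centers_indices_):
            members = [level[i] for i in np.where(ap.labels_ == k)[0]]
            center = level[exemplar_idx]
            register_groupwise(members, target=center)  # register members to their center
            centers.append(center)
        if len(centers) >= len(level):   # no reduction; stop to avoid looping forever
            break
        level = centers                  # the centers form the next pyramid level
        pyramid.append(level)
    # Group-wise registration of the remaining centers yields the final atlas.
    register_groupwise(level, target=level[0])
    return pyramid
```

Affinity propagation is convenient here because it selects exemplars from the data itself, so each class center is an actual image that the other class members can be warped to.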
This paper proposes an online content filtering system that can filter unwanted content from the Internet and supports searching, detecting, and recognizing images, video, and other multimedia data. The approach consists of three parts. First, texture features are extracted with quasi-Gabor filters; these filters are constructed at different orientations and scales in the frequency domain of the image, which avoids convolving the filters with the image in the spatial domain. Second, the extracted features are sent to a Kohonen neural network for dimensionality reduction. Third, the outputs of the Kohonen network are fed to a neural network classifier to obtain the final classification result. The proposed approach has been applied in our content monitoring system, which filters unwanted images and raises alarms according to pre-defined requirements.
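As a rough illustration of the frequency-domain filtering step, the snippet below builds a small bank of Gabor-like transfer functions directly in the frequency domain and multiplies them with the image spectrum, so no spatial convolution is performed. The bank parameters (orientations, scales, bandwidths) and the mean/std texture statistics are illustrative assumptions, not the paper's exact quasi-Gabor construction.

```python
import numpy as np

def gabor_bank_features(image, n_orient=4, n_scale=3):
    """Texture features from frequency-domain Gabor-like filtering of a 2-D grayscale image."""
    rows, cols = image.shape
    F = np.fft.fftshift(np.fft.fft2(image))            # image spectrum
    u = np.fft.fftshift(np.fft.fftfreq(cols))
    v = np.fft.fftshift(np.fft.fftfreq(rows))
    U, V = np.meshgrid(u, v)
    radius, angle = np.hypot(U, V), np.arctan2(V, U)    # polar frequency coordinates
    feats = []
    for s in range(n_scale):
        f0 = 0.25 / (2 ** s)                            # center frequency for this scale
        for o in range(n_orient):
            theta = o * np.pi / n_orient
            dtheta = np.angle(np.exp(1j * (angle - theta)))   # wrapped angular distance
            # Radial band-pass ring times an orientation lobe: a Gabor-like transfer function.
            H = (np.exp(-((radius - f0) ** 2) / (2 * (0.5 * f0) ** 2))
                 * np.exp(-(dtheta ** 2) / (2 * (np.pi / 8) ** 2)))
            response = np.abs(np.fft.ifft2(np.fft.ifftshift(F * H)))
            feats.extend([response.mean(), response.std()])   # simple texture statistics
    return np.array(feats)
```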
Recently, background modeling methods that employ the Time-Adaptive, Per-Pixel, Mixture of Gaussians (TAPPMOG) model have become increasingly popular owing to their appealing properties for video surveillance. Nevertheless, they are unable to monitor global changes in the scene, because they model the background as a set of independent pixel processes. In this paper, a Gibbs Distribution-Markov Random Field (GDMRF) model is applied to background modeling, and a simulated annealing algorithm is developed to extract the background from video sequences. Experimental comparison between our method and a classic pixel-based approach shows that the proposed method is effective in recovering from sudden global illumination changes of the background and adapts well to objects moving in the background.
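A compact sketch of MRF labeling by simulated annealing is given below, assuming an Ising-style smoothness prior and a user-supplied per-pixel data cost. The energy terms and cooling schedule are illustrative and do not reproduce the paper's exact GDMRF formulation.

```python
import numpy as np

def anneal_labels(data_cost, beta=1.0, t0=3.0, cooling=0.95, n_iter=30, seed=None):
    """data_cost: (H, W, 2) array with the cost of labeling each pixel 0 (background) or 1."""
    rng = np.random.default_rng(seed)
    H, W, _ = data_cost.shape
    labels = np.argmin(data_cost, axis=2)                # greedy initialization
    T = t0
    for _ in range(n_iter):
        for y in range(H):
            for x in range(W):
                # Smoothness term: how many 4-neighbors disagree with each candidate label.
                nbrs = [labels[yy, xx] for yy, xx in
                        ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                        if 0 <= yy < H and 0 <= xx < W]
                E = [data_cost[y, x, l] + beta * sum(n != l for n in nbrs) for l in (0, 1)]
                # Gibbs sampling at temperature T; as T -> 0 this approaches the MAP label.
                p1 = 1.0 / (1.0 + np.exp((E[1] - E[0]) / T))
                labels[y, x] = int(rng.random() < p1)
        T *= cooling                                     # cooling schedule
    return labels
```

Because the neighborhood term couples adjacent pixels, a global event such as a sudden illumination change shifts the whole energy landscape at once instead of being absorbed pixel by pixel, which is the advantage over purely per-pixel models described above.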
A new method is proposed to extract features from an object for matching and recognition. The proposed features combine local and global characteristics: local characteristics from a 1-D signature function defined at each pixel on the object boundary, and global characteristics from the moments generated from that signature function. The boundary of the object is first extracted; the signature function is then generated by computing, at every point on the boundary, the angle between two lines drawn from that point, as a function of position along the boundary. This signature function is position, scale and rotation invariant (PSRI). The shape of the signature function is then described quantitatively using moments, which serve as global descriptors of the local feature set. Using moments as the final features instead of the signature function itself reduces the time and complexity of an object matching application. Multiscale moments are computed to produce several sets of moments, which yield more accurate matching; the multiscale technique is essentially a coarse-to-fine procedure and makes the proposed method more robust to noise. The method is designed to match and recognize objects under simple transformations such as translation, scaling, rotation and skewing. A simple logo indexing system is implemented to illustrate the performance of the proposed method.
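The following sketch illustrates the signature-plus-moments idea, assuming the boundary is already available as an ordered list of (x, y) points. The neighborhood offset k and the moment orders are illustrative choices rather than the paper's settings.

```python
import numpy as np

def boundary_signature(boundary, k=5):
    """Angle at each boundary point between the lines to its k-th predecessor and successor."""
    pts = np.asarray(boundary, dtype=float)
    n = len(pts)
    sig = np.empty(n)
    for i in range(n):
        v1 = pts[(i - k) % n] - pts[i]
        v2 = pts[(i + k) % n] - pts[i]
        cross = v1[0] * v2[1] - v1[1] * v2[0]            # signed area term
        sig[i] = np.arctan2(cross, np.dot(v1, v2))       # signed angle between the two lines
    return sig

def signature_moments(sig, orders=(1, 2, 3, 4)):
    """Central moments of the 1-D signature, used as compact global shape features."""
    mu = sig.mean()
    return np.array([np.mean((sig - mu) ** p) for p in orders])
```

Since the signature is built from angles only, it is unchanged by translation, rotation, and uniform scaling of the boundary, and comparing a handful of moments is much cheaper than aligning and comparing whole signatures during matching.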