The users of digital cameras often take multiple photographs of the same scene. Such multiple shots usually have a special
meaning to the photographer, and require further action, e.g., selecting the best exposure, composition, or portrait, or stitching
several images into a panorama or composite image. We present a method for fast retrieval of all groups of shots taken from
the same viewpoint. This task is different from the recently emerged near-duplicate detection problem because, in our case,
the multiple shots differ not only by photometric and simple geometric transformations: they can have little or no overlap,
and large variations in scene objects may be present. We therefore solve a general multi-image registration problem by extracting
local image descriptors, matching them, and recovering the geometric transformation between images. Initially, the
photo-collection is divided into time-based clusters, which are then refined by extracting connected components from the global
image registration graph. The method has been applied to real consumer photo-collections, and we show that depending on
individual camera usage styles, user collections contain from 15% to 90% of photos requiring further attention. The presented
system automates the otherwise manual work of selecting a series of similar images.
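As a concrete illustration of the refinement step (a minimal sketch under our own assumptions, not the authors' implementation; `viewpoint_groups` and `registered_pairs` are hypothetical names): once pairwise registrations within a time cluster are known, the groups of shots taken from the same viewpoint are exactly the connected components of the registration graph, which a union-find structure extracts in near-linear time.

```python
# Sketch (hypothetical helper, not the paper's code): refine time-based
# clusters into same-viewpoint groups by taking connected components of
# the pairwise registration graph with a union-find structure.

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def viewpoint_groups(n_images, registered_pairs):
    """registered_pairs: (i, j) index pairs whose registration succeeded."""
    uf = UnionFind(n_images)
    for i, j in registered_pairs:
        uf.union(i, j)
    groups = {}
    for i in range(n_images):
        groups.setdefault(uf.find(i), []).append(i)
    return list(groups.values())
```

Each resulting group can then be handed to a stitching or best-shot-selection step as a unit.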
This paper presents a multi-image registration method that aims to recognize and extract multiple panoramas from an
unordered set of images without user input. The panorama-recognition method introduced by Brown and Lowe is based
on extracting a full set of scale-invariant image features and fast matching in feature space, followed by post-processing
procedures. We propose a different approach in which the full set of descriptors is not required: a small number of them are
used to register a pair of images. We propose feature-point indexing based on corner strength. By matching descriptor
pairs with similar corner strengths, we update clusters in rotation-scale accumulators, and a probabilistic approach determines
when these clusters are further processed with RANSAC to find inliers of image homography. If the number of inliers and
global similarity between images are sufficient, a fast geometry-guided point matching is performed to improve the accuracy
of registration. A global registration graph, whose edge weights are proportional to the image similarity in the area of overlap,
is updated with each new registration. This allows the prediction of undiscovered image registrations by finding the shortest
paths and corresponding transformation chains. We demonstrate our approach using typical image collections containing
multiple panoramic sequences.
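The transformation-chaining step can be sketched as follows (a minimal illustration under our own assumptions, not the paper's implementation; `shortest_path` and `predict_homography` are hypothetical names): an unobserved registration between images i and k is predicted by finding a path from i to k in the registration graph and composing the pairwise 3x3 homographies along it.

```python
# Sketch (our assumptions, not the paper's code): predict an undiscovered
# registration by chaining pairwise homographies along a graph path.
# H[(i, j)] maps image i into image j in homogeneous coordinates.

from collections import deque
import numpy as np

def shortest_path(graph, start, goal):
    """BFS over an adjacency dict {node: [neighbors]}; returns a node list
    from start to goal, or None if the images are not connected."""
    prev = {start: None}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        if u == goal:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in graph.get(u, []):
            if v not in prev:
                prev[v] = u
                queue.append(v)
    return None

def predict_homography(path, H):
    """Compose pairwise homographies along the path (later transforms are
    applied after earlier ones); normalize so that M[2, 2] == 1."""
    M = np.eye(3)
    for i, j in zip(path, path[1:]):
        M = H[(i, j)] @ M
    return M / M[2, 2]
```

Shorter chains accumulate less registration error, which is why a shortest path (rather than an arbitrary one) is the natural choice here.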
In this paper we address the problem of registering images acquired under unknown conditions, including acquisition at different times, from different points of view, and possibly with different types of sensors, where conventional approaches based on feature correspondence or area correlation are likely to fail or to provide unreliable estimates. The result of image registration can be used as an initial step in many remote-sensing applications such as change detection, terrain reconstruction, and image-based sensor navigation. The key idea of the proposed method is to estimate a global parametric transformation between images (e.g., a perspective or affine transformation) from a set of local, region-based estimates of rotation-scale-translation (RST) transformations. These RST transformations form a cluster in rotation-scale space. Each RST transformation is registered by matching, in log-polar space, the regions centered at the locations of the corresponding interest points. Estimation of the correspondence between interest points is performed simultaneously with registration of the local RST transformations. A subset of corresponding points, or equivalently a subset of local RST transformations, is then selected by a robust estimation method, and a global transformation that is not biased by outliers is computed from it. The method is capable of registering images without any a priori knowledge of the transformation between them. It was tested on many images taken under different conditions by different sensors and on thousands of calibrated image pairs, and in all cases it shows very accurate registration results. We demonstrate the performance of our approach using several datasets and compare it with another state-of-the-art method based on the SIFT descriptor.
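The robust-selection step can be sketched as follows (a minimal illustration under our own assumptions; `fit_affine` and `ransac_affine` are hypothetical helpers, not the paper's code): given interest-point correspondences produced by the local RST registrations, a small RANSAC loop selects an outlier-free subset and fits the global affine transformation to it by least squares, so that the result is not biased by outliers.

```python
# Sketch (our assumptions): robust global affine estimation from point
# correspondences via a minimal RANSAC loop plus least-squares refit.

import random
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine fit: dst ~ A @ src + t.
    src, dst: (n, 2) arrays; returns a (2, 3) matrix [A | t]."""
    n = len(src)
    M = np.hstack([src, np.ones((n, 1))])              # (n, 3) design matrix
    params, *_ = np.linalg.lstsq(M, dst, rcond=None)   # (3, 2): [A^T; t]
    return params.T                                    # (2, 3): [A | t]

def ransac_affine(src, dst, iters=200, thresh=2.0, seed=0):
    """Select the largest inlier set over random minimal samples, then
    refit the affine transform on those inliers only."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        idx = rng.sample(range(len(src)), 3)           # minimal affine sample
        P = fit_affine(src[idx], dst[idx])
        pred = src @ P[:, :2].T + P[:, 2]
        err = np.linalg.norm(pred - dst, axis=1)       # reprojection error
        inliers = np.flatnonzero(err < thresh)
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return fit_affine(src[best_inliers], dst[best_inliers]), best_inliers
```

A perspective (homography) model would follow the same pattern with a 4-point minimal sample; the abstract's method additionally exploits the fact that consistent local RST estimates cluster in rotation-scale space, which this sketch omits.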