A nonparametric method to define a pixel neighborhood in catadioptric images is presented. The method is based on an accurate modeling of the mirror shape by means of polarization imaging. Unlike most processing methods in the literature, this method is nonparametric and respects the catadioptric image’s anamorphosis. The neighborhood is derived directly from the two polarization parameters: the angle and the degree of polarization. Regardless of the shape of the catadioptric sensor’s mirror (including noncentral configurations), image processing techniques such as image derivation, edge detection, interest point detection, and image matching can be performed efficiently.
We present an efficient measure of overlap between two collinear segments which considerably decreases the overall computational time of an existing segment-based motion estimation and reconstruction algorithm from the literature. We also discuss the special cases in which sparse sampling of the motion space for initializing the algorithm does not yield a good solution, and suggest using dense sampling instead to overcome the problem. Finally, we demonstrate our work on a real data set.
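To illustrate the kind of overlap measure the abstract refers to (the paper's exact formulation is not reproduced here), a minimal sketch: project both segments onto the direction of the first and take the length of the intersection of the two resulting parameter intervals.

```python
import numpy as np

def collinear_overlap(seg1, seg2):
    """Overlap length between two (nearly) collinear 2D segments.

    Each segment is given as a pair of endpoints. Both segments are
    parameterized along the unit direction of the first one, and the
    overlap is the length of the intersection of the two 1-D intervals.
    Illustrative sketch only; the paper's measure may differ in detail.
    """
    p1, p2 = (np.asarray(p, dtype=float) for p in seg1)
    q1, q2 = (np.asarray(q, dtype=float) for q in seg2)
    d = p2 - p1
    d /= np.linalg.norm(d)  # unit direction of the first segment
    # 1-D parameters of all four endpoints along that direction
    a = sorted([0.0, np.dot(p2 - p1, d)])
    b = sorted([np.dot(q1 - p1, d), np.dot(q2 - p1, d)])
    # interval intersection, clamped at zero when the segments are disjoint
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
```

For example, the segments [(0,0),(2,0)] and [(1,0),(3,0)] overlap over a unit-length interval, while disjoint segments give zero; the computation is a handful of dot products per pair, which is what makes it cheap inside an iterative estimation loop.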
The task of recovering the camera motion relative to the environment (ego-motion estimation) is fundamental to
many computer vision applications, and the field has seen a wide range of approaches to this problem. Typical
approaches are based on point or line correspondences, optical flow, or the so-called direct methods. We present
an algorithm for determining 3D motion and structure from one line correspondence between two perspective
images. Classical methods that rely on supporting lines need at least three images. In this work, however, we
show that a single supporting-line correspondence belonging to a planar surface in space is enough to estimate
the camera's ego-translation, provided the texture on the surface close to the line is sufficiently discriminative.
One line correspondence suffices, and it is not necessary that the two matched line segments contain the projection
of a common part of the corresponding line segment in space. We first recover the camera rotation by matching
vanishing points using existing methods from the literature, and then recover the camera translation.
Experimental results on both synthetic and real images demonstrate the effectiveness of the proposed method.
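The rotation-recovery step described above aligns matched vanishing directions between the two views. One standard way to do this (a sketch under the assumption that matched unit direction vectors are already available; the paper's own matching procedure is not detailed here) is the Kabsch/SVD alignment:

```python
import numpy as np

def rotation_from_vanishing_directions(dirs_a, dirs_b):
    """Least-squares rotation R such that dirs_b[i] ~= R @ dirs_a[i].

    dirs_a, dirs_b: (N, 3) arrays of matched unit direction vectors
    (e.g. back-projected vanishing points in each camera frame).
    Uses the Kabsch algorithm: SVD of the cross-covariance matrix,
    with a sign correction to guarantee a proper rotation (det = +1).
    Illustrative sketch; not the paper's exact implementation.
    """
    A = np.asarray(dirs_a, dtype=float)
    B = np.asarray(dirs_b, dtype=float)
    H = A.T @ B                      # cross-covariance, sum of a_i b_i^T
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])       # reflection guard
    return Vt.T @ D @ U.T
```

With the rotation fixed this way, the remaining unknown between the two views is the translation, which is what reduces the problem to the single supporting-line case discussed in the abstract.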