Active depth from defocus (DFD) eliminates the main limitation of passive DFD, namely its inability to recover depth for scenes composed of weakly textured (or textureless) objects. This is achieved by projecting a dense illumination pattern onto the scene and recovering depth by measuring the local blurring of the projected pattern. Since the illumination pattern forces a strong dominant texture onto the imaged surfaces, the level of blurring can be determined by applying a single local operator tuned to the frequency of the illumination pattern, whereas window-based passive DFD requires a large range of band-pass operators. The choice of this local operator is a key issue in achieving precise and dense depth estimation. Consequently, in this paper we introduce a new focus operator and propose refinements that compensate for the problems associated with a suboptimal local operator and a non-optimized illumination pattern. The developed range sensor has been tested on real images, and the results demonstrate that its performance compares well with that of other implementations in which precise but computationally expensive optimization techniques are employed.
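The abstract does not give the operator itself; purely as an illustration of the underlying idea, a quadrature band-pass (Gabor-like) operator tuned to the projected pattern frequency can serve as a local focus measure. This is a minimal sketch: the function name, the `freq`/`sigma` parameters, and the assumption that the pattern varies horizontally are ours, not the paper's.

```python
import numpy as np

def focus_measure(image, freq, sigma=2.0):
    """Local response energy of a band-pass (Gabor-like) operator
    tuned to the projected pattern frequency `freq` (cycles/pixel).
    Higher energy means a sharper pattern, i.e. less defocus blur.
    Illustrative sketch only; assumes the pattern varies along rows."""
    # Build a 1-D quadrature pair of Gabor kernels at the pattern frequency.
    half = int(3 * sigma)
    x = np.arange(-half, half + 1)
    gauss = np.exp(-x**2 / (2 * sigma**2))
    even = gauss * np.cos(2 * np.pi * freq * x)
    odd = gauss * np.sin(2 * np.pi * freq * x)
    # Convolve each row with both kernels and sum the squared responses,
    # giving a phase-insensitive estimate of local energy in the band.
    resp_e = np.apply_along_axis(np.convolve, 1, image, even, 'same')
    resp_o = np.apply_along_axis(np.convolve, 1, image, odd, 'same')
    return resp_e**2 + resp_o**2
```

Because defocus attenuates exactly the band the operator is tuned to, regions of the image where the projected pattern is more blurred yield lower energy, which can then be mapped to depth via the sensor's calibration.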
Position determination and verification of a mobile robot is a
central theme in robotics research. Several methods have been
proposed for this problem, including the use of visual feedback
information. These vision systems typically aim to extract known
or tracked landmarks from the environment to localize the robot.
Detecting and matching these landmarks is often the most
computationally expensive and error-prone component of the system.
This paper presents a real-time system for robustly matching
landmarks in complex scenes, with subsequent tracking. The vision
system comprises a trinocular head, from which corner points
are extracted. These are then matched subject to robustness
constraints in addition to the trinocular constraints. Finally,
the resulting robustly extracted corners are tracked from frame to
frame to determine the robot's rotational deviations.