Relative spatial pose estimation for autonomous grasping
Abstract
A technique is presented for determining the relative spatial pose between a robotic end effector and a target object to be grasped, without a priori knowledge of the spatial relationship between the camera and the robot. The transformation between the camera coordinate system and the robot coordinate system is computed dynamically from the pose of the end effector relative to each. A previously developed computer vision technique determines the pose of the end effector relative to the camera; the robot geometry and data from the robot controller determine the pose of the end effector relative to the robot. The spatial transformation between the end effector and the target object is then computed in the robot's coordinate system. The algorithm was demonstrated using a five-degree-of-freedom robot and an RGB camera system. Because no fixed spatial relationship between the camera and the robot is assumed, the camera can be repositioned dynamically to optimize the view of the object and the end effector. Further, the iterative nature of the grasping algorithm reduces the effects of camera calibration errors.
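The transform composition the abstract describes can be sketched with homogeneous 4x4 matrices. The sketch below is illustrative, not the paper's implementation: the pose values (`T_cam_ee`, `T_rob_ee`, `T_cam_obj`) are hypothetical placeholders standing in for the vision-derived and kinematics-derived measurements, and numpy is assumed for the matrix algebra.

```python
import numpy as np

def rot_z(theta):
    """Rotation matrix about the z-axis (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def make_pose(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical measurements (placeholders, not real data):
T_cam_ee = make_pose(rot_z(0.3), [0.1, 0.0, 0.5])   # end effector in camera frame (vision)
T_rob_ee = make_pose(rot_z(-0.2), [0.4, 0.2, 0.3])  # end effector in robot frame (kinematics)
T_cam_obj = make_pose(rot_z(0.8), [0.0, 0.1, 0.6])  # target object in camera frame (vision)

# Camera-to-robot transform, computed dynamically from the shared
# end-effector observation (no assumed camera placement):
T_rob_cam = T_rob_ee @ np.linalg.inv(T_cam_ee)

# Target object pose expressed in the robot's coordinate system:
T_rob_obj = T_rob_cam @ T_cam_obj

# Relative pose from end effector to object, i.e. the remaining
# motion to grasp; in an iterative scheme this is re-estimated
# after each move, which damps camera calibration errors:
T_ee_obj = np.linalg.inv(T_rob_ee) @ T_rob_obj
```

Note that `T_ee_obj` algebraically reduces to `inv(T_cam_ee) @ T_cam_obj`, so the end-effector-to-object pose is fully determined by the two camera-frame measurements; the robot frame serves as the common frame in which the motion command is expressed.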
Steve Roach and Michael Magee, "Relative spatial pose estimation for autonomous grasping," Optical Engineering 36(12) (1 December 1997). https://doi.org/10.1117/1.601586
JOURNAL ARTICLE
9 PAGES

