A technique is presented for finding the relative spatial pose between a robotic end effector and a target object to be grasped, without a priori knowledge of the spatial relationship between the camera and the robot. The transformation between the camera's coordinate system and the robot's coordinate system is computed dynamically using knowledge of the end effector's location relative to both the camera and the robot. A previously developed computer vision technique determines the pose of the end effector relative to the camera, while the robot geometry and data from the robot controller are used to determine the pose of the end effector relative to the robot. The spatial transformation between the robot end effector and the target object is then computed with respect to the robot's coordinate system. The algorithm was demonstrated using a five-degree-of-freedom robot and an RGB camera system. Because the camera can be positioned dynamically without any assumed spatial relationship to the robot, the view of the object and the end effector can be optimized. Further, the iterative nature of the grasping algorithm reduces the effects of camera calibration errors.
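The core transform composition described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the pose values are made-up stand-ins for what the vision system and robot kinematics would actually report, and the frame names (`rob`, `cam`, `ee`, `obj`) are assumptions introduced here for clarity.

```python
import numpy as np

def invert_transform(T):
    """Invert a 4x4 homogeneous transform [R t; 0 1] analytically."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

def rot_z(theta, t):
    """Build a homogeneous transform: rotation about z by theta, translation t."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    T[:3, 3] = t
    return T

# Illustrative poses (stand-ins for real sensor and kinematics data):
T_rob_ee = rot_z(0.3, [0.4, 0.1, 0.5])    # end effector in robot frame (from controller/kinematics)
T_cam_ee = rot_z(-0.8, [0.0, 0.2, 1.1])   # end effector in camera frame (from vision)
T_cam_obj = rot_z(0.1, [0.1, -0.1, 0.9])  # target object in camera frame (from vision)

# Camera pose in the robot frame, recovered via the shared end-effector pose:
T_rob_cam = T_rob_ee @ invert_transform(T_cam_ee)

# Target object pose expressed in the robot's coordinate system:
T_rob_obj = T_rob_cam @ T_cam_obj
```

Because `T_rob_cam` is recomputed from current measurements whenever needed, the camera can be moved freely; no fixed camera-to-robot calibration is baked in.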