An iterative series of linear and non-linear transformations can be used to map a user-supplied position command to a set of self-adapting servo commands. Visual information from the manipulator is mapped to a current position estimate, which is then combined with the operator's position command to produce an actuator command. The newly generated actuator commands change the manipulator position, and visual information about the new position closes a feedback loop, eliciting an iterative chain of transformations from visual information into control commands. These iterative transformations continue until the manipulator is within the desired position tolerance. This scheme dynamically adapts to arbitrary loads or changes in dynamical parameters. Such a series of transformations is exactly the kind of processing that neural networks perform, and such architectures are capable of learning these transformations from example inputs. A neural network system is presented that accomplishes the learning and execution of this iterative control scheme. Learning system design issues that arise in such systems are also discussed.
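The iterative feedback loop described above can be sketched in a few lines. This is a minimal illustration only: the function and variable names are hypothetical, the plant is a toy two-dimensional integrator, and a simple proportional map stands in for the learned neural-network transformation that the paper actually proposes.

```python
import numpy as np

def visual_servo(target, read_position, apply_command, control_map,
                 tol=1e-3, max_iters=200):
    """Iterate: visual observation -> control command -> new position,
    until the manipulator is within the desired position tolerance."""
    for _ in range(max_iters):
        position = read_position()              # visual feedback stand-in
        if np.linalg.norm(target - position) < tol:
            break
        apply_command(control_map(position, target))
    return read_position()

# Toy plant: the "manipulator" state moves by the commanded increment.
state = np.zeros(2)

def read_position():
    return state.copy()

def apply_command(cmd):
    state[:] = state + cmd                      # actuator changes the position

def control_map(position, target):
    # Hypothetical stand-in for the learned mapping: proportional control.
    return 0.5 * (target - position)

final = visual_servo(np.array([1.0, -2.0]), read_position, apply_command,
                     control_map)
```

Each pass through the loop is one transformation of visual information into a control command; with the proportional stand-in the position error halves per iteration, so the loop terminates once the tolerance is met.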
Luis R. Lopez,
"Visual Feedback For Robotic Manipulations Under Arbitrary Loading", Proc. SPIE 1002, Intelligent Robots and Computer Vision VII, (27 March 1989); doi: 10.1117/12.960332; https://doi.org/10.1117/12.960332