2 September 1993 Manipulator arm control by neural network with reward/punish learning scheme
In this paper, a neural network with a reward/punish learning scheme is used to control manipulator arms. At each discrete point of the workspace, one neuron per joint is assigned to control the movement of the arm. The inputs to the neuron are the position error and the velocity of the joint. The net input of the neuron, which is the linear combination of the inputs and their weights, is passed through a sigmoid function to generate the final output. The output of the neuron is the torque required to drive the arm to its desired position. The reward/punish learning mechanism adaptively modifies the weights: the weights are punished if the previous move was in the wrong direction, and rewarded otherwise. By iterating this process, the network learns the inverse dynamics of the manipulator without knowledge of its model or forward dynamics. The neurons can then output appropriate torques to hold the manipulator arm at the desired location. Because the learning algorithm is simple, the network learns the inverse dynamics quickly and can therefore be used in real-time applications. A two-link planar manipulator is demonstrated in this paper. The position error and the torque generated for each joint are shown graphically. These figures also show that, once the inverse dynamics of the manipulator has been learned, the network quickly returns the arm to its desired position after step disturbances of +/- 2.5 degrees are injected into the system. Although only a 2-DOF system is illustrated, the concept can be extended to a 6-DOF system.
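The per-joint neuron described above (a sigmoid of a weighted combination of position error and velocity, with weights rewarded or punished according to the direction of the previous move) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the class name, learning-rate value, torque scaling, and update sign convention are all assumptions, since the abstract does not specify them.

```python
import math

class RewardPunishNeuron:
    """One neuron per joint: maps (position error, velocity) to a torque.

    Hypothetical sketch of the scheme in the abstract; the learning rate,
    torque limit, and update rule details are illustrative assumptions.
    """

    def __init__(self, torque_limit=10.0, lr=0.1):
        self.w = [0.0, 0.0]           # weights for [position error, velocity]
        self.torque_limit = torque_limit
        self.lr = lr                  # assumed learning-rate magnitude

    def torque(self, pos_error, velocity):
        # Net input: linear combination of the inputs and their weights.
        net = self.w[0] * pos_error + self.w[1] * velocity
        # Sigmoid squashes the net input to (0, 1); rescale to a
        # symmetric torque range so zero net input gives zero torque.
        sig = 1.0 / (1.0 + math.exp(-net))
        return self.torque_limit * (2.0 * sig - 1.0)

    def learn(self, prev_inputs, moved_toward_goal):
        # Reward (reinforce) the weights if the previous move reduced the
        # error; punish (weaken) them otherwise. The exact sign convention
        # is an assumption for illustration.
        sign = 1.0 if moved_toward_goal else -1.0
        for i, x in enumerate(prev_inputs):
            self.w[i] += sign * self.lr * x
```

Iterating `torque` and `learn` over the discrete workspace points is what lets the network approximate the inverse dynamics without an explicit manipulator model.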
© (1993) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Jann T. Lin and Rafael M. Inigo "Manipulator arm control by neural network with reward/punish learning scheme", Proc. SPIE 1965, Applications of Artificial Neural Networks IV, (2 September 1993);
