We present an improved approach to a previous work on 3D hand tracking that also uses the Microsoft Kinect sensor. The previous implementation tracks the position, orientation, and full articulation of the hand from marker-less visual observations provided by Kinect. Formulated as an optimization problem, the objective of hand tracking is to minimize the discrepancy between a hand depth image obtained from Kinect and a hypothesized 3D hand model. The previous method relied heavily on the best result of the current frame, skin detection data, and depth data, often ending in a "lost-track" state with unrecoverable error, especially when the hand moved faster than the per-frame processing speed. To recover from the lost-track state, we use the skeleton joint data from Kinect to determine the hand position instead of relying on skin data. The joint data is also used to limit the search range of our Particle Swarm Optimization (PSO), allowing for a more efficient search. Consequently, fewer generations are required to obtain a result, which enables higher frame-rate processing. The computationally intensive step of matching the observed hand depth with the hypothesized hand pose is accelerated on a GPGPU. The proposed method also improves reliability by adding a recovery mechanism for quick hand movements, eliminating the need for a user to manually initialize the hand position. Our method does not depend on skin color detection and therefore avoids errors caused by incorrect or spurious skin detection; a user need not cover arm skin by wearing long-sleeved clothing, for example.
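The core idea of bounding the PSO search with the skeleton-joint estimate can be sketched as follows. This is a minimal CPU illustration, not the authors' GPU-accelerated implementation: the quadratic objective stands in for the actual depth-discrepancy measure, and the joint position and box radius are hypothetical values chosen for the example.

```python
import random

def pso_minimize(objective, lower, upper, n_particles=30, n_generations=40,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize `objective` over the box [lower, upper] with plain PSO.

    Restricting [lower, upper] to a small region around the Kinect
    skeleton joint shrinks the search space, so fewer generations are
    needed for comparable accuracy (the efficiency gain the abstract
    describes).
    """
    rng = random.Random(seed)
    dim = len(lower)
    # Initialize particles uniformly inside the bounded search region.
    pos = [[rng.uniform(lower[d], upper[d]) for d in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for _ in range(n_generations):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Clamp each particle to the joint-derived bounds.
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lower[d]), upper[d])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in for the depth-discrepancy objective: squared distance
# between a hypothesized hand position and the "observed" one.
observed = (0.1, -0.2, 0.8)
def discrepancy(p):
    return sum((a - b) ** 2 for a, b in zip(p, observed))

# Hypothetical hand-joint estimate from the Kinect skeleton; the search
# is confined to a 10 cm-radius box around it.
joint = (0.0, -0.25, 0.85)
r = 0.1
lower = [j - r for j in joint]
upper = [j + r for j in joint]
best, best_val = pso_minimize(discrepancy, lower, upper)
```

In the real system the objective is evaluated per particle against the observed depth map, which is the step offloaded to the GPGPU; the bounding box would be recomputed each frame from the latest skeleton joint.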