SPIE publishes accepted journal articles as soon as they are approved for publication. Journal issues are considered In Progress until all articles for an issue have been published. Articles published ahead of the completed issue are fully citable.
An in-socket sensory system enables the monitoring of transfemoral amputee movement for a microprocessor-controlled prosthetic leg. Recognizing user movement from an in-socket sensor allows a powered prosthetic leg to actively mimic healthy ambulation, thereby reducing the amputee’s metabolic energy consumption. This study established an adaptive neuro-fuzzy inference system (ANFIS)-based control input framework that classifies gait phases from in-socket sensor array signals to infer user intention. Each recognized gait phase was mapped to the cadence and torque control outputs of a knee joint actuator. The control input framework was validated with 30 experimental gait samples of the in-socket sensory signal of a transfemoral amputee walking at speeds varying from 0 to 2 km·h⁻¹. A physical simulation of the controller reproduced the actuated knee joint mechanism realistically, achieving 95% to 99% accuracy in knee cadence and 80% to 90% accuracy in torque compared with normal gait. The ANFIS successfully detected the seven gait phases from the amputee’s in-socket sensor signals and assigned accurate knee joint torque and cadence values as output.
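The pipeline described above — fuzzy membership over a sensor feature, phase selection, then a phase-to-actuator mapping — can be illustrated with a minimal sketch. Note this is not the study's ANFIS: the Gaussian membership parameters, the normalized sensor feature, the phase labels, and the phase-to-(cadence, torque) table are all illustrative assumptions.

```python
import math

# Seven gait phases, as in the study; the ordering here is an assumption.
PHASES = ["loading", "mid-stance", "terminal-stance", "pre-swing",
          "initial-swing", "mid-swing", "terminal-swing"]

# One Gaussian membership function per phase over a normalized in-socket
# sensor feature in [0, 1]; centers and width are hypothetical.
CENTERS = [i / 6 for i in range(7)]
SIGMA = 0.09

def classify_phase(x):
    """Return the gait phase whose membership function fires strongest at x."""
    weights = [math.exp(-((x - c) ** 2) / (2 * SIGMA ** 2)) for c in CENTERS]
    return PHASES[max(range(7), key=lambda i: weights[i])]

# Hypothetical control outputs per phase: (cadence in steps/min, torque in N*m).
CONTROL_TABLE = {
    "loading": (0, 35.0), "mid-stance": (0, 20.0), "terminal-stance": (0, 15.0),
    "pre-swing": (95, 10.0), "initial-swing": (100, 5.0),
    "mid-swing": (105, 2.0), "terminal-swing": (100, 8.0),
}

def control_output(x):
    """Map a normalized sensor feature to a (phase, cadence, torque) command."""
    phase = classify_phase(x)
    cadence, torque = CONTROL_TABLE[phase]
    return phase, cadence, torque
```

A full ANFIS would additionally tune the membership parameters and rule consequents from the 30 training gait samples; the sketch only shows the inference-time mapping from sensor feature to actuator command.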
A belly reconstruction and measurement scheme is conducted on a shape-flexible mannequin using the digital image correlation technique, based on an integer-to-subpixel image matching process. First, a compound feature combining Gaussian combined moments with parameters extracted from the gray-level co-occurrence matrix is proposed to match integer-pixel locations between the reference and deformed target images. Second, a mutual-learning cooperative particle swarm optimization algorithm is employed to locate the subpixel position precisely. Each subpopulation performs its own optimization independently, while the subpopulations exchange information and share knowledge for cooperative evolution, enhancing global search capability. In addition, the optimal information from the previous interest point is fully exploited when initializing the particle positions for the next interest point, which effectively accelerates convergence. Experimental results indicate that, without any loss of measurement accuracy, the runtime of this scheme is significantly lower than that of the conventional method, particularly for a large number of interest points.
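The subpixel-localization step can be sketched with a plain particle swarm optimizer. This is a simplification, not the paper's mutual-learning cooperative PSO: there is a single swarm rather than communicating subpopulations, and the image-matching correlation criterion is replaced by an assumed quadratic cost whose subpixel optimum sits at (0.3, −0.2) relative to the integer-pixel match.

```python
import random

def criterion(dx, dy):
    # Surrogate for the image-matching cost around the integer-pixel location;
    # the assumed subpixel optimum at (0.3, -0.2) is illustrative only.
    return (dx - 0.3) ** 2 + (dy + 0.2) ** 2

def pso(n_particles=20, n_iters=60, seed=0):
    """Minimize criterion() over subpixel offsets in [-1, 1] x [-1, 1]."""
    rng = random.Random(seed)
    pos = [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(n_particles)]
    vel = [(0.0, 0.0)] * n_particles
    pbest = list(pos)                                  # personal bests
    gbest = min(pos, key=lambda p: criterion(*p))      # global best
    w, c1, c2 = 0.6, 1.5, 1.5                          # common PSO constants
    for _ in range(n_iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vx = (w * vel[i][0] + c1 * r1 * (pbest[i][0] - pos[i][0])
                  + c2 * r2 * (gbest[0] - pos[i][0]))
            vy = (w * vel[i][1] + c1 * r1 * (pbest[i][1] - pos[i][1])
                  + c2 * r2 * (gbest[1] - pos[i][1]))
            vel[i] = (vx, vy)
            pos[i] = (pos[i][0] + vx, pos[i][1] + vy)
            if criterion(*pos[i]) < criterion(*pbest[i]):
                pbest[i] = pos[i]
            if criterion(*pos[i]) < criterion(*gbest):
                gbest = pos[i]
    return gbest
```

In the paper's scheme, the swarm for each new interest point would also be seeded from the previous interest point's optimum rather than drawn uniformly at random, which is where the reported convergence speedup comes from.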
Video-based human action recognition is a challenging task in computer vision. In recent years, the convolutional neural network (CNN) and its extended versions have shown promising results for video action recognition. However, most existing methods cannot deal with global motion information effectively, especially the long-term motion that is crucial for representing complex non-periodic actions. To address this issue, a stacked trajectory energy image (STEI) is proposed by extracting trajectories from motion saliency regions and stacking them onto one grayscale image. This results in an STEI with a discriminative texture feature that effectively characterizes global motion across multiple consecutive frames. Then, a three-stream CNN framework is proposed to simultaneously capture spatial, temporal, and global motion information of the action from RGB frames, optical flow, and the STEI. Moreover, a trajectory-aware convolution strategy is introduced that incorporates local and long-term motion information so as to learn motion features directly and effectively from three complementary action-related regions. Finally, the learned features are aggregated and classified by a linear support vector machine. Experimental results on two challenging datasets (HMDB51 and UCF101) demonstrate that our approach statistically outperforms a number of state-of-the-art methods.
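The STEI construction — stacking trajectory points from many consecutive frames onto one grayscale canvas so that regions traversed by long-term motion appear brighter — can be sketched as follows. The trajectories here are synthetic stand-ins for points actually tracked from motion-salient regions, and the 8×8 canvas size is an assumption for illustration.

```python
H, W = 8, 8  # assumed canvas size

def build_stei(trajectories, h=H, w=W):
    """Accumulate trajectory points onto a canvas, normalized to 0-255 grayscale."""
    canvas = [[0] * w for _ in range(h)]
    for traj in trajectories:            # one trajectory = list of (row, col)
        for r, c in traj:
            if 0 <= r < h and 0 <= c < w:
                canvas[r][c] += 1        # each visit adds "energy"
    peak = max(max(row) for row in canvas) or 1
    return [[round(255 * v / peak) for v in row] for row in canvas]

# Two synthetic trajectories crossing at (3, 3): the crossing pixel accumulates
# the most energy and becomes the brightest grayscale value.
trajs = [[(3, c) for c in range(8)], [(r, 3) for r in range(8)]]
stei = build_stei(trajs)
```

The resulting single-channel image is what the third stream of the CNN would consume alongside the RGB frames and optical flow.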