Paper
28 May 2003
Trajectory recognition using state transition learning
Tadashi Ae, Keiichi Sakai, Keiji Otaka, Nguyen Duy Thien Chuong, Yuuki Obara
Proceedings Volume 5014, Image Processing: Algorithms and Systems II; (2003) https://doi.org/10.1117/12.477724
Event: Electronic Imaging 2003, 2003, Santa Clara, CA, United States
Abstract
The system receives a pattern sequence, i.e., a time series of consecutive patterns, as its input. A set of input sequences is given as a training set, where a category is attached to each input sequence, and supervised learning is introduced. First, we introduce a state transition model, AST (Abstract State Transition), in which information on the speed of moving objects is added to a state transition model. Next, we extend it to a model that includes reinforcement learning, because this is more powerful for learning a sequence from the start to the goal. Last, we extend it to a model whose states include a kind of pushdown tape that represents behavioral knowledge, which we call the Pushdown Markov Model. The learning procedure is similar to learning in an MDP (Markov Decision Process) and uses DP (Dynamic Programming) matching. As a result, we show a reasonable learning-based recognition of trajectories of human behavior.
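As an illustration of the DP-matching step only, the following minimal sketch (not the authors' AST or Pushdown Markov Model; the state encoding, category names, and function names are assumptions) aligns an observed trajectory, quantized into discrete states, against one reference state sequence per category and assigns the category with the lowest alignment cost.

    # Minimal DP-matching sketch for trajectory classification (Python).
    # All names and the state encoding are illustrative assumptions.

    def dp_match(observed, reference, sub_cost=1, gap_cost=1):
        """Edit-distance-style DP matching between two state sequences."""
        n, m = len(observed), len(reference)
        # dp[i][j] = minimal cost of aligning observed[:i] with reference[:j]
        dp = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            dp[i][0] = i * gap_cost
        for j in range(1, m + 1):
            dp[0][j] = j * gap_cost
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                match = dp[i - 1][j - 1] + (0 if observed[i - 1] == reference[j - 1] else sub_cost)
                dp[i][j] = min(match, dp[i - 1][j] + gap_cost, dp[i][j - 1] + gap_cost)
        return dp[n][m]

    def classify(observed, references):
        """Return the category whose reference sequence gives the lowest DP cost."""
        return min(references, key=lambda cat: dp_match(observed, references[cat]))

    if __name__ == "__main__":
        # Hypothetical quantized states, e.g. grid cells visited by a moving object.
        references = {
            "approach_door": ["A0", "B0", "B1", "C1"],
            "leave_room":    ["C1", "B1", "B0", "A0"],
        }
        observed = ["A0", "B0", "B0", "B1", "C1"]
        print(classify(observed, references))  # -> "approach_door"

In the paper's setting, the speed information carried by the AST model and the pushdown tape of the Pushdown Markov Model would enrich this state representation; the sketch shows only the generic DP alignment used for category assignment.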
© 2003 Society of Photo-Optical Instrumentation Engineers (SPIE).
Tadashi Ae, Keiichi Sakai, Keiji Otaka, Nguyen Duy Thien Chuong, and Yuuki Obara "Trajectory recognition using state transition learning", Proc. SPIE 5014, Image Processing: Algorithms and Systems II, (28 May 2003); https://doi.org/10.1117/12.477724
KEYWORDS
Scanning tunneling microscopy, Vector spaces, Clocks, Computer programming, Evolutionary algorithms, Machine learning, Neural networks