Automatic recognition of human activities and behaviors is still a challenging problem for many reasons, including the limited accuracy of the data acquired by sensing devices, the high variability of human behaviors, and the gap between visual appearance and scene semantics. Symbolic approaches can significantly simplify the analysis and turn raw data into chains of meaningful patterns, removing most of the clutter produced by low-level processing operations, embedding significant contextual information into the data, and enabling simple syntactic approaches to match incoming sequences against models. We propose a symbolic approach to learn and detect complex activities through sequences of atomic actions. Compared to previous methods based on context-free grammars, we introduce several important novelties: the capability to learn actions from both positive and negative samples, the possibility of efficiently retraining the system in the presence of misclassified or unrecognized events, and a parsing procedure that correctly detects activities even when they are concatenated or nested within one another. An experimental validation on three datasets with different characteristics demonstrates the robustness of the approach in classifying complex human behaviors.
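The core idea of matching a sequence of atomic actions against an activity model with a context-free grammar can be illustrated with a minimal sketch. The grammar, symbols, and action names below are hypothetical toy examples (not from the paper), and the recognizer is a standard CYK algorithm over a grammar in Chomsky normal form:

```python
# Toy activity grammar in Chomsky normal form (hypothetical symbols):
# the complex activity "MakeTea" is the atomic sequence boil, pour, drink.
# Each nonterminal maps to pairs of nonterminals or to a terminal action.
GRAMMAR = {
    "MakeTea": [("Boil", "Serve")],
    "Serve":   [("Pour", "Drink")],
    "Boil":    ["boil"],
    "Pour":    ["pour"],
    "Drink":   ["drink"],
}

def cyk_recognize(tokens, grammar, start):
    """Return True if the token sequence derives from `start` (CYK)."""
    n = len(tokens)
    # table[i][l-1] = set of nonterminals deriving tokens[i:i+l]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, tok in enumerate(tokens):
        for lhs, rules in grammar.items():
            if tok in rules:
                table[i][0].add(lhs)
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            for split in range(1, length):
                for lhs, rules in grammar.items():
                    for rule in rules:
                        if isinstance(rule, tuple):
                            b, c = rule
                            if b in table[i][split - 1] and \
                               c in table[i + split][length - split - 1]:
                                table[i][length - 1].add(lhs)
    return start in table[0][n - 1]

print(cyk_recognize(["boil", "pour", "drink"], GRAMMAR, "MakeTea"))  # True
print(cyk_recognize(["pour", "boil", "drink"], GRAMMAR, "MakeTea"))  # False
```

Because CYK fills the table for every subsequence, the same machinery can be extended to report activities found inside longer streams, which is the setting the parsing procedure above addresses for concatenated and nested activities.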
Human behavior understanding has attracted the attention of researchers in various fields in recent years. Recognizing behaviors with sufficient accuracy from sensor data is still an unsolved problem for many reasons, including the low accuracy of the data, the variability of human behaviors, and the gap between low-level sensor data and high-level scene semantics. In this context, an application attracting the interest of both public and industrial entities is enabling elderly or physically impaired people to lead a normal life at home. Ambient intelligence (AmI) technologies, intended as the capability to automatically detect and react to the status of the environment and of the persons in it, are probably the major enabling factor for achieving such an ambitious objective. AmI requires suitable networks of sensors and actuators, as well as adequate processing and communication technologies. In this paper we propose a solution based on context-free grammars for human behavior understanding, with an application to assisted living. First, the grammars of the different actions performed by a person in his/her daily life are discovered. Then, a long-term analysis of the behavior is used to generate a control grammar that accounts for the context in which an action is performed and adds semantics. The proposed framework is tested on a dataset acquired in a real environment and compared with state-of-the-art methods already available for the problem considered.
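The role of the control layer, adding context on top of recognized actions, can be sketched as follows. The activities, contexts, and rules are hypothetical illustrations (not the paper's actual control grammar): each recognized activity is checked against the contexts in which it is expected, and out-of-context occurrences are flagged.

```python
# Hypothetical control rules: each recognized activity is expected only
# in certain contexts (here, times of day). Anything else is flagged.
CONTROL = {
    "sleep": {"night"},
    "cook":  {"morning", "evening"},
    "eat":   {"morning", "noon", "evening"},
}

def check_context(timeline):
    """Return the (activity, context) pairs violating the control rules."""
    return [(act, ctx) for act, ctx in timeline
            if act in CONTROL and ctx not in CONTROL[act]]

day = [("cook", "morning"), ("eat", "morning"), ("sleep", "noon")]
print(check_context(day))  # [('sleep', 'noon')]
```

In the paper's framework this contextual knowledge is expressed as a grammar learned from long-term observation rather than a hand-written table; the sketch only shows the kind of semantic filtering such a layer performs.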
Automatic video analysis and understanding has become a research topic of high interest, with applications to video browsing, content-based video indexing, and visual surveillance. However, automating this process is still a challenging task, due to the clutter produced by low-level processing operations. This common problem can be mitigated by embedding significant contextual information into the data and by using simple syntactic approaches to match actual sequences against models. In this context we propose a novel framework that employs a symbolic representation of complex activities as sequences of atomic actions, based on a weighted context-free grammar.
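A weighted context-free grammar attaches a weight to each production, so a parse yields a score rather than a yes/no answer. The grammar, symbols, and weights below are hypothetical toy values (not from the paper), and the recognizer is a standard weighted CYK computing the maximum product of rule weights:

```python
# Hypothetical weighted grammar in Chomsky normal form: each rule carries
# a weight; the sequence score is the max product of rule weights along a
# derivation, so competing interpretations can be ranked.
WCFG = {
    "Activity": [(("Enter", "Rest"), 0.6), (("Enter", "Leave"), 0.4)],
    "Rest":     [(("Sit", "Leave"), 1.0)],
    "Enter":    [("enter", 1.0)],
    "Sit":      [("sit", 1.0)],
    "Leave":    [("leave", 1.0)],
}

def weighted_cyk(tokens, grammar, start):
    """Max-weight derivation score of `tokens` from `start`, or 0.0."""
    n = len(tokens)
    # best[i][l-1] maps nonterminal -> best score over tokens[i:i+l]
    best = [[dict() for _ in range(n)] for _ in range(n)]
    for i, tok in enumerate(tokens):
        for lhs, rules in grammar.items():
            for rhs, w in rules:
                if rhs == tok:
                    best[i][0][lhs] = max(best[i][0].get(lhs, 0.0), w)
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            cell = best[i][length - 1]
            for split in range(1, length):
                left = best[i][split - 1]
                right = best[i + split][length - split - 1]
                for lhs, rules in grammar.items():
                    for rhs, w in rules:
                        if isinstance(rhs, tuple) and \
                           rhs[0] in left and rhs[1] in right:
                            score = w * left[rhs[0]] * right[rhs[1]]
                            if score > cell.get(lhs, 0.0):
                                cell[lhs] = score
    return best[0][n - 1].get(start, 0.0)

print(weighted_cyk(["enter", "sit", "leave"], WCFG, "Activity"))  # 0.6
print(weighted_cyk(["enter", "leave"], WCFG, "Activity"))         # 0.4
```

The weights let the parser prefer the most likely interpretation of an ambiguous action sequence, which is what motivates the weighted (rather than plain) context-free grammar in the proposed framework.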