Past approaches to the automatic recognition of human activities have achieved promising results by sensing patterns of physical motion via wireless accelerometers worn on the body and classifying them with supervised or semi-supervised machine learning algorithms. Despite their relative success, once they move beyond demonstrators these approaches are limited by several problems. For instance, they do not adapt to changes caused by the addition of new activities or by variations in the environment; they do not accommodate the high variability that arises from differences in how activities are performed across users; and they do not scale to a large number of users or activities. Solving these fundamental problems is critical for systems intended to be deployed in natural settings, particularly those that require long-term deployment at a large scale.
This talk discusses these problems and presents an activity recognition framework based on an incremental learning paradigm. The proposed framework allows new activities – or additional examples of existing activities – to be learned incrementally without retraining the entire model; it effectively handles within-user variations; and it is able to transfer knowledge among activities and users. The talk also presents a functional system based on this framework, designed and implemented across a variety of application scenarios, from a social exergame for children to long-term data collection of physical activities in free-living settings. Lessons learned from these practical implementations are summarized and discussed.
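To make the incremental-learning idea concrete, the following is a minimal sketch – not the talk's actual framework – of a classifier that can absorb new activities, or new examples of known activities, without retraining from scratch. It keeps one running-mean centroid per activity; the class and method names are hypothetical choices for illustration.

```python
import numpy as np

class IncrementalCentroidClassifier:
    """Toy incremental activity classifier (illustrative only).
    Each activity is represented by a running mean of its feature
    vectors, so new activities or new examples update the model
    in place instead of triggering full retraining."""

    def __init__(self):
        self.centroids = {}  # activity label -> mean feature vector
        self.counts = {}     # activity label -> number of examples seen

    def partial_fit(self, x, label):
        x = np.asarray(x, dtype=float)
        if label not in self.centroids:
            # Previously unseen activity: simply add a new centroid.
            self.centroids[label] = x.copy()
            self.counts[label] = 1
        else:
            # Known activity: incremental update of the running mean.
            self.counts[label] += 1
            self.centroids[label] += (x - self.centroids[label]) / self.counts[label]

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        # Assign to the nearest centroid (Euclidean distance).
        return min(self.centroids,
                   key=lambda c: np.linalg.norm(x - self.centroids[c]))

# Usage: learn "walking" first, then add "running" later
# without touching the existing model.
clf = IncrementalCentroidClassifier()
clf.partial_fit([0.1, 0.2], "walking")
clf.partial_fit([0.2, 0.1], "walking")
clf.partial_fit([2.0, 2.1], "running")  # new activity added incrementally
print(clf.predict([0.15, 0.15]))        # -> "walking"
```

A nearest-centroid model is chosen here only because its update rule is a one-line running mean; the same add-without-retraining pattern applies to more capable incremental learners.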