Embedded computing devices that interact with humans and the real world hold great promise for making our lives more comfortable and convenient – perhaps allowing independence longer and later in life, or helping us better understand changes in our natural environment.
The biggest difficulty in taking advantage of these computers is that they demand too much assistance from us: configuring them, adapting them to new and dynamic requirements, and teaching them our intent. The ubiquity of computers only makes the situation worse—telling all the little computers what to do can easily be harder than simply doing the task yourself.
We claim that the only way to make ubiquitous computing practical is to enable the computers to figure out what to do on their own. Observe and learn, or perish.
This paper proposes a framework for automatic configuration and adaptation using learning and prediction based on observed context histories. A software architecture for describing, recording, analyzing, and reacting to physical or computational variables is substantiated with a case study that self-tunes distributed real-time tasks in an entertainment scenario. The measured results are generalized, using stochastic and physical models, to apply to a broad class of problems whose solution would allow ubiquitous computing to become a reality.
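The observe–learn–predict loop over context histories can be sketched minimally. This is an illustrative assumption, not the paper's implementation: contexts are modeled as discrete labels, and the predictor is a simple order-1 Markov model that suggests the most frequent successor of the current context, which a configuration layer could then act on.

```python
from collections import defaultdict

class ContextPredictor:
    """Hypothetical sketch: record a context history and predict the
    most likely next context from order-1 transition counts."""

    def __init__(self):
        self.history = []  # recorded context history, most recent last
        # transition counts: current context -> {successor: count}
        self.successors = defaultdict(lambda: defaultdict(int))

    def observe(self, context):
        """Record an observation and update transition counts."""
        if self.history:
            self.successors[self.history[-1]][context] += 1
        self.history.append(context)

    def predict(self):
        """Return the most frequent successor of the current context,
        or None if no history or no known successor exists."""
        if not self.history:
            return None
        counts = self.successors.get(self.history[-1])
        if not counts:
            return None
        return max(counts, key=counts.get)

p = ContextPredictor()
for ctx in ["home", "commute", "office", "commute", "home",
            "commute", "office"]:
    p.observe(ctx)
print(p.predict())  # prints "commute"
```

A real system would replace the discrete labels with the recorded physical or computational variables of the architecture and use richer stochastic models, but the record–learn–predict structure stays the same.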