Over the last few years, we have seen significant progress with machine learning techniques on various perceptual and control problems. At the same time, building end-to-end, physically situated AI systems that leverage multiple technologies and act autonomously or interact with people in the open world remains a challenging, error-prone and time-consuming engineering task. Numerous challenges stem from the sheer complexity of these systems and are amplified by the lack of appropriate tools for data-driven development, debugging and visualization.
The Platform for Situated Intelligence project focuses on constructing an open, extensible platform that enables the fast development of multimodal, integrative-AI systems. The platform consists of three layers. First, the runtime provides a parallel programming model centered around temporal streams of data, and enables easy development of components and applications using .NET, while retaining the performance properties of natively written, carefully tuned systems. Second, a set of tools supports multimodal data visualization, annotation, analytics, tuning and machine learning scenarios. Finally, an open ecosystem of components encapsulates various AI technologies and allows for quick composition of integrative-AI applications.
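To give a flavor of the stream-based programming model, the sketch below shows a minimal pipeline in C#. It is an illustrative example rather than a definitive reference: it assumes the `Microsoft.Psi` NuGet package and uses the `Pipeline`, `Generators`, and stream-operator APIs as described in the project's documentation.

```csharp
using System;
using Microsoft.Psi;

class MinimalPipeline
{
    static void Main()
    {
        // Create the pipeline that hosts and schedules all components.
        using (var pipeline = Pipeline.Create())
        {
            // Generate a timed stream of integers: 0, 1, 2, ... at 100 ms intervals.
            var sequence = Generators.Sequence(pipeline, 0, x => x + 1, 10, TimeSpan.FromMilliseconds(100));

            // Transform the stream with Select and consume it with Do;
            // each message carries an originating time envelope.
            sequence
                .Select(x => x * x)
                .Do((x, envelope) => Console.WriteLine($"{envelope.OriginatingTime}: {x}"));

            // Run the pipeline to completion.
            pipeline.Run();
        }
    }
}
```

Streams of different modalities (audio, video, sensor data) compose in the same way, with the runtime handling scheduling, buffering, and time synchronization across components.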
We have recently released an initial beta version of the project on GitHub. The documentation pages can be found here. More information about the project can also be found in this blog post.