Project Gesture



Getting Started with Project Gesture

Project Gesture is an easy-to-use SDK that creates more intuitive and natural experiences by allowing users to control and interact with technologies through hand gestures. Based on extensive research, it equips developers and UX designers with the ability to quickly design and implement customized hand gestures in their apps.

The SDK enables you to define your desired hand poses using simple constraints written in plain language. Once a gesture is defined and registered in your code, you receive a notification whenever your user performs the gesture, and you can assign an action to run in response.

Using Project Gesture, you can enable your users to intuitively control videos, bookmark webpages, play music, send emojis, or summon a digital assistant. You can even make everyday productivity and communication programs easier to use.

Using Project Gesture requires an Intel RealSense SR300 camera or a Microsoft Kinect v2 camera. For more info and updates on Project Gesture, follow us on Twitter at @ProjectPrague.
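The define-register-notify workflow described above can be sketched as follows. Note that this is an illustrative model only: the actual Project Gesture SDK is a .NET library, and every class and method name below is a hypothetical stand-in, not the real API.

```python
# Hypothetical sketch of the Project Gesture workflow: declare a hand pose
# with simple plain-language constraints, register the gesture with a
# detection service, and get notified when the user performs it.
# All names here are illustrative, not the actual SDK API.

class Gesture:
    def __init__(self, name, constraints):
        self.name = name
        self.constraints = constraints  # plain-language pose constraints
        self._handlers = []

    def on_triggered(self, handler):
        # Subscribe a callback to run when the gesture is detected.
        self._handlers.append(handler)

    def trigger(self):
        for handler in self._handlers:
            handler(self.name)


class GestureService:
    """Stand-in for the camera-driven gesture detection service."""

    def __init__(self):
        self._gestures = {}

    def register(self, gesture):
        self._gestures[gesture.name] = gesture

    def detect(self, name):
        # In the real system, detection is driven by the depth camera;
        # here we simulate it by name.
        if name in self._gestures:
            self._gestures[name].trigger()


# Define a 'rotate' gesture from simple constraints, then assign an action.
rotate = Gesture("Rotate", ["index finger extended",
                            "hand rotates clockwise"])
rotate.on_triggered(lambda name: print(f"{name} detected: rotating image"))

service = GestureService()
service.register(rotate)
service.detect("Rotate")  # prints "Rotate detected: rotating image"
```

The key design point this models is that the gesture is declared once, declaratively, and your app only reacts to high-level trigger events rather than processing raw camera frames itself.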

Learn about the research

Concept Video – The Producer

In this concept video, Magen, an event producer, is getting ready for an art exhibition. She naturally uses gestures to communicate with her workers and now, with Project Gesture, also with her laptop.

The Producer – Behind the Scenes

In this behind-the-scenes clip, see how Project Gesture's detection service analyzes the gestures Magen makes in the concept video above.

Project Gesture – Demos

This video clip shows the wide range of scenarios that Project Gesture's detection capabilities can help you light up: from productivity apps to games to video overlays, with many more to come.

Gesture Creation and Recognition

This graphic displays many views of Project Gesture in action. Clockwise from the top, you can see:

- a code snippet where the developer defines the 'rotate' gesture;
- the gesture builder tool, where the developer defines the rotate gesture without writing any code;
- the control panel, where the developer can view which gestures are being registered by the camera in real time;
- the detection service, which observes the user making the rotate gesture, verifies the user's intent, and triggers the associated 'rotate' action;
- and finally the end user applying the 'rotate' action to rotate an image in PowerPoint.
