{"id":199957,"date":"2014-03-18T15:11:53","date_gmt":"2014-03-18T15:11:53","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/events\/silicon-valley-techfair-2014\/"},"modified":"2025-08-06T12:01:57","modified_gmt":"2025-08-06T19:01:57","slug":"silicon-valley-techfair-2014","status":"publish","type":"msr-event","link":"https:\/\/www.microsoft.com\/en-us\/research\/event\/silicon-valley-techfair-2014\/","title":{"rendered":"Silicon Valley TechFair 2014"},"content":{"rendered":"\n\n<p><strong>Venue:<\/strong> Microsoft Silicon Valley<br \/>\n1065 La Avenida, Building 1<br \/>\nMountain View, CA 94043<\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/get-to-know-microsoft-research\/\">Get to know Microsoft Research<\/a><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-203635\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/02\/en-us-events-2014-silicon-valley-techfair-sv-techfair-2014-banner.jpg\" alt=\"sv-techfair-2014-banner.jpg\" width=\"829\" height=\"158\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/02\/en-us-events-2014-silicon-valley-techfair-sv-techfair-2014-banner.jpg 829w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/02\/en-us-events-2014-silicon-valley-techfair-sv-techfair-2014-banner-300x57.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/02\/en-us-events-2014-silicon-valley-techfair-sv-techfair-2014-banner-768x146.jpg 768w\" sizes=\"auto, (max-width: 829px) 100vw, 829px\" \/><\/p>\n<p>From pushing the boundaries of computing beyond the screen, to helping make sense of large scale data sets for scientific discoveries, the development of new ideas and technologies is deeply woven into our DNA.<\/p>\n<p>At the Silicon Valley TechFair 2014 we will share work 
that spans from the use of big data to build local models that enable hyperlocal neighborhood interactions, to scientific models intended to predict how changes in the environment will impact our world. Learn more about the research underpinnings behind Cortana, and explore how we see new user interfaces expanding beyond the screen.<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<h2>Researchers presenting work<\/h2>\n<p>Here are a few of the researchers who will be presenting their latest research:<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" 
data-wp-context='{\"id\":\"accordion-content-6296\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-6296\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-6295\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tLarry Heck\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-6295\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-6296\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Larry Heck is a Distinguished Engineer at Microsoft Research. His research area is natural conversational interaction, focusing on open-domain NLP and dialog, machine learning, multimodal NUI, and inference\/reasoning under uncertainty.<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-6298\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-6298\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-6297\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tCurtis Wong\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-6297\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-6298\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Curtis Wong, Principal Researcher at Microsoft Research, is responsible for basic and applied research in media and 
interaction. He has been granted more than 45 patents in areas such as data visualization, user interaction, interactive television, media browsing and automated cinematography, and is the primary inventor of the WorldWide Telescope. Recently, Curtis has led the effort to enable interactive spatiotemporal data visualization as a broad capability for everyone to gain insight into the growing tide of data that is being generated from devices and services.<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-6300\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-6300\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-6299\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tIvan Tashev\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-6299\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-6300\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Ivan Tashev is a Principal Software Architect at Microsoft Research. His research focuses on multichannel audio signal processing, algorithms for arrays of transducers, processing of signals for enhancement, de-noising, de-reverberation, and statistical processing of audio, biological, and radio signals. 
Ivan was responsible for the audio pipeline architecture and DSP algorithms in Xbox Kinect and Kinect for Windows, as well as audio enhancements to Xbox One.<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-6302\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-6302\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-6301\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tKati London\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-6301\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-6302\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>For her work in both real world games and the early Internet of Things, Kati was named one of the &#8220;Top 35 Innovators Under 35&#8221; by MIT&#8217;s Technology Review Magazine (2010), &#8220;Top 100 Most Creative People in Business&#8221; by Fast Company Magazine (2011), and awarded the World Technology Network award in Entertainment (2011). She teaches the graduate course &#8220;Persuasive Technology: Designing the Human&#8221; at NYU&#8217;s ITP, and frequently speaks on online and offline engagement, economies, games, and sensors.<\/p>\n<p>Her work has been covered by Businessweek, the New York Times, Wired, National Geographic, and Glamour Magazine, among others. She has worked with clients including the John S. and James L. 
Knight Foundation, Foursquare, the United Kingdom&#8217;s Department for Transport, the BBC, Channel 4, the Carnegie Institute, Disney Imagineering, Nike, Discovery Channel, CBS, MTV, and the Peter G. Peterson Foundation. Her work is represented in the permanent collection of MoMA and has been exhibited at the Design Museum of London and Museum of Science & Industry.<\/p>\n<p>Kati is currently a Senior Researcher in the FUSE (Future User Social Experiences) group at Microsoft Research. Previously, she was Director of Product for Zynga New York and Vice President and Senior Producer at Area\/Code (acquired by Zynga). In 2012 she became Innovator-in-Residence at USC&#8217;s Annenberg School, where she led workshops in Design Patterns for Autonomous Objects.<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<h2>Projects<\/h2>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse 
all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-6304\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-6304\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-6303\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\t3-D Audio for Telepresence and Virtual Reality\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-6303\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-6304\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>This research project features two technologies:<\/p>\n<ul>\n<li>Rendered personalized head-related transfer functions (HRTFs), synthesized using anthropometric data tailored to an individual user\u2019s audio input.<\/li>\n<li>Creation of an immersive audio experience using headphones and person\/head tracking through rendered 3-D audio.<\/li>\n<\/ul>\n<p>The project generates personalized HRTFs by scanning a person using a Kinect for Windows device, then using a headset to identify a predefined area. It enables the user to interact with a virtual set of physical objects, such as an AM radio, a manikin, a phone, or a television, that start to play music, speak, and ring. 
The user can move freely, rotate her head, and approach each individual sound source within a virtual experience.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-6306\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-6306\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-6305\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tCortana's Research-Based Foundation\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-6305\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-6306\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Cortana, the world&#8217;s first truly personal digital assistant, is available soon on Windows Phone 8.1. Powered by Bing, Cortana is driven by state-of-the-art algorithms using natural language, machine learning, and contextual signals that benefit from advances incubated at Microsoft Research. This approach uses the massive clickstream-feedback loop of web search, along with additional semantics and data. 
This approach enables Cortana to grow in breadth of domains that cover the web: from the more common \u201chead\u201d queries and conversations to the less frequent &#8220;tail&#8221;, all at a personal level, empowering Cortana to provide robust personal assistance that becomes even more advanced over time.<\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/beyond-voice-recognition-windows-phones-cortana-anticipates-your-needs\/\">Video<\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-6308\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-6308\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-6307\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tFilm, Identify, Track, Tag, Sense, Fly\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-6307\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-6308\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>These Technology for Nature projects aim to expand dramatically the amount and kind of data we can gather from the natural world. Zootracer uses vision and machine learning to track arbitrary objects from video. Designed to assist environmental scientists but suitable for general use, Zootracer is complemented by Mataki, an unprecedentedly cheap, light (seven grams), and reprogrammable GPS tracking and sensing device. Uniquely, Mataki supports peer-to-peer data sharing and, hence, data retrieval across entire collections of device-monitored animals. 
The research also uses an unmanned aerial drone with an onboard camera to follow coordinates broadcast by a Mataki device attached to an animal.<\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/film-identify-track-tag-sense-fly\/\">Video<\/a>\u00a0| <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/project\/zootracer\/\">Project page<\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-6310\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-6310\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-6309\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tFloating Display\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-6309\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-6310\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Many people are working on near-field user-interface devices that detect hover, gesture, and pose. But there is nothing in the space in front of a device to show the user what to do. 
This project introduces a floating display that hovers over the device, providing visual cues for gestural interactions.<\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/floating-display-visual-cues-for-gestural-interactions\/\">Video<\/a>\u00a0| <a href=\"http:\/\/www.microsoft.com\/appliedsciences\/content\/projects\/FloatingDisplay.aspx\">Project page<\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-6312\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-6312\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-6311\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tHereHere NYC\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-6311\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-6312\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>HereHere NYC is a research project that enables neighborhoods to generate opinions based on public data. The project summarizes how your neighborhood, or other New York City neighborhoods of interest, are doing via a daily email digest, neighborhood-specific Twitter feeds, and status updates on a map. 
The goals are to:<\/p>\n<ul>\n<li>Create compelling stories with data to engage larger communities.<\/li>\n<li>Invent light daily rituals for connecting to the hyperlocal.<\/li>\n<li>Use characterization as a tool to drive data engagement.<\/li>\n<\/ul>\n<p>HereHere uses Project Sentient Data, an early-stage project to explore how interactions can be improved by understanding ecosystems of data in terms of characterization, personalities, and relationships. Sentient Data provides a server and a representational-state-transfer API that enables developers to assign personalities and translate data sets into their relative emotion states.<\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/herehere-nyc-provides-insight-into-how-your-neighborhood-is-doing\/\">Video<\/a> | <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/herehere.co\/about\">Project page<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-6314\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-6314\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-6313\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tHolograph\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-6313\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-6314\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Holograph is an interactive, 3-D data-visualization research platform 
that can render static and dynamic data above or below the plane of the display using a variety of 3-D stereographic techniques. The platform enables rapid exploration, selection, and manipulation of complex, multidimensional data to create and refine natural user-interaction techniques and technologies, with the goal of empowering everyone to understand the growing tide of large, complex data sets.<\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/holograph-3-d-spatiotemporal-interactive-data-visualization\/\">Video<\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-6316\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-6316\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-6315\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tImmersive, Collaborative Data Visualization\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-6315\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-6316\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>This project uses head-mounted displays, Kinect skeletal tracking, a custom hardware controller, and a large display for the public to watch vicariously. Individuals enter a virtual-reality environment created using the WorldWide Telescope and navigate through a virtual 3-D universe in orbit with the International Space Station, or inside a brain cell. 
The system delivers the capability for an external audience to watch avatars of the individuals exploring the environment and monitor their progress, while explorers can use Kinect and gestures to fly freely through the universe, select data and point out observations to the outside audience.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-6318\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-6318\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-6317\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tMonoFusion: Scanning Objects in Real Time with a Single Web Camera\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-6317\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-6318\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>This project offers a method for creating 3-D scans of arbitrary environments in real time, utilizing only a single RGB camera as the input sensor. The camera could be one already available in a tablet or a phone, or it could be a cheap web camera. No additional input hardware is required. This removes the need for power-intensive active sensors that do not work robustly in natural outdoor lighting. 
In seconds, a user can generate a compelling 3-D model, which can be used in augmented reality, for 3-D printing, or in computer-aided design.<\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/monofusion-scanning-objects-in-real-time-with-a-single-web-camera\/\">Video<\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-6320\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-6320\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-6319\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tNaiad on Azure: Rich, Interactive Cloud Analytics\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-6319\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-6320\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Naiad is a .NET-based platform for high-throughput, low-latency data analysis. It is suitable for traditional \u201cbig data\u201d processing all the way through to stream processing on real-time data, complex graph analyses, and machine-learning tasks. Using Naiad on Azure enables an analyst to develop an application locally before deploying it seamlessly to the cloud. Several tools have been built atop Naiad, to use Azure to provide interactive analyses over massive data sets. 
Moreover, Naiad is built with extensibility in mind, providing data analysts with simple interfaces and enabling them to integrate custom business logic when required.<\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/project\/naiad\/\">Project page<\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-6322\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-6322\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-6321\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tPlanetary Predictions\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-6321\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-6322\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Societies and governments around the world want to know how the biosphere is likely to change and what we can do to avoid or mitigate it. Current models used to provide that information, though, are akin to miniature computer games: black boxes that convey almost no sense of confidence in their reliability. This problem can be addressed objectively by combining machine learning with process-based modeling to enable assessment and comparison of alternative model formulations to identify key sources of uncertainty and, ultimately, to enable probabilistic predictions of the likely consequences of climate and environmental change. 
To tackle problems of this scale, researchers have designed a solution in which a new model-building platform, Distribution Modeller, together with F#, can be used to build and share data-constrained, process-based models and to deliver their probabilistic predictions on demand through Azure.<\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/planetary-predictions\/\">Video<\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-6324\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-6324\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-6323\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tPrinting Interactivity\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-6323\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-6324\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Presenting two new technologies that empower consumers to design and produce working physical devices customized to their particular needs:<\/p>\n<ul>\n<li>A highly interactive, touch-first 3-D printing app makes 3-D modeling accessible to a broad range of consumers. 
It uses the 3-D printer support in Windows 8.1 and adds an intuitive, block-based editor.<\/li>\n<li>A technique enabling users to create working interactive devices cheaply and easily, based on recent advances in conductive inks coupled with a new type of modular electronic components.<\/li>\n<\/ul>\n<p>When combined, these two technologies give a glimpse of a future world where it is inexpensive and easy for users to build devices with customized form and function.<\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/printing-interactivity-customizable-3-d-printing\/\">Video<\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-6326\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-6326\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-6325\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tSemantic Browsing of Interesting Images on the Web\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-6325\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-6326\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>This research project enables users to browse the most interesting images on the web using semantically meaningful connections-as opposed to image similarity. The project determines the most interesting images via a data-mining algorithm that locates interesting sentences about the images, detects concepts in these interesting sentences, and uses them to build a graph over the images. 
The links in this graph enable a user to navigate the images intuitively and compellingly by clicking on concepts in the sentences.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-6328\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-6328\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-6327\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tShape-Writing Enhancements for Windows Phone\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-6327\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-6328\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Typing on a touchscreen device without having to look at the screen could be extremely useful in cases when it\u2019s either dangerous, interruptive or impolite to text. Microsoft researchers developed a novel UX and robust decoder to shape write in groups of characters. 
To demonstrate the feasibility of this approach, the research team broke the Guinness World record for touchscreen and blind texting, and worked with the Windows Phone team to include the WordFlow feature in Windows Phone 8.1.<\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/shape-writing-enhancements-for-windows-phone\/\">Video<\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-6330\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-6330\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-6329\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tSurroundWeb: Spreading the Web to Multiple Screens\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-6329\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-6330\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Projectors, tablets, and other devices, combined with depth cameras, enable applications to span multiple \u201cscreens.\u201d This work enables webpages to be experienced outside of your PC monitors to take advantage of all your devices in concert. For example, a karaoke webpage could use your phone, tablet, big-screen television, and projectors to provide an awesome entertainment experience. Webpages also are enabled to \u201csee\u201d objects in the room and respond to them-all while preserving privacy from the owner of the webpage. 
Such a page could show you an ad near a Red Bull can, but nobody would be able to find out how much Red Bull you drink.\u00a0 <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/surroundweb-least-privilege-for-immersive-web-rooms\/\">Learn more >><\/a><\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/surroundweb-spreading-the-web-to-multiple-screens\/\">Video<\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-6332\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-6332\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-6331\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tTempe: Quick Answers from Large Data\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-6331\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-6332\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Tempe is an interactive system for exploring large data sets. It accelerates machine learning by facilitating quick, iterative feature engineering and data understanding. 
Tempe is a combination of three technologies:<\/p>\n<ul>\n<li>Trill: a high-speed, temporal, progressive-relational stream-processing engine 100 times faster than StreamInsight.<\/li>\n<li>WINQ: a layer that emulates LINQ but provides progressive queries-providing \u201cbest effort\u201d partial answers.<\/li>\n<li>Stat: an interactive, C# integrated development environment that enables users to visualize progressive answers.<\/li>\n<\/ul>\n<p>The combination of these technologies enables users to try and discard queries quickly, enabling much faster exploration of large data sets.<\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/quick-answers-from-large-data\/\">Video<\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-6334\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-6334\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-6333\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tViiBoard: Vision-Enhanced Immersive Interaction\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-6333\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-6334\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>ViiBoard is a system for remote collaboration through a digital whiteboard (PPI) that gives participants an immersive, 3-D experience with enhanced touch capability. ViiBoard emulates writing side by side on a physical whiteboard or, alternatively, on a mirror, through 3-D processing of depth images and life-sized rendering. 
Additional vision techniques, such as hand-gesture recognition, are integrated to understand users\u2019 intentions before they touch the board, simplifying the interaction with a PPI, especially for content editing and presenting. Compared with standard video conferencing, the ViiBoard provides participants with a better ability to estimate their remote partners\u2019 eye-gaze direction, gesture direction, and intention. These capabilities translate into a heightened sense of being together and a more realistic experience.<\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/viiboard-vision-enhanced-immersive-interaction\/\">Video #1<\/a>\u00a0| <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/immersive-collaborative-data-visualization\/\">Video #2<\/a>\u00a0| <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/project\/viiboard-vision-enhanced-immersive-interaction-with-touch-board\/\">Project page<\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-6336\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-6336\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-6335\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tWaveFour: Social Analytics Platform for Businesses\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-6335\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-6336\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Getting insights into what customers are saying about a product-and the people who are saying 
it-is important for companies big and small. This project presents an analytics platform atop a real-time social network that can mine user behavior and automatically identify segments where unusual patterns emerge-and provide a possible explanation for the pattern.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-6338\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-6338\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-6337\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tWhen Urban Air Quality Meets Big Data\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-6337\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-6338\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Urban air quality-the concentration of PM2.5-is of great importance in protecting human health. While there are limited air-quality-monitor-stations in a city, air quality varies by location significantly and is influenced by multiple complex factors, such as traffic flow and land use. Consequently, people cannot know the air quality of a location without a monitoring station. This project infers real-time, fine-grained air-quality information throughout a city, based on air-quality data reported by existing monitor stations and a variety of data sources observed in the city, such as meteorology, traffic flow, human mobility, the structure of road networks, and points of interest. 
This fine-grained air-quality information could help people figure out when and where to go jogging, or when they should shut the window or put on a face mask in locations where air quality is already a daily issue. This could lead to long-term solutions in predicting forthcoming air quality and identifying the root cause of air pollution.<\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/when-urban-air-quality-meets-big-data\/\">Video<\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Cortana, the world&#8217;s first truly personal digital assistant, is available soon on Windows Phone 8.1. Powered by Bing, Cortana is driven by state-of-the-art algorithms using natural language, machine learning, and contextual signals that benefit from advances incubated at Microsoft Research. This approach uses the massive clickstream-feedback loop of web search, along with additional semantics and data. 
This approach enables Cortana to grow in breadth of domains that cover the web: from the more common \u201chead\u201d queries and conversations to the less frequent &#8220;tail&#8221;-all at a personal level, empowering Cortana to provide robust personal assistance that becomes even more advanced over time.<\/p>\n","protected":false},"featured_media":0,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr_startdate":"2014-04-17","msr_enddate":"2014-04-17","msr_location":"Microsoft Silicon Valley Campus","msr_expirationdate":"","msr_event_recording_link":"","msr_event_link":"","msr_event_link_redirect":false,"msr_event_time":"","msr_hide_region":false,"msr_private_event":true,"msr_hide_image_in_river":0,"footnotes":""},"research-area":[13556,13562,13563,13552,13554],"msr-region":[197900],"msr-event-type":[],"msr-video-type":[],"msr-locale":[268875],"msr-program-audience":[],"msr-post-option":[],"msr-impact-theme":[],"class_list":["post-199957","msr-event","type-msr-event","status-publish","hentry","msr-research-area-artificial-intelligence","msr-research-area-computer-vision","msr-research-area-data-platform-analytics","msr-research-area-hardware-devices","msr-research-area-human-computer-interaction","msr-region-north-america","msr-locale-en_us"],"msr_about":"<!-- wp:msr\/event-details {\"title\":\"Silicon Valley TechFair 2014\",\"backgroundColor\":\"grey\"} \/-->\n\n<!-- wp:msr\/content-tabs --><!-- wp:msr\/content-tab {\"title\":\"Summary\"} --><!-- wp:freeform --><p><strong>Venue:<\/strong> Microsoft Silicon Valley<br \/>\n1065 La Avenida, Building 1<br \/>\nMountain View, CA 94043<\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/get-to-know-microsoft-research\/\">Get to know Microsoft Research<\/a><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<p><img 
loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-203635\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/02\/en-us-events-2014-silicon-valley-techfair-sv-techfair-2014-banner.jpg\" alt=\"sv-techfair-2014-banner.jpg\" width=\"829\" height=\"158\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/02\/en-us-events-2014-silicon-valley-techfair-sv-techfair-2014-banner.jpg 829w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/02\/en-us-events-2014-silicon-valley-techfair-sv-techfair-2014-banner-300x57.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/02\/en-us-events-2014-silicon-valley-techfair-sv-techfair-2014-banner-768x146.jpg 768w\" sizes=\"auto, (max-width: 829px) 100vw, 829px\" \/><\/p>\n<p>From pushing the boundaries of computing beyond the screen, to helping make sense of large scale data sets for scientific discoveries, the development of new ideas and technologies is deeply woven into our DNA.<\/p>\n<p>At the Silicon Valley TechFair 2014 we will share work that spans the use of big data to build local models which enable hyperlocal neighborhood interactions to scientific models intended to predict how changes in the environment will impact our world. 
Learn more about the research underpinnings behind Cortana, and explore how we see new user interfaces expanding beyond the screen.<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<!-- \/wp:freeform --><!-- \/wp:msr\/content-tab --><!-- wp:msr\/content-tab {\"title\":\"Researchers\"} --><!-- wp:freeform --><h2>Researchers presenting work<\/h2>\n<p>Here are a few of the researchers who will be presenting their latest research:<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-6296\"}' data-wp-init=\"callbacks.init\">\n\t\t<div 
class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-6296\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-6295\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tLarry Heck\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-6295\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-6296\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Larry Heck is a Distinguished Engineer at Microsoft Research. His research area is natural conversational interaction, focusing on open-domain NLP and dialog, machine learning, multimodal NUI, and inference\/reasoning under uncertainty.<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-6298\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-6298\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-6297\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tCurtis Wong\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-6297\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-6298\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Curtis Wong, Principal Researcher at Microsoft Research, is responsible for basic and applied research in media and interaction. 
He has been granted more than 45 patents in areas such as data visualization, user interaction, interactive television, media browsing, and automated cinematography, and is the primary inventor of the WorldWide Telescope. Recently, Curtis has led the effort to enable interactive spatiotemporal data visualization as a broad capability for everyone to gain insight into the growing tide of data that is being generated from devices and services.<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-6300\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-6300\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-6299\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tIvan Tashev\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-6299\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-6300\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Ivan Tashev is a Principal Software Architect at Microsoft Research. His research focuses on multichannel audio signal processing, algorithms for arrays of transducers, processing of signals for enhancement, de-noising, de-reverberation, and statistical processing of audio, biological, and radio signals. 
Ivan was responsible for the audio pipeline architecture, and DSP algorithms in Xbox Kinect, and Kinect for Windows, as well as audio enhancements to Xbox One.<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-6302\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-6302\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-6301\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tKati London\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-6301\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-6302\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>For her work in both real world games and the early Internet of Things, Kati was named one of the &#8220;Top 35 Innovators Under 35&#8221; by MIT&#8217;s Technology Review Magazine (2010), &#8220;Top 100 Most Creative People in Business&#8221; by Fast Company Magazine (2011), and awarded the World Technology Network award in Entertainment (2011). She teaches the graduate course &#8220;Persuasive Technology: Designing the Human&#8221; at NYU&#8217;s ITP, and frequently speaks on online and offline engagement, economies, games, and sensors.<\/p>\n<p>Her work has been covered by Businessweek, the New York Times, Wired, National Geographic, and Glamour Magazine, among others. She has worked with clients including the John S. and James L. 
Knight Foundation, Foursquare, the United Kingdom&#8217;s Department for Transport, the BBC, Channel 4, the Carnegie Institute, Disney Imagineering, Nike, Discovery Channel, CBS, MTV, and the Peter G. Peterson Foundation. Her work is represented in the permanent collection of MOMA and has been exhibited at the Design Museum of London and Museum of Science &amp; Industry.<\/p>\n<p>Kati is currently a Senior Researcher at Microsoft Research, FUSE (Future User Social Experiences) [Microsoft Research] \/ [FUSE]. Previously, she was Director of Product for Zynga New York and Vice President and Senior Producer at Area\/Code (acquired by Zynga). In 2012 she became Innovator-in-Residence at USC&#8217;s Annenberg School, where she led workshops in Design Patterns for Autonomous Objects.<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<!-- \/wp:freeform --><!-- \/wp:msr\/content-tab --><!-- wp:msr\/content-tab {\"title\":\"Videos\"} --><!-- wp:freeform --><h2>Projects<\/h2>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span 
aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-6304\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-6304\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-6303\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\t3-D Audio for Telepresence and Virtual Reality\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-6303\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-6304\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>This research project features two technologies:<\/p>\n<ul>\n<li>Rendered personalized head-related transfer functions (HRTFs), synthesized using anthropometric data tailored to an individual user\u2019s audio input.<\/li>\n<li>Creation of an immersive audio experience using headphones and person\/head tracking through rendered 3-D audio.<\/li>\n<\/ul>\n<p>The project generates personalized HRTFs by scanning a person using a Kinect for Windows device, then using a headset to identify a predefined area. 
It enables the user to interact with a virtual set of physical objects-such as an AM radio, a manikin, a phone, or a television-that start to play music, speak, and ring. The user can move freely, rotate her head, and approach each individual sound source within a virtual experience.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-6306\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-6306\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-6305\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tCortana&#039;s Research-Based Foundation\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-6305\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-6306\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Cortana, the world&#8217;s first truly personal digital assistant, is available soon on Windows Phone 8.1. Powered by Bing, Cortana is driven by state-of-the-art algorithms using natural language, machine learning, and contextual signals that benefit from advances incubated at Microsoft Research. This approach uses the massive clickstream-feedback loop of web search, along with additional semantics and data. 
This approach enables Cortana to grow in breadth of domains that cover the web: from the more common \u201chead\u201d queries and conversations to the less frequent &#8220;tail&#8221;-all at a personal level, empowering Cortana to provide robust personal assistance that becomes even more advanced over time.<\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/beyond-voice-recognition-windows-phones-cortana-anticipates-your-needs\/\">Video<\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-6308\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-6308\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-6307\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tFilm, Identify, Track, Tag, Sense, Fly\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-6307\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-6308\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>These Technology for Nature projects aim to expand dramatically the amount and kind of data we can gather from the natural world. Zootracer uses vision and machine learning to track arbitrary objects from video. Designed to assist environmental scientists, Zootracer, a tool for general use, is complemented by Mataki, an unprecedentedly cheap, light (seven grams), and reprogrammable GPS tracking and sensing device. Uniquely, Mataki has peer-to-peer data sharing and, hence, data retrieval can be achieved on entire collections of device-monitored animals. 
The research also uses an unmanned aerial drone with an onboard camera to follow coordinates broadcast by a Mataki device attached to an animal.<\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/film-identify-track-tag-sense-fly\/\">Video<\/a>\u00a0| <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/project\/zootracer\/\">Project page<\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-6310\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-6310\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-6309\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tFloating Display\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-6309\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-6310\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Many people are working on near-field user-interface devices that detect hover, gesture, and pose. But there is nothing in the space in front of a device to show the user what to do. 
This project introduces a floating display that hovers over the device, providing visual cues for gestural interactions.<\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/floating-display-visual-cues-for-gestural-interactions\/\">Video<\/a>\u00a0| <a href=\"http:\/\/www.microsoft.com\/appliedsciences\/content\/projects\/FloatingDisplay.aspx\">Project page<\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-6312\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-6312\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-6311\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tHereHere NYC\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-6311\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-6312\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>HereHere NYC is a research project that enables neighborhoods to generate opinions based on public data. The project summarizes how your neighborhood, or other New York City neighborhoods of interest, are doing via a daily email digest, neighborhood-specific Twitter feeds, and status updates on a map. 
The goals are to:<\/p>\n<ul>\n<li>Create compelling stories with data to engage larger communities.<\/li>\n<li>Invent light daily rituals for connecting to the hyperlocal.<\/li>\n<li>Use characterization as a tool to drive data engagement.<\/li>\n<\/ul>\n<p>HereHere uses Project Sentient Data, an early-stage project to explore how interactions can be improved by understanding ecosystems of data in terms of characterization, personalities, and relationships. Sentient Data provides a server and a representational-state-transfer API that enables developers to assign personalities and translate data sets into their relative emotion states.<\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/herehere-nyc-provides-insight-into-how-your-neighborhood-is-doing\/\">Video<\/a> | <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/herehere.co\/about\">Project page<\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-6314\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-6314\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-6313\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tHolograph\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-6313\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-6314\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Holograph is an interactive, 3-D data-visualization research platform that can render static and dynamic data above or 
below the plane of the display using a variety of 3-D stereographic techniques. The platform enables rapid exploration, selection, and manipulation of complex, multidimensional data to create and refine natural user-interaction techniques and technologies, with the goal of empowering everyone to understand the growing tide of large, complex data sets.<\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/holograph-3-d-spatiotemporal-interactive-data-visualization\/\">Video<\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-6316\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-6316\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-6315\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tImmersive, Collaborative Data Visualization\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-6315\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-6316\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>This project uses head-mounted displays, Kinect skeletal tracking, a custom hardware controller, and a large display for the public to watch vicariously. Individuals enter a virtual-reality environment created using the WorldWide Telescope and navigate through a virtual 3-D universe in orbit with the International Space Station-or inside a brain cell. 
The system delivers the capability for an external audience to watch avatars of the individuals exploring the environment and monitor their progress, while explorers can use Kinect and gestures to fly freely through the universe, select data and point out observations to the outside audience.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-6318\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-6318\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-6317\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tMonoFusion: Scanning Objects in Real Time with a Single Web Camera\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-6317\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-6318\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>This project offers a method for creating 3-D scans of arbitrary environments in real time, utilizing only a single RGB camera as the input sensor. The camera could be one already available in a tablet or a phone, or it could be a cheap web camera. No additional input hardware is required. This removes the need for power-intensive active sensors that do not work robustly in natural outdoor lighting. 
In seconds, a user can generate a compelling 3-D model, which can be used in augmented reality, for 3-D printing, or in computer-aided design.<\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/monofusion-scanning-objects-in-real-time-with-a-single-web-camera\/\">Video<\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-6320\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-6320\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-6319\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tNaiad on Azure: Rich, Interactive Cloud Analytics\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-6319\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-6320\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Naiad is a .NET-based platform for high-throughput, low-latency data analysis. It is suitable for traditional \u201cbig data\u201d processing all the way through to stream processing on real-time data, complex graph analyses, and machine-learning tasks. Using Naiad on Azure enables an analyst to develop an application locally before deploying it seamlessly to the cloud. Several tools have been built atop Naiad, to use Azure to provide interactive analyses over massive data sets. 
Moreover, Naiad is built with extensibility in mind, providing data analysts with simple interfaces and enabling them to integrate custom business logic when required.<\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/project\/naiad\/\">Project page<\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-6322\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-6322\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-6321\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tPlanetary Predictions\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-6321\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-6322\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Societies and governments around the world want to know how the biosphere is likely to change and what we can do to avoid or mitigate against it. Current models used to provide that information, though, are akin to miniature computer games: black boxes that convey almost no sense of confidence in their reliability. This problem can be addressed objectively by combining machine learning with process-based modeling to enable assessment and comparison of alternative model formulations to identify key sources of uncertainty and, ultimately, to enable probabilistic predictions of the likely consequences of climate and environmental change. 
To tackle problems of this scale, researchers have designed a solution in which a new model-building platform, Distribution Modeller, together with F#, is used to build and share data-constrained, process-based models and deliver their probabilistic predictions on demand through Azure.<\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/planetary-predictions\/\">Video<\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-6324\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-6324\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-6323\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tPrinting Interactivity\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-6323\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-6324\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Presenting two new technologies that empower consumers to design and produce working physical devices customized to their particular needs:<\/p>\n<ul>\n<li>A highly interactive, touch-first 3-D printing app makes 3-D modeling accessible to a broad range of consumers. 
It uses the 3-D printer support in Windows 8.1 and adds an intuitive, block-based editor.<\/li>\n<li>A technique enabling users to create working interactive devices cheaply and easily, based on recent advances in conductive inks coupled with a new type of modular electronic components.<\/li>\n<\/ul>\n<p>When combined, these two technologies give a glimpse of a future world where it is inexpensive and easy for users to build devices with customized form and function.<\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/printing-interactivity-customizable-3-d-printing\/\">Video<\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-6326\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-6326\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-6325\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tSemantic Browsing of Interesting Images on the Web\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-6325\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-6326\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>This research project enables users to browse the most interesting images on the web using semantically meaningful connections-as opposed to image similarity. The project determines the most interesting images via a data-mining algorithm that locates interesting sentences about the images, detects concepts in these interesting sentences, and uses them to build a graph over the images. 
The links in this graph enable a user to navigate the images intuitively and compellingly by clicking on concepts in the sentences.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-6328\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-6328\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-6327\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tShape-Writing Enhancements for Windows Phone\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-6327\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-6328\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Typing on a touchscreen device without having to look at the screen could be extremely useful in cases when it\u2019s either dangerous, interruptive or impolite to text. Microsoft researchers developed a novel UX and robust decoder to shape write in groups of characters. 
To demonstrate the feasibility of this approach, the research team broke the Guinness World Record for touchscreen and blind texting, and worked with the Windows Phone team to include the WordFlow feature in Windows Phone 8.1.<\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/shape-writing-enhancements-for-windows-phone\/\">Video<\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-6330\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-6330\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-6329\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tSurroundWeb: Spreading the Web to Multiple Screens\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-6329\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-6330\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Projectors, tablets, and other devices, combined with depth cameras, enable applications to span multiple \u201cscreens.\u201d This work enables webpages to be experienced outside of your PC monitors to take advantage of all your devices in concert. For example, a karaoke webpage could use your phone, tablet, big-screen television, and projectors to provide an awesome entertainment experience. Webpages are also enabled to \u201csee\u201d objects in the room and respond to them-all while preserving privacy from the owner of the webpage. 
Such a page could show you an ad near a Red Bull can, but nobody would be able to find out how much Red Bull you drink.\u00a0 <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/surroundweb-least-privilege-for-immersive-web-rooms\/\">Learn more &gt;&gt;<\/a><\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/surroundweb-spreading-the-web-to-multiple-screens\/\">Video<\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-6332\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-6332\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-6331\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tTempe: Quick Answers from Large Data\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-6331\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-6332\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Tempe is an interactive system for exploring large data sets. It accelerates machine learning by facilitating quick, iterative feature engineering and data understanding. 
Tempe is a combination of three technologies:<\/p>\n<ul>\n<li>Trill: a high-speed, temporal, progressive-relational stream-processing engine 100 times faster than StreamInsight.<\/li>\n<li>WINQ: a layer that emulates LINQ but provides progressive queries-providing \u201cbest effort\u201d partial answers.<\/li>\n<li>Stat: an interactive, C# integrated development environment that enables users to visualize progressive answers.<\/li>\n<\/ul>\n<p>The combination of these technologies enables users to try and discard queries quickly, enabling much faster exploration of large data sets.<\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/quick-answers-from-large-data\/\">Video<\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-6334\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-6334\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-6333\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tViiBoard: Vision-Enhanced Immersive Interaction\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-6333\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-6334\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>ViiBoard is a system for remote collaboration through a digital whiteboard (PPI) that gives participants an immersive, 3-D experience with enhanced touch capability. ViiBoard emulates writing side by side on a physical whiteboard or, alternatively, on a mirror, through 3-D processing of depth images and life-sized rendering. 
Additional vision techniques, such as hand-gesture recognition, are integrated to understand users\u2019 intentions before they touch the board, simplifying the interaction with a PPI, especially for content editing and presenting. Compared with standard video conferencing, the ViiBoard provides participants with a better ability to estimate their remote partners\u2019 eye-gaze direction, gesture direction, and intention. These capabilities translate into a heightened sense of being together and a more realistic experience.<\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/viiboard-vision-enhanced-immersive-interaction\/\">Video #1<\/a>\u00a0| <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/immersive-collaborative-data-visualization\/\">Video #2<\/a>\u00a0| <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/project\/viiboard-vision-enhanced-immersive-interaction-with-touch-board\/\">Project page<\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-6336\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-6336\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-6335\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tWaveFour: Social Analytics Platform for Businesses\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-6335\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-6336\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Getting insights into what customers are saying about a product-and the people who are saying 
it-is important for companies big and small. This project presents an analytics platform atop a real-time social network that can mine user behavior and automatically identify segments where unusual patterns emerge-and provide a possible explanation for the pattern.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-6338\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-6338\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-6337\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tWhen Urban Air Quality Meets Big Data\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-6337\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-6338\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Urban air quality-the concentration of PM2.5-is of great importance in protecting human health. While a city has only a limited number of air-quality monitoring stations, air quality varies significantly by location and is influenced by multiple complex factors, such as traffic flow and land use. Consequently, people cannot know the air quality of a location without a monitoring station. This project infers real-time, fine-grained air-quality information throughout a city, based on air-quality data reported by existing monitor stations and a variety of data sources observed in the city, such as meteorology, traffic flow, human mobility, the structure of road networks, and points of interest. 
This fine-grained air-quality information could help people figure out when and where to go jogging-or when they should shut the window or put on a face mask in locations where air quality is already a daily issue. This could lead to long-term solutions in predicting forthcoming air quality and identifying the root cause of air pollution.<\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/when-urban-air-quality-meets-big-data\/\">Video<\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<!-- \/wp:freeform --><!-- \/wp:msr\/content-tab --><!-- \/wp:msr\/content-tabs -->","tab-content":[{"id":0,"name":"Summary","content":"<img class=\"alignnone size-full wp-image-203635\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/02\/en-us-events-2014-silicon-valley-techfair-sv-techfair-2014-banner.jpg\" alt=\"sv-techfair-2014-banner.jpg\" width=\"829\" height=\"158\" \/>\r\n\r\nFrom pushing the boundaries of computing beyond the screen, to helping make sense of large scale data sets for scientific discoveries, the development of new ideas and technologies is deeply woven into our DNA.\r\n\r\nAt the Silicon Valley TechFair 2014 we will share work that spans the use of big data to build local models which enable hyperlocal neighborhood interactions to scientific models intended to predict how changes in the environment will impact our world. 
Learn more about the research underpinnings behind Cortana, and explore how we see new user interfaces expanding beyond the screen."},{"id":1,"name":"Researchers","content":"<h2>Researchers presenting work<\/h2>\r\nHere are a few of the researchers who will be presenting their latest research:\r\n\r\n[accordion]\r\n\r\n[panel header=\"Larry Heck\"]\r\nLarry Heck is a Distinguished Engineer at Microsoft Research. His research area is natural conversational interaction, focusing on open-domain NLP and dialog, machine learning, multimodal NUI, and inference\/reasoning under uncertainty.\r\n[\/panel]\r\n\r\n[panel header=\"Curtis Wong\"]\r\nCurtis Wong, Principal Researcher at Microsoft Research, is responsible for basic and applied research in media and interaction. He has been granted more than 45 patents in areas such as data visualization, user interaction, interactive television, media browsing and automated cinematography, and is the primary inventor of the WorldWide Telescope. Recently, Curtis has led the effort to enable interactive spatial temporal data visualization as a broad capability for everyone to gain insight into the growing tide of data that is being generated from devices and services.\r\n[\/panel]\r\n\r\n[panel header=\"Ivan Tashev\"]\r\nIvan Tashev is a Principal Software Architect at Microsoft Research. His research focuses on multichannel audio signal processing, algorithms for arrays of transducers, processing of signals for enhancement, de-noising, de-reverberation, and statistical processing of audio, biological, and radio signals. 
Ivan was responsible for the audio pipeline architecture and DSP algorithms in Xbox Kinect and Kinect for Windows, as well as audio enhancements to Xbox One.\r\n[\/panel]\r\n\r\n[panel header=\"Kati London\"]\r\nFor her work in both real world games and the early Internet of Things, Kati was named one of the \"Top 35 Innovators Under 35\" by MIT's Technology Review Magazine (2010), \"Top 100 Most Creative People in Business\" by Fast Company Magazine (2011), and awarded the World Technology Network award in Entertainment (2011). She teaches the graduate course \"Persuasive Technology: Designing the Human\" at NYU's ITP, and frequently speaks on online and offline engagement, economies, games, and sensors.\r\n\r\nHer work has been covered by Businessweek, the New York Times, Wired, National Geographic, and Glamour Magazine, among others. She has worked with clients including the John S. and James L. Knight Foundation, Foursquare, the United Kingdom's Department for Transport, the BBC, Channel 4, the Carnegie Institute, Disney Imagineering, Nike, Discovery Channel, CBS, MTV, and the Peter G. Peterson Foundation. Her work is represented in the permanent collection of MOMA and has been exhibited at the Design Museum of London and Museum of Science &amp; Industry.\r\n\r\nKati is currently a Senior Researcher at Microsoft Research, FUSE (Future User Social Experiences) [Microsoft Research] \/ [FUSE]. Previously, she was Director of Product for Zynga New York and Vice President and Senior Producer at Area\/Code (acquired by Zynga). 
In 2012 she became Innovator-in-Residence at USC's Annenberg School, where she led workshops in Design Patterns for Autonomous Objects.\r\n[\/panel]\r\n\r\n[\/accordion]"},{"id":2,"name":"Videos","content":"<h2>Projects<\/h2>\r\n[accordion]\r\n\r\n[panel header=\"3-D Audio for Telepresence and Virtual Reality\"]\r\n\r\nThis research project features two technologies:\r\n<ul>\r\n \t<li>Rendered personalized head-related transfer functions (HRTFs), synthesized using anthropometric data tailored to an individual user\u2019s audio input.<\/li>\r\n \t<li>Creation of an immersive audio experience using headphones and person\/head tracking through rendered 3-D audio.<\/li>\r\n<\/ul>\r\nThe project generates personalized HRTFs by scanning a person using a Kinect for Windows device, then using a headset to identify a predefined area. It enables the user to interact with a virtual set of physical objects-such as an AM radio, a manikin, a phone, or a television-that start to play music, speak, and ring. The user can move freely, rotate her head, and approach each individual sound source within a virtual experience.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Cortana's Research-Based Foundation\"]\r\n\r\nCortana, the world's first truly personal digital assistant, is available soon on Windows Phone 8.1. Powered by Bing, Cortana is driven by state-of-the-art algorithms using natural language, machine learning, and contextual signals that benefit from advances incubated at Microsoft Research. This approach uses the massive clickstream-feedback loop of web search, along with additional semantics and data. 
This approach enables Cortana to grow in breadth of domains that cover the web: from the more common \u201chead\u201d queries and conversations to the less frequent \"tail\"-all at a personal level, empowering Cortana to provide robust personal assistance that becomes even more advanced over time.\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/beyond-voice-recognition-windows-phones-cortana-anticipates-your-needs\/\">Video<\/a>\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Film, Identify, Track, Tag, Sense, Fly\"]\r\n\r\nThese Technology for Nature projects aim to expand dramatically the amount and kind of data we can gather from the natural world. Zootracer uses vision and machine learning to track arbitrary objects from video. Designed to assist environmental scientists, Zootracer, a tool for general use, is complemented by Mataki, an unprecedentedly cheap, light (seven grams), and reprogrammable GPS tracking and sensing device. Uniquely, Mataki has peer-to-peer data sharing and, hence, data retrieval that can be achieved on entire collections of device-monitored animals. The research also uses an unmanned aerial drone with an onboard camera to follow coordinates broadcast by a Mataki device attached to an animal.\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/film-identify-track-tag-sense-fly\/\">Video<\/a>\u00a0| <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/project\/zootracer\/\">Project page<\/a>\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Floating Display\"]\r\n\r\nMany people are working on near-field user-interface devices that detect hover, gesture, and pose. But there is nothing in the space in front of a device to show the user what to do. 
This project introduces a floating display that hovers over the device, providing visual cues for gestural interactions.\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/floating-display-visual-cues-for-gestural-interactions\/\">Video<\/a>\u00a0| <a href=\"http:\/\/www.microsoft.com\/appliedsciences\/content\/projects\/FloatingDisplay.aspx\">Project page<\/a>\r\n\r\n[\/panel]\r\n\r\n[panel header=\"HereHere NYC\"]\r\n\r\nHereHere NYC is a research project that enables neighborhoods to generate opinions based on public data. The project summarizes how your neighborhood, or other New York City neighborhoods of interest, are doing via a daily email digest, neighborhood-specific Twitter feeds, and status updates on a map. The goals are to:\r\n<ul>\r\n \t<li>Create compelling stories with data to engage larger communities.<\/li>\r\n \t<li>Invent light daily rituals for connecting to the hyperlocal.<\/li>\r\n \t<li>Use characterization as a tool to drive data engagement.<\/li>\r\n<\/ul>\r\nHereHere uses Project Sentient Data, an early-stage project to explore how interactions can be improved by understanding ecosystems of data in terms of characterization, personalities, and relationships. Sentient Data provides a server and a representational-state-transfer API that enables developers to assign personalities and translate data sets into their relative emotion states.\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/herehere-nyc-provides-insight-into-how-your-neighborhood-is-doing\/\">Video<\/a> | <a href=\"http:\/\/herehere.co\/about\">Project page<\/a>\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Holograph\"]\r\n\r\nHolograph is an interactive, 3-D data-visualization research platform that can render static and dynamic data above or below the plane of the display using a variety of 3-D stereographic techniques. 
The platform enables rapid exploration, selection, and manipulation of complex, multidimensional data to create and refine natural user-interaction techniques and technologies, with the goal of empowering everyone to understand the growing tide of large, complex data sets.\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/holograph-3-d-spatiotemporal-interactive-data-visualization\/\">Video<\/a>\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Immersive, Collaborative Data Visualization\"]\r\n\r\nThis project uses head-mounted displays, Kinect skeletal tracking, a custom hardware controller, and a large display for the public to watch vicariously. Individuals enter a virtual-reality environment created using the WorldWide Telescope and navigate through a virtual 3-D universe in orbit alongside the International Space Station, or inside a brain cell. The system enables an external audience to watch avatars of the explorers and monitor their progress, while the explorers use Kinect gestures to fly freely through the universe, select data, and point out observations to the outside audience.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"MonoFusion: Scanning Objects in Real Time with a Single Web Camera\"]\r\n\r\nThis project offers a method for creating 3-D scans of arbitrary environments in real time, utilizing only a single RGB camera as the input sensor. The camera could be one already available in a tablet or a phone, or it could be a cheap web camera. No additional input hardware is required. This removes the need for power-intensive active sensors that do not work robustly in natural outdoor lighting.
In seconds, a user can generate a compelling 3-D model, which can be used in augmented reality, for 3-D printing, or in computer-aided design.\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/monofusion-scanning-objects-in-real-time-with-a-single-web-camera\/\">Video<\/a>\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Naiad on Azure: Rich, Interactive Cloud Analytics\"]\r\n\r\nNaiad is a .NET-based platform for high-throughput, low-latency data analysis. It is suitable for traditional \u201cbig data\u201d processing all the way through to stream processing on real-time data, complex graph analyses, and machine-learning tasks. Using Naiad on Azure enables an analyst to develop an application locally before deploying it seamlessly to the cloud. Several tools have been built atop Naiad that use Azure to provide interactive analyses over massive data sets. Moreover, Naiad is built with extensibility in mind, providing data analysts with simple interfaces and enabling them to integrate custom business logic when required.\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/project\/naiad\/\">Project page<\/a>\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Planetary Predictions\"]\r\n\r\nSocieties and governments around the world want to know how the biosphere is likely to change and what we can do to avoid or mitigate it. Current models used to provide that information, though, are akin to miniature computer games: black boxes that convey almost no sense of confidence in their reliability. This problem can be addressed objectively by combining machine learning with process-based modeling to enable assessment and comparison of alternative model formulations, to identify key sources of uncertainty, and, ultimately, to enable probabilistic predictions of the likely consequences of climate and environmental change.
To tackle problems of this scale, researchers have designed a solution in which a new model-building platform, Distribution Modeller, together with F#, can be used to build and share data-constrained, process-based models and to deliver their probabilistic predictions on demand through Azure.\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/planetary-predictions\/\">Video<\/a>\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Printing Interactivity\"]\r\n\r\nThis project presents two new technologies that empower consumers to design and produce working physical devices customized to their particular needs:\r\n<ul>\r\n \t<li>A highly interactive, touch-first 3-D printing app makes 3-D modeling accessible to a broad range of consumers. It uses the 3-D printer support in Windows 8.1 and adds an intuitive, block-based editor.<\/li>\r\n \t<li>A technique enabling users to create working interactive devices cheaply and easily, based on recent advances in conductive inks coupled with a new type of modular electronic components.<\/li>\r\n<\/ul>\r\nWhen combined, these two technologies give a glimpse of a future world where it is inexpensive and easy for users to build devices with customized form and function.\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/printing-interactivity-customizable-3-d-printing\/\">Video<\/a>\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Semantic Browsing of Interesting Images on the Web\"]\r\n\r\nThis research project enables users to browse the most interesting images on the web using semantically meaningful connections rather than image similarity. The project determines the most interesting images via a data-mining algorithm that locates interesting sentences about the images, detects concepts in these interesting sentences, and uses them to build a graph over the images.
The links in this graph enable a user to navigate the images intuitively and compellingly by clicking on concepts in the sentences.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Shape-Writing Enhancements for Windows Phone\"]\r\n\r\nTyping on a touchscreen device without having to look at the screen could be extremely useful when it\u2019s dangerous, disruptive, or impolite to text. Microsoft researchers developed a novel UX and a robust decoder for shape writing groups of characters. To demonstrate the feasibility of this approach, the research team broke the Guinness World Record for touchscreen and blind texting, and worked with the Windows Phone team to include the WordFlow feature in Windows Phone 8.1.\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/shape-writing-enhancements-for-windows-phone\/\">Video<\/a>\r\n\r\n[\/panel]\r\n\r\n[panel header=\"SurroundWeb: Spreading the Web to Multiple Screens\"]\r\n\r\nProjectors, tablets, and other devices, combined with depth cameras, enable applications to span multiple \u201cscreens.\u201d This work enables webpages to extend beyond your PC monitor and take advantage of all your devices in concert. For example, a karaoke webpage could use your phone, tablet, big-screen television, and projectors to provide an awesome entertainment experience. Webpages can also \u201csee\u201d objects in the room and respond to them, all while preserving privacy from the owner of the webpage.
Such a page could show you an ad near a Red Bull can, but nobody would be able to find out how much Red Bull you drink.\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/surroundweb-least-privilege-for-immersive-web-rooms\/\">Learn more &gt;&gt;<\/a>\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/surroundweb-spreading-the-web-to-multiple-screens\/\">Video<\/a>\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Tempe: Quick Answers from Large Data\"]\r\n\r\nTempe is an interactive system for exploring large data sets. It accelerates machine learning by facilitating quick, iterative feature engineering and data understanding. Tempe is a combination of three technologies:\r\n<ul>\r\n \t<li>Trill: a high-speed, temporal, progressive-relational stream-processing engine 100 times faster than StreamInsight.<\/li>\r\n \t<li>WINQ: a layer that emulates LINQ but provides progressive queries, yielding \u201cbest effort\u201d partial answers.<\/li>\r\n \t<li>Stat: an interactive, C# integrated development environment that enables users to visualize progressive answers.<\/li>\r\n<\/ul>\r\nThe combination of these technologies lets users try and discard queries quickly, enabling much faster exploration of large data sets.\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/quick-answers-from-large-data\/\">Video<\/a>\r\n\r\n[\/panel]\r\n\r\n[panel header=\"ViiBoard: Vision-Enhanced Immersive Interaction\"]\r\n\r\nViiBoard is a system for remote collaboration through a digital whiteboard (PPI) that gives participants an immersive, 3-D experience with enhanced touch capability. ViiBoard emulates writing side by side on a physical whiteboard or, alternatively, on a mirror, through 3-D processing of depth images and life-sized rendering.
Additional vision techniques, such as hand-gesture recognition, are integrated to understand users\u2019 intentions before they touch the board, simplifying interaction with a PPI, especially for content editing and presenting. Compared with standard video conferencing, ViiBoard gives participants a better ability to estimate their remote partners\u2019 eye-gaze direction, gesture direction, and intention. These capabilities translate into a heightened sense of being together and a more realistic experience.\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/viiboard-vision-enhanced-immersive-interaction\/\">Video #1<\/a>\u00a0| <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/immersive-collaborative-data-visualization\/\">Video #2<\/a>\u00a0| <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/project\/viiboard-vision-enhanced-immersive-interaction-with-touch-board\/\">Project page<\/a>\r\n\r\n[\/panel]\r\n\r\n[panel header=\"WaveFour: Social Analytics Platform for Businesses\"]\r\n\r\nGetting insights into what customers are saying about a product, and into who is saying it, is important for companies big and small. This project presents an analytics platform atop a real-time social network that can mine user behavior, automatically identify segments where unusual patterns emerge, and provide a possible explanation for the pattern.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"When Urban Air Quality Meets Big Data\"]\r\n\r\nUrban air quality, such as the concentration of PM2.5, is of great importance in protecting human health. While a city has only a limited number of air-quality monitoring stations, air quality varies significantly by location and is influenced by multiple complex factors, such as traffic flow and land use. Consequently, people cannot know the air quality of a location without a monitoring station.
This project infers real-time, fine-grained air-quality information throughout a city, based on air-quality data reported by existing monitoring stations and a variety of data sources observed in the city, such as meteorology, traffic flow, human mobility, the structure of road networks, and points of interest. This fine-grained air-quality information could help people figure out when and where to go jogging, or when they should shut the window or put on a face mask in locations where air quality is already a daily issue. This could lead to long-term solutions for predicting forthcoming air quality and identifying the root cause of air pollution.\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/when-urban-air-quality-meets-big-data\/\">Video<\/a>\r\n\r\n[\/panel]\r\n\r\n[\/accordion]"}],"msr_startdate":"2014-04-17","msr_enddate":"2014-04-17","msr_event_time":"","msr_location":"Microsoft Silicon Valley Campus","msr_event_link":"","msr_event_recording_link":"","msr_startdate_formatted":"April 17, 2014","msr_register_text":"Watch now","msr_cta_link":"","msr_cta_text":"","msr_cta_bi_name":"","featured_image_thumbnail":null,"event_excerpt":"Cortana, the world's first truly personal digital assistant, is available soon on Windows Phone 8.1. Powered by Bing, Cortana is driven by state-of-the-art algorithms using natural language, machine learning, and contextual signals that benefit from advances incubated at Microsoft Research. This approach uses the massive clickstream-feedback loop of web search, along with additional semantics and data. 
This approach enables Cortana to grow in breadth of domains that cover the web: from the more common \u201chead\u201d&hellip;","msr_research_lab":[],"related-researchers":[],"msr_impact_theme":[],"related-academic-programs":[],"related-groups":[],"related-projects":[],"related-opportunities":[],"related-publications":[],"related-videos":[],"related-posts":[],"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/199957","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-event"}],"version-history":[{"count":2,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/199957\/revisions"}],"predecessor-version":[{"id":1147387,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/199957\/revisions\/1147387"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=199957"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=199957"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=199957"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=199957"},{"taxonomy":"msr-video-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-video-type?post=199957"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=199957"},{"taxonomy":"msr-program-audience","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-program-audience?post=199957"},{"taxonomy"
:"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=199957"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=199957"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}