{"id":199731,"date":"2011-01-31T10:48:32","date_gmt":"2011-01-31T10:48:32","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/events\/techfest-2011\/"},"modified":"2025-08-06T12:02:44","modified_gmt":"2025-08-06T19:02:44","slug":"techfest-2011","status":"publish","type":"msr-event","link":"https:\/\/www.microsoft.com\/en-us\/research\/event\/techfest-2011\/","title":{"rendered":"TechFest 2011"},"content":{"rendered":"\n\n<p>The latest thinking.\u00a0 The freshest ideas.<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<p>TechFest is an annual event, for Microsoft employees and guests, that showcases the most exciting research from Microsoft Research&#8217;s locations around the world.\u00a0 Researchers share their latest work\u2014and the technologies emerging from those efforts.\u00a0 The event provides a forum in which product teams and researchers can interact, fostering the transfer of groundbreaking technologies into Microsoft products.<\/p>\n<p>We invite you to explore the projects and\u00a0watch the videos.\u00a0 Immerse yourself in TechFest content and see how today\u2019s future will become tomorrow\u2019s reality.<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span 
aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7720\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7720\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7719\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tFeature Story\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7719\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7720\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<h2>TechFest Focus: Natural User Interfaces<\/h2>\n<p>By Douglas Gantenbein\u00a0| March 8, 2011 9:00 AM PT<\/p>\n<p>For many people, using a computer still means using a keyboard and a mouse. 
But computers are becoming more like \u201cus\u201d\u2014better able to anticipate human needs, work with human preferences, even work on our behalf.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-313739 alignleft\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/01\/TechFest2011_Nui.jpg\" alt=\"techfest2011_nui\" width=\"238\" height=\"69\" \/>Computers, in short, are moving rapidly toward widespread adoption of natural user interfaces (NUIs)\u2014interfaces that are more intuitive, that are easier to use, and that adapt to human habits and wishes, rather than forcing humans to adapt to computers. Microsoft has been a driving force behind the adoption of NUI technology. The wildly successful <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/www.xbox.com\/en-US\/Kinect\">Kinect for Xbox 360<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> device\u2014launched in November 2010\u2014is a perfect example. It recognizes users, needs no controller to work, and understands what the user wants to do.<\/p>\n<p>It won\u2019t be long before more and more devices work in similar fashion. Microsoft Research is working closely with Microsoft business units to develop new products that take advantage of NUI technology. In the months and years to come, a growing number of Microsoft products will recognize voices and gestures, read facial expressions, and make computing easier, more intuitive, and more productive.<\/p>\n<p>TechFest 2011, Microsoft Research\u2019s annual showcase of forward-looking computer-science technology, will feature several projects that show how the move toward NUIs is progressing. 
On March 9 and 10, thousands of Microsoft employees will have a chance to view the research on display, talk with the researchers involved, and seek ways to incorporate that work into new products that could be used by millions of people worldwide.<\/p>\n<p>Not all the TechFest projects are NUI-related, of course. Microsoft Research investigates the possibilities in dozens of computer-science areas. But quite a few of the demos to be shown do shine a light on natural user interfaces, and each points to a new way to see or interact with the world. One demo shows how patients\u2019 medical images can be interpreted automatically, enhancing considerably the efficiency of a physician\u2019s work. One literally creates a new world\u2014instantly converting real objects into digital 3-D objects that can be manipulated by a real human hand. A third acts as a virtual drawing coach to would-be artists. And yet another enables a simple digital stylus to understand whether a person wants to draw with it, paint with it, or, perhaps, even play it like a saxophone.<\/p>\n<h2>Semantic Understanding of Medical Images<\/h2>\n<p>Healthcare professionals today are overwhelmed with the amount of medical imagery. X-rays, MRIs, CT, ultrasound, PET scans\u2014all are growing more common as diagnostic tools.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-313724 alignleft\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/01\/carotids-300x295.jpg\" alt=\"carotids\" width=\"300\" height=\"295\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/01\/carotids-300x295.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/01\/carotids.jpg 350w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/>But the sheer volume of these images also makes it more difficult to read and understand them in a timely fashion. 
To help make medical images easier to read and analyze, a team from <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/lab\/microsoft-research-cambridge\/\">Microsoft Research Cambridge<\/a> has created <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/project\/medical-image-analysis\/\">InnerEye<\/a>, a research project that uses the latest machine-learning techniques to speed image interpretation and improve diagnostic accuracy. InnerEye also has implications for improved treatments, such as enabling radiation oncologists to target treatment to tumors more precisely in sensitive areas such as the brain.<\/p>\n<p>In the case of radiation therapy, it can take hours for a radiation oncologist to outline the edge of tumors and healthy organs to be protected. InnerEye\u2014developed by researcher <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/antcrim\/\">Antonio Criminisi<\/a> and a team of colleagues that included Andrew Blake, Ender Konukoglu, Ben Glocker, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/asellen\/\">Abigail Sellen<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/tsharp\/\">Toby Sharp<\/a>, and <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jamiesho\/\">Jamie Shotton<\/a>\u2014greatly reduces the time needed to delineate accurately the boundaries of anatomical structures of interest in 3-D.<\/p>\n<p>To use InnerEye, a radiologist or clinician uses a computer pointer on a screen image of a medical scan to highlight a part of the body that requires treatment. InnerEye then employs algorithms developed by Criminisi and his colleagues to accurately define the 3-D surface of the selected organ. In the resulting image, the highlighted organ\u2014a kidney, for instance, or even a complete aorta\u2014seems to almost leap from the rest of the image. 
The organ delineation offers a quick way of assessing things such as organ volume, tissue density, and other information that aids diagnosis.<\/p>\n<p>InnerEye also enables extremely fast, intuitive visual navigation and inspection of 3-D images. A physician can navigate to an optimized view of the heart simply by clicking on the word \u201cheart,\u201d because the system already knows where each organ is. This yields considerable time savings, with big economic implications.<\/p>\n<p>The InnerEye project team also is investigating the use of Kinect in the operating theater. Surgeons often wish to view a patient\u2019s previously acquired CT or MR scans, but touching a mouse or keyboard could introduce germs. The InnerEye technology and Kinect help by automatically interpreting the surgeon\u2019s hand gestures. This enables the surgeon to navigate naturally through the patient\u2019s images.<\/p>\n<p>InnerEye has numerous potential applications in health care. Its automatic image analysis promises to make the work of surgeons, radiologists, and clinicians much more efficient\u2014and, possibly, more accurate. In cancer treatment, InnerEye could be used to evaluate a tumor quickly and compare it in size and shape with earlier images. The technology also could be used to help assess the number and location of brain lesions caused by multiple sclerosis.<\/p>\n<h2>Blurring the Line Between the Real and the Virtual<\/h2>\n<p>Breaking down the barrier between the real world and the virtual world is a staple of science fiction\u2014Avatar and The Matrix are but two recent examples. 
But technology is coming closer to actually blurring the line.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-313727 alignleft\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/01\/mirage_blocks-297x300.jpg\" alt=\"mirage_blocks\" width=\"297\" height=\"300\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/01\/mirage_blocks-297x300.jpg 297w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/01\/mirage_blocks.jpg 350w\" sizes=\"auto, (max-width: 297px) 100vw, 297px\" \/><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/lab\/microsoft-research-redmond\/\">Microsoft Research Redmond<\/a> researcher <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/benko\/\">Hrvoje Benko<\/a> and senior researcher <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/awilson\/\">Andy Wilson<\/a> have taken a step toward making the virtual real with a project called MirageBlocks. Its aim is to simplify the process of digitally capturing images of everyday objects and to convert them instantaneously to 3-D images. The goal is to create a virtual mirror of the physical world, one so readily understood that a MirageBlocks user could take an image of a brick and use it to create a virtual castle\u2014brick by brick.<\/p>\n<p>Capturing and visualizing objects in 3-D long has fascinated scientists, but new technology makes it more feasible. In particular, Kinect for Xbox 360 gave Benko and Wilson\u2014and intern Ricardo Jota\u2014an easy-to-use, $150 gadget that easily could capture the depth of an object with its multicamera design. 
Coupled with new-generation 3-D projectors and 3-D glasses, Kinect helps make MirageBlocks perhaps the most advanced tool ever for capturing and manipulating 3-D imagery.<\/p>\n<p>The MirageBlocks environment consists of a Kinect device, an Acer H5360 3-D projector, and Nvidia 3D Vision glasses synchronized to the projector\u2019s frame rate. The Kinect captures the object image and tracks the user\u2019s head position so that the virtual image is shown to the user with the correct perspective.<\/p>\n<p>Users enter MirageBlocks\u2019 virtual world by placing an object on a table top, where it is captured by the Kinect\u2019s cameras. The object is instantly digitized and projected back into the workspace as a 3-D virtual image. The user then can move or rotate the virtual object using an actual hand or a numbered keypad. A user can take duplicate objects, or different objects, to construct a virtual 3-D model. To the user, the virtual objects have the same depth and size as their physical counterparts.<\/p>\n<p>MirageBlocks has several real-world applications. It could apply an entirely new dimension to simulation games, enabling game players to create custom models or devices from a few digitized pieces or to digitize any object and place it in a virtual game. MirageBlocks\u2019 technology could change online shopping, enabling the projection of 3-D representations of an object. It could transform teleconferencing, enabling participants to examine and manipulate 3-D representations of products or prototypes. It might even be useful in health care\u2014an emergency-room physician, for instance, could use a 3-D image of a limb with a broken bone to correctly align the break.<\/p>\n<h2>Giving the Artistically Challenged a Helping Hand<\/h2>\n<p>It\u2019s fair to say that most people cannot draw well. But what if a computer could help by suggesting to the would-be artist certain lines to follow or shapes to create? 
That\u2019s the idea behind ShadowDraw, created by Larry Zitnick\u2014who works as a researcher in the <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/group\/interactive-visual-media\/\">Interactive Visual Media Group<\/a> at <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/lab\/microsoft-research-redmond\/\">Microsoft Research Redmond<\/a>\u2014and principal researcher Michael Cohen, with help from intern Yong Jae Lee from the University of Texas at Austin.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-313733 alignleft\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/01\/teasers-300x300.jpg\" alt=\"teasers\" width=\"300\" height=\"300\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/01\/teasers-300x300.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/01\/teasers-150x150.jpg 150w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/01\/teasers-180x180.jpg 180w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/01\/teasers.jpg 350w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/>In concept, ShadowDraw seems disarmingly simple. A user begins drawing an object\u2014a bicycle, for instance, or a face\u2014using a stylus-based Cintiq 21UX tablet. As the drawing progresses, ShadowDraw surmises the subject of the emerging drawing and begins to suggest refinements by generating a \u201cshadow\u201d behind the would-be artist\u2019s lines that resembles the drawn object. By taking advantage of ShadowDraw\u2019s suggestions, the user can create a more refined drawing than otherwise possible, while retaining the individuality of their pencil strokes and overall technique.<\/p>\n<p>The seeming simplicity of ShadowDraw, though, belies the substantial computing power being harnessed behind the screen. 
ShadowDraw is, at its heart, a database of 30,000 images culled from the Internet and other public sources. Edges are extracted from these original photographic images to provide stroke suggestions to the user.<\/p>\n<p>The main component created by the Microsoft Research team is an interactive drawing system that reacts to the user\u2019s pencil work in real time. ShadowDraw uses a novel, partial-matching approach that finds possible matches between different sub-sections of the user\u2019s drawing and the database of edge images. Think of ShadowDraw\u2019s behind-the-screen interface as a checkerboard\u2014each square where a user draws a line will generate its own set of possible matches that cumulatively vote on suggestions to help refine a user\u2019s work. The researchers also created a novel method for spatially blending the various stroke suggestions for the drawing.<\/p>\n<p>To test ShadowDraw, Zitnick and his co-researchers enlisted eight men and eight women. Each was asked to draw five subjects\u2014a shoe, a bicycle, a butterfly, a face, and a rabbit\u2014with and without ShadowDraw. The rabbit image was a control\u2014there were no rabbits in the database. When using ShadowDraw, the subjects were told they could use the suggested renderings or ignore them. And each subject was given 30 minutes to complete 10 drawings.<\/p>\n<p>A panel of eight additional subjects judged the drawings on a scale of one to five, with one representing \u201cpoor\u201d and five \u201cgood.\u201d The panelists found that ShadowDraw was of significant help to people with average drawing skills\u2014their drawings were significantly improved by ShadowDraw. Interestingly, the subjects rated as having poor or good drawing skills, pre-ShadowDraw, saw little improvement. Zitnick says the poor artists were so bad that ShadowDraw couldn\u2019t even guess what they were attempting to draw. 
The good artists already had sufficient skills to draw the test objects accurately.<\/p>\n<h2>Enabling One Pen to Simulate Many<\/h2>\n<p>Human beings have developed dozens of ways to render images on a piece of paper, a canvas, or another drawing surface. Pens, pencils, paintbrushes, crayons, and more\u2014all can be used to create images or the written word.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-313730 alignleft\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/01\/pen_hardware-300x155.jpg\" alt=\"pen_hardware\" width=\"300\" height=\"155\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/01\/pen_hardware-300x155.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/01\/pen_hardware.jpg 350w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/>Each, however, is held in a slightly different way. That can seem natural when using the device itself\u2014people learn to manage a paintbrush in a way different from how they use a pen or a pencil. But those differences can present a challenge when attempting to work with a computer. A single digital stylus or pen can serve many functions, but to do so typically requires the user to hold the stylus in the same manner, regardless of the tool the stylus is mimicking.<\/p>\n<p>A Microsoft Research team aimed to find a better way to design a computer stylus. 
The team\u2014which included researcher Xiang Cao in the <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/group\/human-computer-interaction-msra\/\">Human-Computer Interaction Group<\/a> at <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/lab\/microsoft-research-asia\/\">Microsoft Research Asia<\/a>; Shahram Izadi of Microsoft Research Cambridge; <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/benko\/\">Benko<\/a> and <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/kenh\/\">Ken Hinckley<\/a> of Microsoft Research Redmond; Minghi Sun, a Microsoft Research Cambridge intern; Hyunyoung Song of the University of Maryland; and Fran\u00e7ois Guimbreti\u00e8re of Cornell University\u2014asked the question: How can a digital pen or stylus be as natural to use as the varied physical tools people employ? The solution, to be shown as part of a demo called Recognizing Pen Grips for Natural UI, is a digital pen enhanced with a capacitive, multitouch sensor that knows where the user\u2019s hand touches the pen and an orientation sensor that knows at what angle the pen is held.<\/p>\n<p>With that information, the digital pen can recognize different grips and automatically behave like the desired tool. If a user holds the digital pen like a paintbrush, the pen automatically behaves like a paintbrush. Hold it like a pen, and it behaves like a pen\u2014with no need to manually turn a switch on the device or choose a different stylus mode.<\/p>\n<p>The implications of the technology are considerable. Musical instruments such as flutes or saxophones and many other objects all build on similar shapes. A digital stylus with grip and orientation sensors conceivably could duplicate them all, while enabling the user to hold the stylus in the manner that is most natural. 
Even game controllers could be adapted to modify their behavior depending on how they are held, whether as a driving device for auto-based games or as a weapon in games such as <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/halo.xbox.com\/en-us\">Halo<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7722\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7722\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7721\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tWhat is TechFest?\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7721\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7722\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-201848 alignleft\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/02\/en-us-events-techfest2011-crowdthumbnail3.jpg\" alt=\"crowdthumbnail3.jpg\" width=\"164\" height=\"200\" \/>The latest thinking.\u00a0\u00a0The freshest ideas<\/strong>.<\/p>\n<p>TechFest is an annual event, for Microsoft employees and guests,\u00a0that showcases the most exciting research from Microsoft Research&#8217;s <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/about\/\">locations<\/a> around the world.\u00a0 Researchers\u00a0share their latest work\u2014and the technologies emerging from those 
efforts.\u00a0 The event provides a forum in which product teams and researchers can interact, fostering the transfer of groundbreaking technologies into Microsoft products.<\/p>\n<p>We invite you to explore the projects, watch the videos, follow the buzz, and join the discussion on <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/www.facebook.com\/microsoftresearch\" target=\"_blank\">Facebook<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> and <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/x.com\/msftresearch\" target=\"_blank\">Twitter<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>.\u00a0 Immerse yourself in TechFest content and see how today\u2019s future will become tomorrow\u2019s reality.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-201847\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/02\/en-us-events-techfest2011-crowd2.jpg\" alt=\"crowd2.jpg\" width=\"525\" height=\"285\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/02\/en-us-events-techfest2011-crowd2.jpg 525w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/02\/en-us-events-techfest2011-crowd2-300x163.jpg 300w\" sizes=\"auto, (max-width: 525px) 100vw, 525px\" \/><\/p>\n<h3>In the News<\/h3>\n<ul type=\"disc\">\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/www.technologyreview.com\/computing\/35076\/page1\/\" target=\"_blank\">A search engine for the human body<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/li>\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/seattletimes.nwsource.com\/html\/businesstechnology\/2014437658_techfest09.html\" target=\"_blank\">Microsoft&#8217;s TechFest shows 
the distant, and near, future<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/li>\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/www.geekwire.com\/2011\/microsoft-research-project-aims-artistically-challenged#utm_source=feedburner&utm_medium=twitter&utm_campaign=feed:+geekwire+(geekwire)&utm_content=twitter\" target=\"_blank\">Microsoft Research aims to help the \u2018artistically challenged\u2019<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/li>\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/www.zdnet.com\/blog\/microsoft\/at-microsoft-nui-goes-beyond-fun-and-games\/8874\" target=\"_blank\">At Microsoft, NUI goes beyond fun and games<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/li>\n<\/ul>\n<h3>Discover More<\/h3>\n<ul type=\"disc\">\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/montagepages.fuselabs.com\/public\/RobertMao\/TechFest2011\/e4b5f0c8-f1f6-460c-b83e-0f80f1d87599.htm\" target=\"_blank\">Experience the TechFest 2011 Visual Album with Montage<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/li>\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/blog\/projecting-the-future-of-interaction\/\" target=\"_self\">About Microsoft Research<\/a><\/li>\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/event\/techfest-2010\/\" target=\"_self\">TechFest 2010<\/a><\/li>\n<\/ul>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-201846\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/02\/en-us-events-techfest2011-combinedsigns-300x235.png\" alt=\"combinedsigns.png\" width=\"245\" height=\"192\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/02\/en-us-events-techfest2011-combinedsigns-300x235.png 300w, 
https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/02\/en-us-events-techfest2011-combinedsigns.png 525w\" sizes=\"auto, (max-width: 245px) 100vw, 245px\" \/><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7724\"}' data-wp-init=\"callbacks.init\">\n\t\t<div 
class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7724\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7723\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\t3-D, Photo-Real Talking Head\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7723\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7724\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Our research showcases a new, 3-D, photo-real talking head with freely controlled head motions and facial expressions. It extends our prior, high-quality, 2-D, photo-real talking head to 3-D. First, we apply a 2-D-to-3-D reconstruction algorithm frame by frame on a 2-D video to construct a 3-D training database. In training, super-feature vectors consisting of 3-D geometry, texture, and speech are formed to train a statistical, multistreamed, Hidden Markov Model (HMM). The HMM then is used to synthesize both the trajectories of geometric animation and dynamic texture. The 3-D talking head can be animated by the geometric trajectory, while the facial expressions and articulator movements are rendered with dynamic texture sequences. Head motions and facial expression also can be separately controlled by manipulating corresponding parameters. The new 3-D talking head has many useful applications, such as voice agents, telepresence, gaming, and speech-to-speech translation. 
<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/techfest-demo-3d-photo-real-talking-head\/\">Learn more&#8230;<\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7726\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7726\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7725\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\t3-D Scanning with a Regular Camera\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7725\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7726\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>3-D television is creating a huge buzz in the consumer space, but the generation of 3-D content remains a largely professional endeavor. Our research demonstrates an easy-to-use system for creating photorealistic, 3-D-image-based models simply by walking around an object of interest with your phone, still camera, or video camera. The objects might be your custom car or motorcycle, a wedding cake or dress, a rare musical instrument, or a handcrafted artwork. 
Our system uses 3-D stereo matching techniques combined with image-based modeling and rendering to create a photorealistic model you can navigate simply by spinning it around on your screen, tablet, or mobile device.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7728\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7728\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7727\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tApplied Sciences Group: Smart Interactive Displays\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7727\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7728\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Steerable AutoStereo 3-D Display:<\/strong> We use a special, flat optical lens (Wedge) behind an LCD monitor to direct a narrow beam of light into each of a viewer\u2019s eyes. By using a Kinect head tracker, the user\u2019s relation to the display is tracked, and thereby, the prototype is able to steer that narrow beam to the user. The combination creates a 3-D image that is steered to the viewer without the need for glasses or holding your head in place.<\/p>\n<p><strong>Steerable Multiview Display:<\/strong> The same optical system used in the 3-D system, Wedge behind an LCD, is used to steer two separate images to two separate people rather than two separate eyes, as in the 3-D case. Using a Kinect head tracker, we find and track multiple viewers and send each viewer his or her own unique image. 
Therefore, two people can be looking at the same display but see two completely different images. If the two users switch positions, the same image is continuously steered toward them.<\/p>\n<p><strong>Retro-Reflective Air-Gesture Display:<\/strong> Sometimes, it\u2019s better to control with gestures than with buttons. Using a retro-reflective screen and a camera close to the projector makes all objects cast a shadow, regardless of their color. This makes it easy to apply computer-vision algorithms to sense above-screen gestures that can be used for control, navigation, and many other applications.<\/p>\n<p><strong>A Display That Can See:<\/strong> Using the flat Wedge optic in camera mode behind a special, transparent organic-light-emitting-diode display, we can capture images that are both on and above the display. This enables touch and above-screen gesture interfaces, as well as telepresence applications.<\/p>\n<p><strong>Kinect-Based Virtual Window:<\/strong> Using Kinect, we track a user\u2019s position relative to a 3-D display to create the illusion of looking through a window. This view-dependent rendering technique is used in both the Wedge 3-D and multiview demos, but the effect is much more apparent in this demo. The user will quickly realize the need for a multiview display, because this illusion is valid for only one user with a conventional display. This technique, along with the Wedge 3-D output and 3-D input techniques we are developing, is among the basic building blocks for the ultimate telepresence display. This Magic Window is a bidirectional, light-field, interactive display that gives multiple users in a telepresence session the illusion that they are interacting with and talking to each other through a simple glass window. 
<a href=\"https:\/\/www.microsoft.com\/appliedsciences\/content\/projects.aspx\">Learn more&#8230;<\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7730\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7730\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7729\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tCloud Data Analytics from Excel\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7729\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7730\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Excel is an established data-collection and data-analysis tool in business, technical computing, and academic research. Excel offers an attractive user interface, easy-to-use data entry, and substantial interactivity for what-if analysis. But data in Excel is not readily discoverable and, hence, does not promote data sharing. Moreover, Excel does not offer scalable computation for large-scale analytics. Increasingly, researchers encounter a deluge of data, and when working in Excel, it is not easy to invoke analytics to explore data, find related data sets, or invoke external models. Our project shows how we seamlessly integrate cloud storage and scalable analytics into Excel through a research ribbon. 
Any analyst can use our tool to discover and import data from the cloud, invoke cloud-scale data analytics to extract information from large data sets, invoke models, and then store data in the cloud\u2014all through a spreadsheet with which they are already familiar. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/excel-datascope-overview\/\">Learn more&#8230;<\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7732\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7732\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7731\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tControlling Home Heating with Occupancy Prediction\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7731\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7732\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Home heating uses more energy than any other residential energy expenditure, making increasing the efficiency of home heating an important goal for saving money and protecting the environment. We have built a home-heating system, PreHeat, that automatically programs your thermostat based on when you are home. PreHeat\u2019s goal is to reduce the amount of time a household\u2019s thermostat needs to be on without compromising the comfort of household members. PreHeat builds a predictive model of when the house is occupied and uses the model to optimize when the house is heated, to save energy without sacrificing comfort. 
Our system consists of Wi-Fi and passive, IR-based occupancy sensors; temperature sensors; heating-system controllers for U.S. forced-air systems and for U.K. water-filled radiators and under-floor heating; and PC-based control software using machine learning to predict schedules based on current and past occupancy. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/project\/preheat-controlling-home-heating-with-occupancy-prediction\/\">Learn more&#8230;<\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7734\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7734\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7733\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tFace Recognition in Video\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7733\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7734\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Face recognition in video is an emerging technology that will have great impact on user experience in fields such as television, gaming, and communication. In the near future, a television or an Xbox will be able to recognize people in the living room, home video will be annotated automatically and become searchable, and TV watchers will be able to get information about an unfamiliar actor, athlete, or singer just by pointing to the person on the screen. Our research showcases the face-recognition technology developed by iLabs. 
Our technology includes novel algorithms in face detection, recognition, and tracking. The research demonstrates semi-automatic labeling of videos, a novel TV-watching experience using faces in a video as hyperlinks to get more information, and automatic recognition of the person in front of the television, Xbox, or computer.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7736\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7736\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7735\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tFuzzy Contact Search for Windows Phone 7\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7735\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7736\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Mobile-phone users typically search for contacts in their contact list by keying in names or email IDs. Users frequently make various types of mistakes, including phonetic, transposition, deletion, and substitution errors, and, in the specific case of mobile phones, the nature of the input mechanism makes mistakes more probable. We propose a fuzzy-contact-search feature to help users find the right contacts despite making mistakes while keying in a query. The feature is based on the novel, hashing-based spelling-correction technology developed by Microsoft Research India. 
We support many languages, including English, French, German, Italian, Spanish, Portuguese, Polish, Dutch, Japanese, Russian, Arabic, Hebrew, Chinese, Korean, and Hindi. We have built a Windows Phone 7 app to demonstrate our fuzzy contact search. The solution is lightweight and can be used in any client-side contact-search scenario.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7738\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7738\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7737\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tHigh-Performance Cancer Screening\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7737\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7738\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Our research demonstrates high-performance, GPU-based 3-D rendering for colon-cancer screening. The VCViewer provides a gesture-based user interface for the navigation and analysis of 3-D images generated by computed-tomography (CT) scans for colon-cancer screening. This viewer is supported by a server-side volume-rendering engine implemented by Microsoft Research. Our work shows a real-world, life-saving medical application for this engine. In addition, we show high-performance, CPU-based image processing needed to prepare CT colonoscopy images for diagnostic viewing. 
This processing was developed at the 3-D Imaging Lab at Massachusetts General Hospital and has been adapted for task and data parallelism in collaboration with Microsoft Developer and Platform Evangelism, Microsoft Research, and Intel.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7740\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7740\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7739\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tInnerEye: Visual Recognition in the Hospital\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7739\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7740\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Our research shows how a single, underlying image-recognition algorithm can enable a multitude of clinical applications, such as semantic image navigation, multimodal image registration, quality control, content-based image search, and natural user interfaces for surgery, all enabled within the Microsoft Amalga unified intelligence system. 
<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/project\/medical-image-analysis\/\">Learn more&#8230;<\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7742\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7742\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7741\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tInteractive Information Visualizations\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7741\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7742\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Our research presents novel, interactive visualizations to help people understand large amounts of data:<\/p>\n<ul>\n<li>iSketchVis applies the familiar, collaborative features of a whiteboard interface to the accurate data-exploration capabilities of computer-aided data visualization. It enables people to sketch charts and explore their data visually, on a pen-based tablet\u2014or collaboratively, on whiteboards.<\/li>\n<li>NetCharts enables people to analyze large data sets consisting of multiple entity types with multiple attributes. It uses simple charts to show aggregated data. People can explore these aggregates by dragging them out to create new charts.<\/li>\n<li>Sets traditionally are represented by Euler diagrams with bubble-like shapes. This research presents two techniques to simplify Euler diagrams. In addition, we demonstrate LineSets, which uses a single, continuous curve to represent sets. 
It simplifies set intersections and offers multiple interactions.<\/li>\n<\/ul>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7744\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7744\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7743\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tMirageBlocks\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7743\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7744\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Our research demonstrates the use of 3-D projection, combined with a Kinect depth camera to capture and display 3-D objects. Any physical object brought into the demo can be digitized instantaneously and viewed in 3-D. For example, we show a simple modeling application in which complex 3-D models can be constructed with just a few wooden blocks by digitizing and adding one block at a time. This setup also can be used in telepresence scenarios, in which what is real on your collaborator\u2019s table is virtual\u20143-D projected\u2014on yours, and vice versa. Our work shows how simulating real-world physics behaviors can be used to manipulate virtual 3-D objects. 
Our research uses a 3-D projector with active shutter glasses.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7746\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7746\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7745\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tMobile Photography: Capture, Process, and View\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7745\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7746\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>The mobile phone is becoming the most popular consumer camera. While the benefits are quite clear, the mobile scenario presents several challenges. It is not always easy to capture good photos. Image-processing tools can improve photos after capture, but there are few tools tailored to on-phone image manipulation. We present phone-based image-enhancement tools that are tightly integrated with cloud services. 
Heavy computation is off-loaded to the cloud, which enables faster results without impacting the phone\u2019s performance.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7748\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7748\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7747\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tProject Emporia: Personalized News\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7747\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7748\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Project Emporia is a personalized news reader offering 250,000 articles daily as discovered through social news feeds. It combines state-of-the-art recommendation systems (Matchbox) with automatic content classification (ClickPredict) to enable users to fine-tune their news channels by category or a custom-keyword channel, combined with &#8220;more-like-this&#8221;\/&#8220;less-like-this&#8221; votes. 
It is available as a mobile client as well as on the web.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7750\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7750\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7749\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tRecognizing Pen Grips for Natural UI\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7749\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7750\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>By enabling multitouch sensing on a digital pen, we can recognize how the user is holding it. In the real world, people hold tools such as pens, paintbrushes, sketching pencils, knives, and compasses differently, and we enable a user to alter the grip on a digital pen to switch between functionalities. This enables a natural UI on the pen\u2014mode switches are no longer necessary. 
<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/recognizing-pen-grips-for-natural-user-interaction\/\">Learn more&#8230;<\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7752\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7752\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7751\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tRich Interactive Narratives\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7751\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7752\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Recent advances in visualization technologies have spawned a potent brew of visually rich applications that enable exploration over potentially large, complex data sets. Examples include GigaPan.org, Photosynth.net, PivotViewer, and WorldWide Telescope. At the same time, the narrative remains a dominant form for generating emotionally captivating content\u2014movies or novels\u2014or imparting complex knowledge, as in textbooks or journals. The Rich Interactive Narratives project aims to combine the compelling, time-tested narrative elements of multimedia storytelling with the information-rich, exploratory nature of the latest generation of information-visualization and -exploration technologies. We approach the problem not as a one-off application, Internet site, or proprietary framework, but rather as a data model that transcends a particular platform or technology. 
This has the potential to enable entirely new ways of creating, transforming, augmenting, and presenting rich interactive content. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/project\/rich-interactive-narratives\/\">Learn more&#8230;<\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7754\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7754\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7753\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tShadowDraw: Interactive Sketching Helper\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7753\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7754\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Do you want to be able to sketch or draw better? ShadowDraw is an interactive assistant for freehand drawing. It automatically recognizes what you\u2019re trying to draw and suggests new pen strokes for you to trace. As you draw new strokes, ShadowDraw refines its models in real time and provides new suggestions. ShadowDraw contains a large database of images with objects that a user might want to draw. The edges from any images that match the user\u2019s current drawing are merged and shown as suggested &#8220;shadow strokes.&#8221; The user then can trace these strokes to improve the drawing. 
<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/shadowdraw-real-time-user-guidance-for-freehand-drawing\/\">Learn more&#8230;<\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7756\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7756\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7755\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tSocial News Search for Companies\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7755\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7756\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Social News Search for Companies uses social public data to build a great news portal for companies. The curation of this page can be crowdsourced to improve the quality of results. We tackle two questions: How can we use social media to provide a rich, topical, searchable, living news dashboard for any given company, and can we build an environment where the curation of the sources of content for a company page is done by the users of the page rather than by an editor? 
<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/group\/future-social-experiences-fuse-labs\/\">Learn more&#8230;<\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<h2>Watch the TechFest 2011 Videos<\/h2>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7758\"}' data-wp-init=\"callbacks.init\">\n\t\t<div 
class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7758\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7757\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\t3-D Scanning with a regular camera or phone!\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7757\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7758\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/channel9.msdn.com\/posts\/TechFest-2011-3D-Scanning-with-a-regular-camera-or-phone\">Watch video<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p>3-D television is creating a huge buzz in the consumer space, but the generation of 3-D content remains a largely professional endeavor. Our research demonstrates an easy-to-use system for creating photorealistic, 3-D-image-based models simply by walking around an object of interest with your phone, still camera, or video camera. The objects might be your custom car or motorcycle, a wedding cake or dress, a rare musical instrument, or a hand-crafted artwork. 
Our system uses 3-D stereo matching techniques combined with image-based modeling and rendering to create a photorealistic model you can navigate simply by spinning it around on your screen, tablet, or mobile device.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7760\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7760\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7759\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\t3-D, Photo-Real Talking Head\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7759\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7760\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/3-d-photo-real-talking-head\/\">Watch video<\/a><\/p>\n<p>Dynamic texture mapping helps bypass the difficulties in rendering soft tissues like lips, tongue, eyes, and wrinkles, moving us one step closer to being able to create a more realistic personal avatar.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7762\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7762\"\n\t\t\t\tclass=\"btn 
btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7761\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tApplied Sciences Group: Smart Interactive Displays\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7761\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7762\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/applied-sciences-group-smart-interactive-displays\/\">Watch video<\/a><\/p>\n<p>Steven Bathiche, Director, Microsoft Applied Sciences, shares his team&#8217;s latest work on the next generation of Smart Interactive Displays.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7764\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7764\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7763\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tFacial Recognition in Videos\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7763\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7764\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/channel9.msdn.com\/posts\/TechFest-2011-Facial-Recognition-in-Videos\">Watch video<span 
class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p>Face recognition in video is an emerging technology that will have great impact on user experience in fields such as television, gaming, and communication. In the near future, a television or an Xbox will be able to recognize people in the living room, home video will be annotated automatically and become searchable, and TV watchers will be able to get information about an unfamiliar actor, athlete, or singer just by pointing to the person on the screen. Our research showcases the face-recognition technology developed by Innovation Labs. Our technology includes novel algorithms in face detection, recognition, and tracking. The research demonstrates semi-automatic labeling of videos, a novel TV-watching experience using faces in a video as hyperlinks to get more information, and automatic recognition of the person in front of the television, Xbox, or computer.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7766\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7766\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7765\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tHigh-Performance Cancer Screening\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7765\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7766\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/high-performance-cancer-screening\/\">Watch video<\/a><\/p>\n<p>See how 
a high-performance, 3-D rendering engine can be transformed into a real-world, life-saving medical application.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7768\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7768\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7767\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tInnerEye: Visual Recognition in the Hospital\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7767\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7768\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/blog\/innereye-visual-recognition-in-the-hospital\/\">Watch video<\/a><\/p>\n<p>InnerEye focuses on the analysis of patient scans using machine learning techniques for automatic detection and segmentation of healthy anatomy as well as anomalies.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7770\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7770\"\n\t\t\t\tclass=\"btn 
btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7769\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tMirageBlocks\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7769\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7770\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/miragetable-freehand-interaction-on-a-projected-augmented-reality-tabletop\/\">Watch video<\/a><\/p>\n<p>See how simulating real-world physics behaviors can be used to manipulate virtual 3-D objects using 3-D projection and a Kinect depth camera.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7772\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7772\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7771\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tMobile Photography: Capture, Process, and View\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7771\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7772\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" 
href=\"https:\/\/channel9.msdn.com\/posts\/TechFest-2011-Mobile-Photography-Capture-process-and-View\">Watch video<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p>The mobile phone is becoming the most popular consumer camera. While the benefits are quite clear, the mobile scenario presents several challenges. It is not always easy to capture good photos. Image-processing tools can improve photos after capture, but there are few tools tailored to on-phone image manipulation. We present phone-based image-enhancement tools that are tightly integrated with cloud services. Heavy computation is off-loaded to the cloud, which enables faster results without impacting the phone\u2019s performance.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7774\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7774\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7773\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tShadowDraw\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7773\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7774\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/shadowdraw-real-time-user-guidance-for-freehand-drawing\/\">Watch video<\/a><\/p>\n<p>This research project delivers an interactive assistant for freehand drawing, recognizing what you\u2019re trying to draw and suggesting traceable pen strokes to improve your drawing.<\/p>\n<p><span 
id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7776\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7776\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7775\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tA montage of impact made by Microsoft Research\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7775\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7776\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/a-montage-of-impact-made-by-microsoft-research\/\">Watch video<\/a><\/p>\n<p>Nearly every product that Microsoft ships includes technology from Microsoft Research. Through exploration and collaboration with product groups and academic institutions, Microsoft Research advances the state of the art of computing.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n","protected":false},"excerpt":{"rendered":"<p>TechFest is an annual event, for Microsoft employees and guests, that showcases the most exciting research from Microsoft Research&#8217;s locations around the world.  Researchers share their latest work\u2014and the technologies emerging from those efforts.  
The event provides a forum in which product teams and researchers can interact, fostering the transfer of groundbreaking technologies into Microsoft products.<\/p>\n","protected":false},"featured_media":0,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr_startdate":"2011-03-08","msr_enddate":"2011-03-08","msr_location":"Redmond, WA, U.S.","msr_expirationdate":"","msr_event_recording_link":"","msr_event_link":"","msr_event_link_redirect":false,"msr_event_time":"","msr_hide_region":false,"msr_private_event":true,"msr_hide_image_in_river":0,"footnotes":""},"research-area":[13562],"msr-region":[256048],"msr-event-type":[197941,197944],"msr-video-type":[],"msr-locale":[268875],"msr-program-audience":[],"msr-post-option":[],"msr-impact-theme":[],"class_list":["post-199731","msr-event","type-msr-event","status-publish","hentry","msr-research-area-computer-vision","msr-region-global","msr-event-type-conferences","msr-event-type-hosted-by-microsoft","msr-locale-en_us"],"msr_about":"<!-- wp:msr\/event-details {\"title\":\"TechFest 2011\",\"backgroundColor\":\"grey\"} \/-->\n\n<!-- wp:msr\/content-tabs --><!-- wp:msr\/content-tab {\"title\":\"Summary\"} --><!-- wp:freeform --><p>The latest thinking.\u00a0 The freshest ideas.<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<p>TechFest is an annual event, for Microsoft employees and guests, that showcases the most exciting research from Microsoft Research&#8217;s locations around the world.\u00a0 Researchers share their latest work\u2014and the technologies emerging from those efforts.\u00a0 The event provides a forum in which product teams and researchers can interact, fostering the transfer of groundbreaking technologies into Microsoft products.<\/p>\n<p>We invite you to explore the projects and\u00a0watch the videos.\u00a0 Immerse yourself in 
TechFest content and see how today\u2019s future will become tomorrow\u2019s reality.<\/p>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7720\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7720\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7719\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tFeature 
Story\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7719\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7720\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<h2>TechFest Focus: Natural User Interfaces<\/h2>\n<p>By Douglas Gantenbein\u00a0| March 8, 2011 9:00 AM PT<\/p>\n<p>For many people, using a computer still means using a keyboard and a mouse. But computers are becoming more like \u201cus\u201d\u2014better able to anticipate human needs, work with human preferences, even work on our behalf.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-313739 alignleft\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/01\/TechFest2011_Nui.jpg\" alt=\"techfest2011_nui\" width=\"238\" height=\"69\" \/>Computers, in short, are moving rapidly toward widespread adoption of natural user interfaces (NUIs)\u2014interfaces that are more intuitive, that are easier to use, and that adapt to human habits and wishes, rather than forcing humans to adapt to computers. Microsoft has been a driving force behind the adoption of NUI technology. The wildly successful <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/www.xbox.com\/en-US\/Kinect\">Kinect for Xbox 360<\/a> device\u2014launched in November 2010\u2014is a perfect example. It recognizes users, needs no controller to work, and understands what the user wants to do.<\/p>\n<p>It won\u2019t be long before more and more devices work in similar fashion. Microsoft Research is working closely with Microsoft business units to develop new products that take advantage of NUI technology. 
In the months and years to come, a growing number of Microsoft products will recognize voices and gestures, read facial expressions, and make computing easier, more intuitive, and more productive.<\/p>\n<p>TechFest 2011, Microsoft Research\u2019s annual showcase of forward-looking computer-science technology, will feature several projects that show how the move toward NUIs is progressing. On March 9 and 10, thousands of Microsoft employees will have a chance to view the research on display, talk with the researchers involved, and seek ways to incorporate that work into new products that could be used by millions of people worldwide.<\/p>\n<p>Not all the TechFest projects are NUI-related, of course. Microsoft Research investigates the possibilities in dozens of computer-science areas. But quite a few of the demos to be shown do shine a light on natural user interfaces, and each points to a new way to see or interact with the world. One demo shows how patients\u2019 medical images can be interpreted automatically, enhancing considerably the efficiency of a physician\u2019s work. One literally creates a new world\u2014instantly converting real objects into digital 3-D objects that can be manipulated by a real human hand. A third acts as a virtual drawing coach to would-be artists. And yet another enables a simple digital stylus to understand whether a person wants to draw with it, paint with it, or, perhaps, even play it like a saxophone.<\/p>\n<h2>Semantic Understanding of Medical Images<\/h2>\n<p>Healthcare professionals today are overwhelmed with the amount of medical imagery. 
X-rays, MRIs, CT, ultrasound, PET scans\u2014all are growing more common as diagnostic tools.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-313724 alignleft\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/01\/carotids-300x295.jpg\" alt=\"carotids\" width=\"300\" height=\"295\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/01\/carotids-300x295.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/01\/carotids.jpg 350w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/>But the sheer volume of these images also makes it more difficult to read and understand them in a timely fashion. To help make medical images easier to read and analyze, a team from <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/lab\/microsoft-research-cambridge\/\">Microsoft Research Cambridge<\/a> has created <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/project\/medical-image-analysis\/\">InnerEye<\/a>, a research project that uses the latest machine-learning techniques to speed image interpretation and improve diagnostic accuracy. InnerEye also has implications for improved treatments, such as enabling radiation oncologists to target treatment to tumors more precisely in sensitive areas such as the brain.<\/p>\n<p>In the case of radiation therapy, it can take hours for a radiation oncologist to outline the edge of tumors and healthy organs to be protected. 
InnerEye\u2014developed by researcher <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/antcrim\/\">Antonio Criminisi<\/a> and a team of colleagues that included Andrew Blake, Ender Konukoglu, Ben Glocker, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/asellen\/\">Abigail Sellen<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/tsharp\/\">Toby Sharp<\/a>, and <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jamiesho\/\">Jamie Shotton<\/a>\u2014greatly reduces the time needed to delineate accurately the boundaries of anatomical structures of interest in 3-D.<\/p>\n<p>To use InnerEye, a radiologist or clinician uses a computer pointer on a screen image of a medical scan to highlight a part of the body that requires treatment. InnerEye then employs algorithms developed by Criminisi and his colleagues to accurately define the 3-D surface of the selected organ. In the resulting image, the highlighted organ\u2014a kidney, for instance, or even a complete aorta\u2014seems to almost leap from the rest of the image. The organ delineation offers a quick way of assessing things such as organ volume, tissue density, and other information that aids diagnosis.<\/p>\n<p>InnerEye also enables extremely fast, intuitive visual navigation and inspection of 3-D images. A physician can navigate to an optimized view of the heart simply by clicking on the word \u201cheart,\u201d because the system already knows where each organ is. This yields considerable time savings, with big economic implications.<\/p>\n<p>The InnerEye project team also is investigating the use of Kinect in the operating theater. Surgeons often wish to view a patient\u2019s previously acquired CT or MR scans, but touching a mouse or keyboard could introduce germs. The InnerEye technology and Kinect help by automatically interpreting the surgeon\u2019s hand gestures. 
This enables the surgeon to navigate naturally through the patient\u2019s images.<\/p>\n<p>InnerEye has numerous potential applications in health care. Its automatic image analysis promises to make the work of surgeons, radiologists, and clinicians much more efficient\u2014and, possibly, more accurate. In cancer treatment, InnerEye could be used to evaluate a tumor quickly and compare it in size and shape with earlier images. The technology also could be used to help assess the number and location of brain lesions caused by multiple sclerosis.<\/p>\n<h2>Blurring the Line Between the Real and the Virtual<\/h2>\n<p>Breaking down the barrier between the real world and the virtual world is a staple of science fiction\u2014Avatar and The Matrix are but two recent examples. But technology is coming closer to actually blurring the line.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-313727 alignleft\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/01\/mirage_blocks-297x300.jpg\" alt=\"mirage_blocks\" width=\"297\" height=\"300\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/01\/mirage_blocks-297x300.jpg 297w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/01\/mirage_blocks.jpg 350w\" sizes=\"auto, (max-width: 297px) 100vw, 297px\" \/><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/lab\/microsoft-research-redmond\/\">Microsoft Research Redmond<\/a> researcher <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/benko\/\">Hrvoje Benko<\/a> and senior researcher <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/awilson\/\">Andy Wilson<\/a> have taken a step toward making the virtual real with a project called MirageBlocks. Its aim is to simplify the process of digitally capturing images of everyday objects and to convert them instantaneously to 3-D images. 
The goal is to create a virtual mirror of the physical world, one so readily understood that a MirageBlocks user could take an image of a brick and use it to create a virtual castle\u2014brick by brick.<\/p>\n<p>Capturing and visualizing objects in 3-D long has fascinated scientists, but new technology makes it more feasible. In particular, Kinect for Xbox 360 gave Benko and Wilson\u2014and intern Ricardo Jota\u2014an easy-to-use, $150 gadget that easily could capture the depth of an object with its multicamera design. Coupled with new-generation 3-D projectors and 3-D glasses, Kinect helps make MirageBlocks perhaps the most advanced tool ever for capturing and manipulating 3-D imagery.<\/p>\n<p>The MirageBlocks environment consists of a Kinect device, an Acer H5360 3-D projector, and Nvidia 3D Vision glasses synchronized to the projector\u2019s frame rate. The Kinect captures the object image and tracks the user\u2019s head position so that the virtual image is shown to the user with the correct perspective.<\/p>\n<p>Users enter MirageBlocks\u2019 virtual world by placing an object on a table top, where it is captured by the Kinect\u2019s cameras. The object is instantly digitized and projected back into the workspace as a 3-D virtual image. The user then can move or rotate the virtual object using an actual hand or a numbered keypad. A user can take duplicate objects, or different objects, to construct a virtual 3-D model. To the user, the virtual objects have the same depth and size as their physical counterparts.<\/p>\n<p>MirageBlocks has several real-world applications. It could apply an entirely new dimension to simulation games, enabling game players to create custom models or devices from a few digitized pieces or to digitize any object and place it in a virtual game. MirageBlocks\u2019 technology could change online shopping, enabling the projection of 3-D representations of an object. 
It could transform teleconferencing, enabling participants to examine and manipulate 3-D representations of products or prototypes. It might even be useful in health care\u2014an emergency-room physician, for instance, could use a 3-D image of a limb with a broken bone to correctly align the break.<\/p>\n<h2>Giving the Artistically Challenged a Helping Hand<\/h2>\n<p>It\u2019s fair to say that most people cannot draw well. But what if a computer could help by suggesting to the would-be artist certain lines to follow or shapes to create? That\u2019s the idea behind ShadowDraw, created by Larry Zitnick\u2014who works as a researcher in the <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/group\/interactive-visual-media\/\">Interactive Visual Media Group<\/a> at <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/lab\/microsoft-research-redmond\/\">Microsoft Research Redmond<\/a>\u2014and principal researcher Michael Cohen, with help from intern Yong Jae Lee from the University of Texas at Austin.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-313733 alignleft\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/01\/teasers-300x300.jpg\" alt=\"teasers\" width=\"300\" height=\"300\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/01\/teasers-300x300.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/01\/teasers-150x150.jpg 150w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/01\/teasers-180x180.jpg 180w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/01\/teasers.jpg 350w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/>In concept, ShadowDraw seems disarmingly simple. A user begins drawing an object\u2014a bicycle, for instance, or a face\u2014using a stylus-based Cintiq 21UX tablet. 
As the drawing progresses, ShadowDraw surmises the subject of the emerging drawing and begins to suggest refinements by generating a \u201cshadow\u201d behind the would-be artist\u2019s lines that resembles the drawn object. By taking advantage of ShadowDraw\u2019s suggestions, the user can create a more refined drawing than otherwise possible, while retaining the individuality of their pencil strokes and overall technique.<\/p>\n<p>The seeming simplicity of ShadowDraw, though, belies the substantial computing power being harnessed behind the screen. ShadowDraw is, at its heart, a database of 30,000 images culled from the Internet and other public sources. Edges are extracted from these original photographic images to provide stroke suggestions to the user.<\/p>\n<p>The main component created by the Microsoft Research team is an interactive drawing system that reacts to the user\u2019s pencil work in real time. ShadowDraw uses a novel, partial-matching approach that finds possible matches between different sub-sections of the user\u2019s drawing and the database of edge images. Think of ShadowDraw\u2019s behind-the-screen interface as a checkerboard\u2014each square where a user draws a line will generate its own set of possible matches that cumulatively vote on suggestions to help refine a user\u2019s work. The researchers also created a novel method for spatially blending the various stroke suggestions for the drawing.<\/p>\n<p>To test ShadowDraw, Zitnick and his co-researchers enlisted eight men and eight women. Each was asked to draw five subjects\u2014a shoe, a bicycle, a butterfly, a face, and a rabbit\u2014with and without ShadowDraw. The rabbit image was a control\u2014there were no rabbits in the database. When using ShadowDraw, the subjects were told they could use the suggested renderings or ignore them. 
And each subject was given 30 minutes to complete 10 drawings.<\/p>\n<p>A panel of eight additional subjects judged the drawings on a scale of one to five, with one representing \u201cpoor\u201d and five \u201cgood.\u201d The panelists found that ShadowDraw was of significant help to people with average drawing skills\u2014their drawings were significantly improved by ShadowDraw. Interestingly, the subjects rated as having poor or good drawing skills, pre-ShadowDraw, saw little improvement. Zitnick says the poor artists were so bad that ShadowDraw couldn\u2019t even guess what they were attempting to draw. The good artists already had sufficient skills to draw the test objects accurately.<\/p>\n<h2>Enabling One Pen to Simulate Many<\/h2>\n<p>Human beings have developed dozens of ways to render images on a piece of paper, a canvas, or another drawing surface. Pens, pencils, paintbrushes, crayons, and more\u2014all can be used to create images or the written word.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-313730 alignleft\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/01\/pen_hardware-300x155.jpg\" alt=\"pen_hardware\" width=\"300\" height=\"155\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/01\/pen_hardware-300x155.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/01\/pen_hardware.jpg 350w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/>Each, however, is held in a slightly different way. That can seem natural when using the device itself\u2014people learn to manage a paintbrush in a way different from how they use a pen or a pencil. But those differences can present a challenge when attempting to work with a computer. 
A single digital stylus or pen can serve many functions, but to do so typically requires the user to hold the stylus in the same manner, regardless of the tool the stylus is mimicking.<\/p>\n<p>A Microsoft Research team aimed to find a better way to design a computer stylus. The team\u2014which included researcher Xiang Cao in the <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/group\/human-computer-interaction-msra\/\">Human-Computer Interaction Group<\/a> at <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/lab\/microsoft-research-asia\/\">Microsoft Research Asia<\/a>; Shahram Izadi of Microsoft Research Cambridge; <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/benko\/\">Benko<\/a> and <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/kenh\/\">Ken Hinckley<\/a> of Microsoft Research Redmond; Minghi Sun, a Microsoft Research Cambridge intern; Hyunyoung Song of the University of Maryland; and Fran\u00e7ois Guimbreti\u00e8re of Cornell University\u2014asked the question: How can a digital pen or stylus be as natural to use as the varied physical tools people employ? The solution, to be shown as part of a demo called Recognizing Pen Grips for Natural UI: a digital pen enhanced with a capacitive, multitouch sensor that knows where the user\u2019s hand touches the pen and an orientation sensor that knows at what angle the pen is held.<\/p>\n<p>With that information, the digital pen can recognize different grips and automatically behave like the desired tool. If a user holds the digital pen like a paintbrush, the pen automatically behaves like a paintbrush. Hold it like a pen, and it behaves like a pen\u2014with no need to manually flip a switch on the device or choose a different stylus mode.<\/p>\n<p>The implications of the technology are considerable. Musical instruments such as flutes or saxophones and many other objects all build on similar shapes. 
A digital stylus with grip and orientation sensors conceivably could duplicate all, while enabling the user to hold the stylus in the manner that is most natural. Even game controllers could be adapted to modify their behavior depending on how they are held, whether as a driving device for auto-based games or as a weapon in games such as <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/halo.xbox.com\/en-us\">Halo<\/a>.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7722\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7722\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7721\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tWhat is TechFest?\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7721\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7722\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-201848 alignleft\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/02\/en-us-events-techfest2011-crowdthumbnail3.jpg\" alt=\"crowdthumbnail3.jpg\" width=\"164\" height=\"200\" \/>The latest thinking.\u00a0\u00a0The freshest ideas<\/strong>.<\/p>\n<p>TechFest is an annual event, for Microsoft employees and guests,\u00a0that showcases the most exciting research from Microsoft Research&#8217;s <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/about\/\">locations<\/a> 
around the world.\u00a0 Researchers\u00a0share their latest work\u2014and the technologies emerging from those efforts.\u00a0 The event provides a forum in which product teams and researchers can interact, fostering the transfer of groundbreaking technologies into Microsoft products.<\/p>\n<p>We invite you to explore the projects, watch the videos, follow the buzz, and join the discussion on <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/www.facebook.com\/microsoftresearch\" target=\"_blank\">Facebook<\/a> and <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/x.com\/msftresearch\" target=\"_blank\">Twitter<\/a>.\u00a0 Immerse yourself in TechFest content and see how today\u2019s future will become tomorrow\u2019s reality.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-201847\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/02\/en-us-events-techfest2011-crowd2.jpg\" alt=\"crowd2.jpg\" width=\"525\" height=\"285\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/02\/en-us-events-techfest2011-crowd2.jpg 525w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/02\/en-us-events-techfest2011-crowd2-300x163.jpg 300w\" sizes=\"auto, (max-width: 525px) 100vw, 525px\" \/><\/p>\n<h3>In the News<\/h3>\n<ul type=\"disc\">\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/www.technologyreview.com\/computing\/35076\/page1\/\" target=\"_blank\">A search engine for the human body<\/a><\/li>\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/seattletimes.nwsource.com\/html\/businesstechnology\/2014437658_techfest09.html\" target=\"_blank\">Microsoft&#8217;s TechFest shows the distant, and near, 
future<\/a><\/li>\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/www.geekwire.com\/2011\/microsoft-research-project-aims-artistically-challenged#utm_source=feedburner&amp;utm_medium=twitter&amp;utm_campaign=feed:+geekwire+(geekwire)&amp;utm_content=twitter\" target=\"_blank\">Microsoft Research aims to help the \u2018artistically challenged\u2019<\/a><\/li>\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/www.zdnet.com\/blog\/microsoft\/at-microsoft-nui-goes-beyond-fun-and-games\/8874\" target=\"_blank\">At Microsoft, NUI goes beyond fun and games<\/a><\/li>\n<\/ul>\n<h3>Discover More<\/h3>\n<ul type=\"disc\">\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/montagepages.fuselabs.com\/public\/RobertMao\/TechFest2011\/e4b5f0c8-f1f6-460c-b83e-0f80f1d87599.htm\" target=\"_blank\">Experience the TechFest 2011 Visual Album with Montage<\/a><\/li>\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/blog\/projecting-the-future-of-interaction\/\" target=\"_self\">About Microsoft Research<\/a><\/li>\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/event\/techfest-2010\/\" target=\"_self\">TechFest 2010<\/a><\/li>\n<\/ul>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-201846\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/02\/en-us-events-techfest2011-combinedsigns-300x235.png\" alt=\"combinedsigns.png\" width=\"245\" height=\"192\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/02\/en-us-events-techfest2011-combinedsigns-300x235.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/02\/en-us-events-techfest2011-combinedsigns.png 525w\" sizes=\"auto, (max-width: 245px) 100vw, 245px\" \/><\/p>\n<p><span id=\"label-external-link\" 
class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<!-- \/wp:freeform --><!-- \/wp:msr\/content-tab --><!-- wp:msr\/content-tab {\"title\":\"Projects\"} --><!-- wp:freeform --><p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7724\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7724\"\n\t\t\t\tclass=\"btn 
btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7723\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\t3-D, Photo-Real Talking Head\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7723\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7724\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Our research showcases a new, 3-D, photo-real talking head with freely controlled head motions and facial expressions. It extends our prior, high-quality, 2-D, photo-real talking head to 3-D. First, we apply a 2-D-to-3-D reconstruction algorithm frame by frame on a 2-D video to construct a 3-D training database. In training, super-feature vectors consisting of 3-D geometry, texture, and speech are formed to train a statistical, multistreamed, Hidden Markov Model (HMM). The HMM then is used to synthesize both the trajectories of geometric animation and dynamic texture. The 3-D talking head can be animated by the geometric trajectory, while the facial expressions and articulator movements are rendered with dynamic texture sequences. Head motions and facial expression also can be separately controlled by manipulating corresponding parameters. The new 3-D talking head has many useful applications, such as voice agents, telepresence, gaming, and speech-to-speech translation. 
<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/techfest-demo-3d-photo-real-talking-head\/\">Learn more&#8230;<\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7726\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7726\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7725\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\t3-D Scanning with a Regular Camera\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7725\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7726\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>3-D television is creating a huge buzz in the consumer space, but the generation of 3-D content remains a largely professional endeavor. Our research demonstrates an easy-to-use system for creating photorealistic, 3-D-image-based models simply by walking around an object of interest with your phone, still camera, or video camera. The objects might be your custom car or motorcycle, a wedding cake or dress, a rare musical instrument, or a handcrafted artwork. 
Our system uses 3-D stereo matching techniques combined with image-based modeling and rendering to create a photorealistic model you can navigate simply by spinning it around on your screen, tablet, or mobile device.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7728\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7728\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7727\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tApplied Sciences Group: Smart Interactive Displays\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7727\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7728\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><strong>Steerable AutoStereo 3-D Display:<\/strong> We use a special, flat optical lens (Wedge) behind an LCD monitor to direct a narrow beam of light into each of a viewer\u2019s eyes. By using a Kinect head tracker, the user\u2019s relation to the display is tracked, and thereby, the prototype is able to steer that narrow beam to the user. The combination creates a 3-D image that is steered to the viewer without the need for glasses or holding your head in place.<\/p>\n<p><strong>Steerable Multiview Display:<\/strong> The same optical system used in the 3-D system, Wedge behind an LCD, is used to steer two separate images to two separate people rather than two separate eyes, as in the 3-D case. Using a Kinect head tracker, we find and track multiple viewers and send each viewer his or her own unique image. 
Therefore, two people can be looking at the same display but see two completely different images. If the two users switch positions, each viewer\u2019s image is continuously steered toward him or her.<\/p>\n<p><strong>Retro-Reflective Air-Gesture Display:<\/strong> Sometimes, it\u2019s better to control with gestures than with buttons. Using a retro-reflective screen and a camera close to the projector makes all objects cast a shadow, regardless of their color. This makes it easy to apply computer-vision algorithms to sense above-screen gestures that can be used for control, navigation, and many other applications.<\/p>\n<p><strong>A Display That Can See:<\/strong> Using the flat Wedge optic in camera mode behind a special, transparent organic-light-emitting-diode display, we can capture images that are both on and above the display. This enables touch and above-screen gesture interfaces, as well as telepresence applications.<\/p>\n<p><strong>Kinect-Based Virtual Window:<\/strong> Using Kinect, we track a user\u2019s position relative to a 3-D display to create the illusion of looking through a window. This view-dependent rendering technique is used in both the Wedge 3-D and multiview demos, but the effect is much more apparent in this demo. The user quickly realizes the need for a multiview display, because with a conventional display the illusion holds for only one user. This technique and the Wedge 3-D output and 3-D input techniques we are developing are the basic building blocks for the ultimate telepresence display. This Magic Window is a bidirectional, light-field, interactive display that gives multiple users in a telepresence session the illusion that they are interacting with and talking to each other through a simple glass window. 
<a href=\"https:\/\/www.microsoft.com\/appliedsciences\/content\/projects.aspx\">Learn more&#8230;<\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7730\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7730\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7729\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tCloud Data Analytics from Excel\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7729\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7730\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Excel is an established data-collection and data-analysis tool in business, technical computing, and academic research. Excel offers an attractive user interface, easy-to-use data entry, and substantial interactivity for what-if analysis. But data in Excel is not readily discoverable and, hence, does not promote data sharing. Moreover, Excel does not offer scalable computation for large-scale analytics. Increasingly, researchers encounter a deluge of data, and when working in Excel, it is not easy to invoke analytics to explore data, find related data sets, or invoke external models. Our project shows how we seamlessly integrate cloud storage and scalable analytics into Excel through a research ribbon. 
Any analyst can use our tool to discover and import data from the cloud, invoke cloud-scale data analytics to extract information from large data sets, invoke models, and then store data in the cloud\u2014all through a spreadsheet with which they are already familiar. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/excel-datascope-overview\/\">Learn more&#8230;<\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7732\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7732\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7731\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tControlling Home Heating with Occupancy Prediction\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7731\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7732\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Home heating consumes more energy than any other residential energy expenditure, making heating efficiency an important goal for saving money and protecting the environment. We have built a home-heating system, PreHeat, that automatically programs your thermostat based on when you are home. PreHeat builds a predictive model of when the house is occupied and uses the model to optimize when the house is heated, reducing the time the thermostat needs to be on without compromising the comfort of household members. 
Our system consists of Wi-Fi and passive, IR-based occupancy sensors; temperature sensors; heating-system controllers for U.S. forced-air systems and for U.K. water-filled radiators and under-floor heating; and PC-based control software using machine learning to predict schedules based on current and past occupancy. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/project\/preheat-controlling-home-heating-with-occupancy-prediction\/\">Learn more&#8230;<\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7734\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7734\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7733\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tFace Recognition in Video\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7733\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7734\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Face recognition in video is an emerging technology that will have great impact on user experience in fields such as television, gaming, and communication. In the near future, a television or an Xbox will be able to recognize people in the living room, home video will be annotated automatically and become searchable, and TV watchers will be able to get information about an unfamiliar actor, athlete, or singer just by pointing to the person on the screen. Our research showcases the face-recognition technology developed by iLabs. 
Our technology includes novel algorithms in face detection, recognition, and tracking. The research demonstrates semi-automatic labeling of videos, a novel TV-watching experience using faces in a video as hyperlinks to get more information, and automatic recognition of the person in front of the television, Xbox, or computer.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7736\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7736\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7735\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tFuzzy Contact Search for Windows Phone 7\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7735\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7736\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Mobile-phone users typically search for contacts in their contact list by keying in names or email IDs. Users frequently make various types of mistakes, including phonetic, transposition, deletion, and substitution errors, and, in the specific case of mobile phones, the nature of the input mechanism makes mistakes more probable. We propose a fuzzy-contact-search feature to help users find the right contacts despite making mistakes while keying in a query. The feature is based on the novel, hashing-based spelling-correction technology developed by Microsoft Research India. 
We support many languages, including English, French, German, Italian, Spanish, Portuguese, Polish, Dutch, Japanese, Russian, Arabic, Hebrew, Chinese, Korean, and Hindi. We have built a Windows Phone 7 app to demonstrate our fuzzy contact search. The solution is lightweight and can be used in any client-side contact-search scenario.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7738\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7738\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7737\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tHigh-Performance Cancer Screening\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7737\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7738\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Our research demonstrates high-performance, GPU-based 3-D rendering for colon-cancer screening. The VCViewer provides a gesture-based user interface for the navigation and analysis of 3-D images generated by computed-tomography (CT) scans for colon-cancer screening. This viewer is supported by a server-side volume-rendering engine implemented by Microsoft Research. Our work shows a real-world, life-saving medical application for this engine. In addition, we show high-performance, CPU-based image processing needed to prepare CT colonoscopy images for diagnostic viewing. 
This processing was developed at the 3-D Imaging Lab at Massachusetts General Hospital and has been adapted for task and data parallelism in joint collaboration with Microsoft Developer and Platform Evangelism, Microsoft Research, and Intel.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7740\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7740\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7739\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tInnerEye: Visual Recognition in the Hospital\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7739\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7740\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Our research shows how a single, underlying image-recognition algorithm can enable a multitude of clinical applications, such as semantic image navigation, multimodal image registration, quality control, content-based image search, and natural user interfaces for surgery being enabled within the Microsoft Amalga unified intelligence system. 
<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/project\/medical-image-analysis\/\">Learn more&#8230;<\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7742\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7742\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7741\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tInteractive Information Visualizations\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7741\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7742\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Our research presents novel, interactive visualizations to help people understand large amounts of data:<\/p>\n<ul>\n<li>iSketchVis applies the familiar, collaborative features of a whiteboard interface to the accurate data-exploration capabilities of computer-aided data visualization. It enables people to sketch charts and explore their data visually, on a pen-based tablet\u2014or collaboratively, on whiteboards.<\/li>\n<li>NetCharts enables people to analyze large data sets consisting of multiple entity types with multiple attributes. It uses simple charts to show aggregated data. People can explore these aggregates by dragging them out to create new charts.<\/li>\n<li>Sets traditionally are represented by Euler diagrams with bubble-like shapes. This research presents two techniques to simplify Euler diagrams. In addition, we demonstrate LineSets, which uses a single, continuous curve to represent sets. 
It simplifies set intersections and offers multiple interactions.<\/li>\n<\/ul>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7744\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7744\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7743\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tMirageBlocks\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7743\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7744\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Our research demonstrates the use of 3-D projection, combined with a Kinect depth camera to capture and display 3-D objects. Any physical object brought into the demo can be digitized instantaneously and viewed in 3-D. For example, we show a simple modeling application in which complex 3-D models can be constructed with just a few wooden blocks by digitizing and adding one block at a time. This setup also can be used in telepresence scenarios, in which what is real on your collaborator\u2019s table is virtual\u20143-D projected\u2014on yours, and vice versa. Our work shows how simulating real-world physics behaviors can be used to manipulate virtual 3-D objects. 
Our research uses a 3-D projector with active shutter glasses.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7746\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7746\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7745\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tMobile Photography: Capture, Process, and View\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7745\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7746\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>The mobile phone is becoming the most popular consumer camera. While the benefits are quite clear, the mobile scenario presents several challenges. It is not always easy to capture good photos. Image-processing tools can improve photos after capture, but there are few tools tailored to on-phone image manipulation. We present phone-based image-enhancement tools that are tightly integrated with cloud services. 
Heavy computation is off-loaded to the cloud, which enables faster results without impacting the phone\u2019s performance.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7748\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7748\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7747\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tProject Emporia: Personalized News\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7747\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7748\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Project Emporia is a personalized news reader offering 250,000 articles daily as discovered through social news feeds. It combines state-of-the-art recommendation systems (Matchbox) with automatic content classification (ClickPredict) to enable users to fine-tune their news channels by category or a custom-keyword channel, combined with &#8220;more-like-this&#8221;\/&#8221;less-like-this&#8221; votes. 
It is available as a mobile client as well as on the web.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7750\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7750\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7749\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tRecognizing Pen Grips for Natural UI\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7749\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7750\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>By enabling multitouch sensing on a digital pen, we can recognize how the user is holding it. In the real world, people hold tools such as pens, paintbrushes, sketching pencils, knives, and compasses differently, and we enable a user to alter the grip on a digital pen to switch between functionalities. This enables a natural UI on the pen\u2014mode switches are no longer necessary. 
<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/recognizing-pen-grips-for-natural-user-interaction\/\">Learn more&#8230;<\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7752\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7752\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7751\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tRich Interactive Narratives\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7751\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7752\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Recent advances in visualization technologies have spawned a potent brew of visually rich applications that enable exploration over potentially large, complex data sets. Examples include GigaPan.org, Photosynth.net, PivotViewer, and WorldWide Telescope. At the same time, the narrative remains a dominant form for generating emotionally captivating content\u2014movies or novels\u2014or imparting complex knowledge, as in textbooks or journals. The Rich Interactive Narratives project aims to combine the compelling, time-tested narrative elements of multimedia storytelling with the information-rich, exploratory nature of the latest generation of information-visualization and -exploration technologies. We approach the problem not as a one-off application, Internet site, or proprietary framework, but rather as a data model that transcends a particular platform or technology. 
This has the potential of enabling entirely new ways for creating, transforming, augmenting, and presenting rich interactive content. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/project\/rich-interactive-narratives\/\">Learn more&#8230;<\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7754\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7754\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7753\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tShadowDraw: Interactive Sketching Helper\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7753\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7754\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Do you want to be able to sketch or draw better? ShadowDraw is an interactive assistant for freehand drawing. It automatically recognizes what you\u2019re trying to draw and suggests new pen strokes for you to trace. As you draw new strokes, ShadowDraw refines its models in real time and provides new suggestions. ShadowDraw contains a large database of images with objects that a user might want to draw. The edges from any images that match the user\u2019s current drawing are merged and shown as suggested &#8220;shadow strokes.&#8221; The user then can trace these strokes to improve the drawing. 
<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/shadowdraw-real-time-user-guidance-for-freehand-drawing\/\">Learn more&#8230;<\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7756\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7756\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7755\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tSocial News Search for Companies\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7755\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7756\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p>Social News Search for Companies uses social public data to build a great news portal for companies. The curation of this page can be crowdsourced to improve the quality of results. We tackle two questions: How can we use social media to provide a rich, topical, searchable, living news dashboard for any given company, and can we build an environment where the curation of the sources of content for a company page is done by the users of the page rather than by an editor? 
<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/group\/future-social-experiences-fuse-labs\/\">Learn more&#8230;<\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<!-- \/wp:freeform --><!-- \/wp:msr\/content-tab --><!-- wp:msr\/content-tab {\"title\":\"Event Videos\"} --><!-- wp:freeform --><h2>Watch the TechFest 2011 Videos<\/h2>\n<p>\t<div data-wp-context='{\"items\":[]}' data-wp-interactive=\"msr\/accordion\">\n\t\t\t\t\t<div class=\"clearfix\">\n\t\t\t\t<div\n\t\t\t\t\tclass=\"btn-group align-items-center mb-g float-sm-right\"\n\t\t\t\t\tdata-bi-aN=\"accordion-collapse-controls\"\n\t\t\t\t>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Expand all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllExpanded\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onExpandAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tExpand all\t\t\t\t\t<\/button>\n\t\t\t\t\t<span aria-hidden=\"true\"> | <\/span>\n\t\t\t\t\t<button\n\t\t\t\t\t\tclass=\"btn btn-link m-0\"\n\t\t\t\t\t\tdata-bi-cN=\"Collapse all\"\n\t\t\t\t\t\tdata-wp-bind--aria-controls=\"state.ariaControls\"\n\t\t\t\t\t\tdata-wp-bind--aria-expanded=\"state.ariaExpanded\"\n\t\t\t\t\t\tdata-wp-bind--disabled=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-class--inactive=\"state.isAllCollapsed\"\n\t\t\t\t\t\tdata-wp-on--click=\"actions.onCollapseAll\"\n\t\t\t\t\t\ttype=\"button\"\n\t\t\t\t\t>\n\t\t\t\t\t\tCollapse all\t\t\t\t\t<\/button>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t\t\t<ul class=\"msr-accordion\">\n\t\t\t\t\t\t\t\t<li class=\"m-0\" 
data-wp-context='{\"id\":\"accordion-content-7758\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7758\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7757\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\t3-D Scanning with a regular camera or phone!\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7757\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7758\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/channel9.msdn.com\/posts\/TechFest-2011-3D-Scanning-with-a-regular-camera-or-phone\">Watch video<\/a><\/p>\n<p>3-D television is creating a huge buzz in the consumer space, but the generation of 3-D content remains a largely professional endeavor. Our research demonstrates an easy-to-use system for creating photorealistic, 3-D-image-based models simply by walking around an object of interest with your phone, still camera, or video camera. The objects might be your custom car or motorcycle, a wedding cake or dress, a rare musical instrument, or a hand-crafted artwork. 
Our system uses 3-D stereo matching techniques combined with image-based modeling and rendering to create a photorealistic model you can navigate simply by spinning it around on your screen, tablet, or mobile device.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7760\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7760\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7759\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\t3-D, Photo-Real Talking Head\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7759\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7760\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/3-d-photo-real-talking-head\/\">Watch video<\/a><\/p>\n<p>Dynamic texture mapping helps bypass the difficulties in rendering soft tissues like lips, tongue, eyes, and wrinkles, moving us one step closer to being able to create a more realistic personal avatar.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7762\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7762\"\n\t\t\t\tclass=\"btn 
btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7761\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tApplied Sciences Group: Smart Interactive Displays\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7761\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7762\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/applied-sciences-group-smart-interactive-displays\/\">Watch video<\/a><\/p>\n<p>Steven Bathiche, Director, Microsoft Applied Sciences, shares his team&#8217;s latest work on the next generation of Smart Interactive Displays.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7764\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7764\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7763\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tFacial Recognition in Videos\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7763\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7764\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/channel9.msdn.com\/posts\/TechFest-2011-Facial-Recognition-in-Videos\">Watch 
video<\/a><\/p>\n<p>Face recognition in video is an emerging technology that will have great impact on user experience in fields such as television, gaming, and communication. In the near future, a television or an Xbox will be able to recognize people in the living room, home video will be annotated automatically and become searchable, and TV watchers will be able to get information about an unfamiliar actor, athlete, or singer just by pointing to the person on the screen. Our research showcases the face-recognition technology developed by Innovation Labs. Our technology includes novel algorithms in face detection, recognition, and tracking. The research demonstrates semi-automatic labeling of videos, a novel TV-watching experience using faces in a video as hyperlinks to get more information, and automatic recognition of the person in front of the television, Xbox, or computer.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7766\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7766\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7765\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tHigh-Performance Cancer Screening\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7765\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7766\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/high-performance-cancer-screening\/\">Watch video<\/a><\/p>\n<p>See how a high\u2013performance, 3-D rendering 
engine can be transformed into a real-world, life-saving medical application.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7768\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7768\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7767\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tInnerEye: Visual Recognition in the Hospital\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7767\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7768\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/blog\/innereye-visual-recognition-in-the-hospital\/\">Watch video<\/a><\/p>\n<p>InnerEye focuses on the analysis of patient scans using machine learning techniques for automatic detection and segmentation of healthy anatomy as well as anomalies.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7770\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7770\"\n\t\t\t\tclass=\"btn 
btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7769\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tMirageBlocks\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7769\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7770\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/miragetable-freehand-interaction-on-a-projected-augmented-reality-tabletop\/\">Watch video<\/a><\/p>\n<p>See how simulating real-world physics behaviors can be used to manipulate virtual 3-D objects using 3-D projection and a Kinect depth camera.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7772\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7772\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7771\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tMobile Photography-Capture, process and View\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7771\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7772\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" 
href=\"https:\/\/channel9.msdn.com\/posts\/TechFest-2011-Mobile-Photography-Capture-process-and-View\">Watch video<\/a><\/p>\n<p>The mobile phone is becoming the most popular consumer camera. While the benefits are quite clear, the mobile scenario presents several challenges. It is not always easy to capture good photos. Image-processing tools can improve photos after capture, but there are few tools tailored to on-phone image manipulation. We present phone-based image enhancement tools that are tightly integrated with cloud services. Heavy computation is off-loaded to the cloud, which enables faster results without impacting the phone\u2019s performance.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7774\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7774\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7773\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tShadowDraw\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7773\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7774\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/shadowdraw-real-time-user-guidance-for-freehand-drawing\/\">Watch video<\/a><\/p>\n<p>An object-oriented research project delivers an interactive assistant for freehand drawing by recognizing what you\u2019re trying to draw and suggesting traceable pen strokes to improve your drawing.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" 
aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t<li class=\"m-0\" data-wp-context='{\"id\":\"accordion-content-7776\"}' data-wp-init=\"callbacks.init\">\n\t\t<div class=\"accordion-header\">\n\t\t\t<button\n\t\t\t\taria-controls=\"accordion-content-7776\"\n\t\t\t\tclass=\"btn btn-collapse\"\n\t\t\t\tdata-wp-bind--aria-expanded=\"state.isExpanded\"\n\t\t\t\tdata-wp-on--click=\"actions.onClick\"\n\t\t\t\tid=\"accordion-button-7775\"\n\t\t\t\ttype=\"button\"\n\t\t\t>\n\t\t\t\tA montage of impact made by Microsoft Research\t\t\t<\/button>\n\t\t<\/div>\n\t\t<div\n\t\t\taria-labelledby=\"accordion-button-7775\"\n\t\t\tclass=\"msr-accordion__content\"\n\t\t\tdata-wp-bind--inert=\"!state.isExpanded\"\n\t\t\tdata-wp-run=\"callbacks.run\"\n\t\t\tid=\"accordion-content-7776\"\n\t\t>\n\t\t\t<div class=\"msr-accordion__body\">\n\t\t\t\t<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/a-montage-of-impact-made-by-microsoft-research\/\">Watch video<\/a><\/p>\n<p>Nearly every product that Microsoft ships includes technology from Microsoft Research. 
Through exploration and collaboration with product groups and academic institutions, Microsoft Research advances the state of the art of computing.<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/li>\n\t\t\t\t\t\t<\/ul>\n\t<\/div>\n\t<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<!-- \/wp:freeform --><!-- \/wp:msr\/content-tab --><!-- \/wp:msr\/content-tabs -->","tab-content":[{"id":0,"name":"Summary","content":"TechFest is an annual event, for Microsoft employees and guests, that showcases the most exciting research from Microsoft Research's locations around the world.\u00a0 Researchers share their latest work\u2014and the technologies emerging from those efforts.\u00a0 The event provides a forum in which product teams and researchers can interact, fostering the transfer of groundbreaking technologies into Microsoft products.\r\n\r\nWe invite you to explore the projects and\u00a0watch the videos.\u00a0 Immerse yourself in TechFest content and see how today\u2019s future will become tomorrow\u2019s reality.\r\n\r\n[accordion]\r\n\r\n[panel header=\"Feature Story\"]\r\n<h2>TechFest Focus: Natural User Interfaces<\/h2>\r\nBy Douglas Gantenbein\u00a0| March 8, 2011 9:00 AM PT\r\n\r\nFor many people, using a computer still means using a keyboard and a mouse. 
But computers are becoming more like \u201cus\u201d\u2014better able to anticipate human needs, work with human preferences, even work on our behalf.\r\n\r\n<img class=\"size-full wp-image-313739 alignleft\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/01\/TechFest2011_Nui.jpg\" alt=\"techfest2011_nui\" width=\"238\" height=\"69\" \/>Computers, in short, are moving rapidly toward widespread adoption of natural user interfaces (NUIs)\u2014interfaces that are more intuitive, that are easier to use, and that adapt to human habits and wishes, rather than forcing humans to adapt to computers. Microsoft has been a driving force behind the adoption of NUI technology. The wildly successful <a href=\"http:\/\/www.xbox.com\/en-US\/Kinect\">Kinect for Xbox 360<\/a> device\u2014launched in November 2010\u2014is a perfect example. It recognizes users, needs no controller to work, and understands what the user wants to do.\r\n\r\nIt won\u2019t be long before more and more devices work in similar fashion. Microsoft Research is working closely with Microsoft business units to develop new products that take advantage of NUI technology. In the months and years to come, a growing number of Microsoft products will recognize voices and gestures, read facial expressions, and make computing easier, more intuitive, and more productive.\r\n\r\nTechFest 2011, Microsoft Research\u2019s annual showcase of forward-looking computer-science technology, will feature several projects that show how the move toward NUIs is progressing. On March 9 and 10, thousands of Microsoft employees will have a chance to view the research on display, talk with the researchers involved, and seek ways to incorporate that work into new products that could be used by millions of people worldwide.\r\n\r\nNot all the TechFest projects are NUI-related, of course. Microsoft Research investigates the possibilities in dozens of computer-science areas. 
But quite a few of the demos to be shown do shine a light on natural user interfaces, and each points to a new way to see or interact with the world. One demo shows how patients\u2019 medical images can be interpreted automatically, enhancing considerably the efficiency of a physician\u2019s work. One literally creates a new world\u2014instantly converting real objects into digital 3-D objects that can be manipulated by a real human hand. A third acts as a virtual drawing coach to would-be artists. And yet another enables a simple digital stylus to understand whether a person wants to draw with it, paint with it, or, perhaps, even play it like a saxophone.\r\n<h2>Semantic Understanding of Medical Images<\/h2>\r\nHealthcare professionals today are overwhelmed with the amount of medical imagery. X-rays, MRIs, CT, ultrasound, PET scans\u2014all are growing more common as diagnostic tools.\r\n\r\n<img class=\"size-medium wp-image-313724 alignleft\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/01\/carotids-300x295.jpg\" alt=\"carotids\" width=\"300\" height=\"295\" \/>But the sheer volume of these images also makes it more difficult to read and understand them in a timely fashion. To help make medical images easier to read and analyze, a team from <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/lab\/microsoft-research-cambridge\/\">Microsoft Research Cambridge<\/a> has created <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/project\/medical-image-analysis\/\">InnerEye<\/a>, a research project that uses the latest machine-learning techniques to speed image interpretation and improve diagnostic accuracy. 
InnerEye also has implications for improved treatments, such as enabling radiation oncologists to target treatment to tumors more precisely in sensitive areas such as the brain.\r\n\r\nIn the case of radiation therapy, it can take hours for a radiation oncologist to outline the edge of tumors and healthy organs to be protected. InnerEye\u2014developed by researcher <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/antcrim\/\">Antonio Criminisi<\/a> and a team of colleagues that included Andrew Blake, Ender Konukoglu, Ben Glocker, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/asellen\/\">Abigail Sellen<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/tsharp\/\">Toby Sharp<\/a>, and <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jamiesho\/\">Jamie Shotton<\/a>\u2014greatly reduces the time needed to delineate accurately the boundaries of anatomical structures of interest in 3-D.\r\n\r\nTo use InnerEye, a radiologist or clinician uses a computer pointer on a screen image of a medical scan to highlight a part of the body that requires treatment. InnerEye then employs algorithms developed by Criminisi and his colleagues to accurately define the 3-D surface of the selected organ. In the resulting image, the highlighted organ\u2014a kidney, for instance, or even a complete aorta\u2014seems to almost leap from the rest of the image. The organ delineation offers a quick way of assessing things such as organ volume, tissue density, and other information that aids diagnosis.\r\n\r\nInnerEye also enables extremely fast, intuitive visual navigation and inspection of 3-D images. A physician can navigate to an optimized view of the heart simply by clicking on the word \u201cheart,\u201d because the system already knows where each organ is. This yields considerable time savings, with big economic implications.\r\n\r\nThe InnerEye project team also is investigating the use of Kinect in the operating theater. 
Surgeons often wish to view a patient\u2019s previously acquired CT or MR scans, but touching a mouse or keyboard could introduce germs. The InnerEye technology and Kinect help by automatically interpreting the surgeon\u2019s hand gestures. This enables the surgeon to navigate naturally through the patient\u2019s images.\r\n\r\nInnerEye has numerous potential applications in health care. Its automatic image analysis promises to make the work of surgeons, radiologists, and clinicians much more efficient\u2014and, possibly, more accurate. In cancer treatment, InnerEye could be used to evaluate a tumor quickly and compare it in size and shape with earlier images. The technology also could be used to help assess the number and location of brain lesions caused by multiple sclerosis.\r\n<h2>Blurring the Line Between the Real and the Virtual<\/h2>\r\nBreaking down the barrier between the real world and the virtual world is a staple of science fiction\u2014Avatar and The Matrix are but two recent examples. But technology is coming closer to actually blurring the line.\r\n\r\n<img class=\"size-medium wp-image-313727 alignleft\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/01\/mirage_blocks-297x300.jpg\" alt=\"mirage_blocks\" width=\"297\" height=\"300\" \/><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/lab\/microsoft-research-redmond\/\">Microsoft Research Redmond<\/a> researcher <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/benko\/\">Hrvoje Benko<\/a> and senior researcher <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/awilson\/\">Andy Wilson<\/a> have taken a step toward making the virtual real with a project called MirageBlocks. Its aim is to simplify the process of digitally capturing images of everyday objects and to convert them instantaneously to 3-D images. 
The goal is to create a virtual mirror of the physical world, one so readily understood that a MirageBlocks user could take an image of a brick and use it to create a virtual castle\u2014brick by brick.\r\n\r\nCapturing and visualizing objects in 3-D long has fascinated scientists, but new technology makes it more feasible. In particular, Kinect for Xbox 360 gave Benko and Wilson\u2014and intern Ricardo Jota\u2014an easy-to-use, $150 gadget that easily could capture the depth of an object with its multicamera design. Coupled with new-generation 3-D projectors and 3-D glasses, Kinect helps make MirageBlocks perhaps the most advanced tool ever for capturing and manipulating 3-D imagery.\r\n\r\nThe MirageBlocks environment consists of a Kinect device, an Acer H5360 3-D projector, and Nvidia 3D Vision glasses synchronized to the projector\u2019s frame rate. The Kinect captures the object image and tracks the user\u2019s head position so that the virtual image is shown to the user with the correct perspective.\r\n\r\nUsers enter MirageBlocks\u2019 virtual world by placing an object on a table top, where it is captured by the Kinect\u2019s cameras. The object is instantly digitized and projected back into the workspace as a 3-D virtual image. The user then can move or rotate the virtual object using an actual hand or a numbered keypad. A user can take duplicate objects, or different objects, to construct a virtual 3-D model. To the user, the virtual objects have the same depth and size as their physical counterparts.\r\n\r\nMirageBlocks has several real-world applications. It could apply an entirely new dimension to simulation games, enabling game players to create custom models or devices from a few digitized pieces or to digitize any object and place it in a virtual game. MirageBlocks\u2019 technology could change online shopping, enabling the projection of 3-D representations of an object. 
It could transform teleconferencing, enabling participants to examine and manipulate 3-D representations of products or prototypes. It might even be useful in health care\u2014an emergency-room physician, for instance, could use a 3-D image of a limb with a broken bone to correctly align the break.\r\n<h2>Giving the Artistically Challenged a Helping Hand<\/h2>\r\nIt\u2019s fair to say that most people cannot draw well. But what if a computer could help by suggesting to the would-be artist certain lines to follow or shapes to create? That\u2019s the idea behind ShadowDraw, created by Larry Zitnick\u2014who works as a researcher in the <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/group\/interactive-visual-media\/\">Interactive Visual Media Group<\/a> at <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/lab\/microsoft-research-redmond\/\">Microsoft Research Redmond<\/a>\u2014and principal researcher Michael Cohen, with help from intern Yong Jae Lee from the University of Texas at Austin.\r\n\r\n<img class=\"size-medium wp-image-313733 alignleft\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/01\/teasers-300x300.jpg\" alt=\"teasers\" width=\"300\" height=\"300\" \/>In concept, ShadowDraw seems disarmingly simple. A user begins drawing an object\u2014a bicycle, for instance, or a face\u2014using a stylus-based Cintiq 21UX tablet. As the drawing progresses, ShadowDraw surmises the subject of the emerging drawing and begins to suggest refinements by generating a \u201cshadow\u201d behind the would-be artist\u2019s lines that resembles the drawn object. By taking advantage of ShadowDraw\u2019s suggestions, the user can create a more refined drawing than otherwise possible, while retaining the individuality of their pencil strokes and overall technique.\r\n\r\nThe seeming simplicity of ShadowDraw, though, belies the substantial computing power being harnessed behind the screen. 
ShadowDraw is, at its heart, a database of 30,000 images culled from the Internet and other public sources. Edges are extracted from these original photographic images to provide stroke suggestions to the user.\r\n\r\nThe main component created by the Microsoft Research team is an interactive drawing system that reacts to the user\u2019s pencil work in real time. ShadowDraw uses a novel, partial-matching approach that finds possible matches between different sub-sections of the user\u2019s drawing and the database of edge images. Think of ShadowDraw\u2019s behind-the-screen interface as a checkerboard\u2014each square where a user draws a line will generate its own set of possible matches that cumulatively vote on suggestions to help refine a user\u2019s work. The researchers also created a novel method for spatially blending the various stroke suggestions for the drawing.\r\n\r\nTo test ShadowDraw, Zitnick and his co-researchers enlisted eight men and eight women. Each was asked to draw five subjects\u2014a shoe, a bicycle, a butterfly, a face, and a rabbit\u2014with and without ShadowDraw. The rabbit image was a control\u2014there were no rabbits in the database. When using ShadowDraw, the subjects were told they could use the suggested renderings or ignore them. And each subject was given 30 minutes to complete 10 drawings.\r\n\r\nA panel of eight additional subjects judged the drawings on a scale of one to five, with one representing \u201cpoor\u201d and five \u201cgood.\u201d The panelists found that ShadowDraw was of significant help to people with average drawing skills\u2014their drawings improved markedly. Interestingly, the subjects rated as having poor or good drawing skills, pre-ShadowDraw, saw little improvement. Zitnick says the poor artists were so bad that ShadowDraw couldn\u2019t even guess what they were attempting to draw. 
The good artists already had sufficient skills to draw the test objects accurately.\r\n<h2>Enabling One Pen to Simulate Many<\/h2>\r\nHuman beings have developed dozens of ways to render images on a piece of paper, a canvas, or another drawing surface. Pens, pencils, paintbrushes, crayons, and more\u2014all can be used to create images or the written word.\r\n\r\n<img class=\"size-medium wp-image-313730 alignleft\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2011\/01\/pen_hardware-300x155.jpg\" alt=\"pen_hardware\" width=\"300\" height=\"155\" \/>Each, however, is held in a slightly different way. That can seem natural when using the device itself\u2014people learn to manage a paintbrush in a way different from how they use a pen or a pencil. But those differences can present a challenge when attempting to work with a computer. A single digital stylus or pen can serve many functions, but to do so typically requires the user to hold the stylus in the same manner, regardless of the tool the stylus is mimicking.\r\n\r\nA Microsoft Research team aimed to find a better way to design a computer stylus. The team\u2014which included researcher Xiang Cao in the <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/group\/human-computer-interaction-msra\/\">Human-Computer Interaction Group<\/a> at <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/lab\/microsoft-research-asia\/\">Microsoft Research Asia<\/a>; Shahram Izadi of Microsoft Research Cambridge; <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/benko\/\">Benko<\/a> and <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/kenh\/\">Ken Hinckley<\/a> of Microsoft Research Redmond; Minghi Sun, a Microsoft Research Cambridge intern; Hyunyoung Song of the University of Maryland; and Fran\u00e7ois Guimbreti\u00e8re of Cornell University\u2014asked the question: How can a digital pen or stylus be as natural to use as the varied physical tools people employ? 
The solution, shown as part of a demo called Recognizing Pen Grips for Natural UI, is a digital pen enhanced with a capacitive, multitouch sensor that senses where the user\u2019s hand touches the pen and an orientation sensor that detects the angle at which the pen is held.\r\n\r\nWith that information, the digital pen can recognize different grips and automatically behave like the desired tool. If a user holds the digital pen like a paintbrush, the pen automatically behaves like a paintbrush. Hold it like a pen, and it behaves like a pen\u2014with no need to manually turn a switch on the device or choose a different stylus mode.\r\n\r\nThe implications of the technology are considerable. Musical instruments such as flutes or saxophones and many other objects all build on similar shapes. A digital stylus with grip and orientation sensors conceivably could duplicate them all, while enabling the user to hold the stylus in the manner that is most natural. Even game controllers could be adapted to modify their behavior depending on how they are held, whether as a driving device for auto-based games or as a weapon in games such as <a href=\"http:\/\/halo.xbox.com\/en-us\">Halo<\/a>.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"What is TechFest?\"]\r\n\r\n<strong><img class=\"size-full wp-image-201848 alignleft\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/02\/en-us-events-techfest2011-crowdthumbnail3.jpg\" alt=\"crowdthumbnail3.jpg\" width=\"164\" height=\"200\" \/>The latest thinking.\u00a0\u00a0The freshest ideas<\/strong>.\r\n\r\nTechFest is an annual event, for Microsoft employees and guests,\u00a0that showcases the most exciting research from Microsoft Research's <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/about\/\">locations<\/a> around the world.\u00a0 Researchers\u00a0share their latest work\u2014and the technologies emerging from those efforts.\u00a0 The event provides a forum in which product teams and researchers can interact, 
fostering the transfer of groundbreaking technologies into Microsoft products.\r\n\r\nWe invite you to explore the projects, watch the videos, follow the buzz, and join the discussion on <a href=\"http:\/\/www.facebook.com\/microsoftresearch\" target=\"_self\">Facebook<\/a> and <a href=\"http:\/\/x.com\/msftresearch\" target=\"_self\">Twitter<\/a>.\u00a0 Immerse yourself in TechFest content and see how today\u2019s future will become tomorrow\u2019s reality.\r\n\r\n<img class=\"alignnone size-full wp-image-201847\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/02\/en-us-events-techfest2011-crowd2.jpg\" alt=\"crowd2.jpg\" width=\"525\" height=\"285\" \/>\r\n<h3>In the News<\/h3>\r\n<ul type=\"disc\">\r\n \t<li><a href=\"http:\/\/www.technologyreview.com\/computing\/35076\/page1\/\" target=\"_self\">A search engine for the human body<\/a><\/li>\r\n \t<li><a href=\"http:\/\/seattletimes.nwsource.com\/html\/businesstechnology\/2014437658_techfest09.html\" target=\"_self\">Microsoft's TechFest shows the distant, and near, future<\/a><\/li>\r\n \t<li><a href=\"http:\/\/www.geekwire.com\/2011\/microsoft-research-project-aims-artistically-challenged#utm_source=feedburner&amp;utm_medium=twitter&amp;utm_campaign=feed:+geekwire+(geekwire)&amp;utm_content=twitter\" target=\"_self\">Microsoft Research aims to help the \u2018artistically challenged\u2019<\/a><\/li>\r\n \t<li><a href=\"http:\/\/www.zdnet.com\/blog\/microsoft\/at-microsoft-nui-goes-beyond-fun-and-games\/8874\" target=\"_self\">At Microsoft, NUI goes beyond fun and games<\/a><\/li>\r\n<\/ul>\r\n<h3>Discover More<\/h3>\r\n<ul type=\"disc\">\r\n \t<li><a href=\"http:\/\/montagepages.fuselabs.com\/public\/RobertMao\/TechFest2011\/e4b5f0c8-f1f6-460c-b83e-0f80f1d87599.htm\" target=\"_self\">Experience the TechFest 2011 Visual Album with Montage<\/a><\/li>\r\n \t<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/blog\/projecting-the-future-of-interaction\/\" 
target=\"_self\">About Microsoft Research<\/a><\/li>\r\n \t<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/event\/techfest-2010\/\" target=\"_self\">TechFest 2010<\/a><\/li>\r\n<\/ul>\r\n<img class=\"alignnone wp-image-201846\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/02\/en-us-events-techfest2011-combinedsigns-300x235.png\" alt=\"combinedsigns.png\" width=\"245\" height=\"192\" \/>\r\n\r\n[\/panel]\r\n\r\n[\/accordion]"},{"id":1,"name":"Projects","content":"[accordion]\r\n\r\n[panel header=\"3-D, Photo-Real Talking Head\"]\r\n\r\nOur research showcases a new, 3-D, photo-real talking head with freely controlled head motions and facial expressions. It extends our prior, high-quality, 2-D, photo-real talking head to 3-D. First, we apply a 2-D-to-3-D reconstruction algorithm frame by frame on a 2-D video to construct a 3-D training database. In training, super-feature vectors consisting of 3-D geometry, texture, and speech are formed to train a statistical, multistreamed, Hidden Markov Model (HMM). The HMM then is used to synthesize both the trajectories of geometric animation and dynamic texture. The 3-D talking head can be animated by the geometric trajectory, while the facial expressions and articulator movements are rendered with dynamic texture sequences. Head motions and facial expression also can be separately controlled by manipulating corresponding parameters. The new 3-D talking head has many useful applications, such as voice agents, telepresence, gaming, and speech-to-speech translation. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/techfest-demo-3d-photo-real-talking-head\/\">Learn more...<\/a>\r\n\r\n[\/panel]\r\n\r\n[panel header=\"3-D Scanning with a Regular Camera\"]\r\n\r\n3-D television is creating a huge buzz in the consumer space, but the generation of 3-D content remains a largely professional endeavor. 
Our research demonstrates an easy-to-use system for creating photorealistic, 3-D-image-based models simply by walking around an object of interest with your phone, still camera, or video camera. The objects might be your custom car or motorcycle, a wedding cake or dress, a rare musical instrument, or a handcrafted artwork. Our system uses 3-D stereo matching techniques combined with image-based modeling and rendering to create a photorealistic model you can navigate simply by spinning it around on your screen, tablet, or mobile device.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Applied Sciences Group: Smart Interactive Displays\"]\r\n\r\n<strong>Steerable AutoStereo 3-D Display:<\/strong> We use a special, flat optical lens (Wedge) behind an LCD monitor to direct a narrow beam of light into each of a viewer\u2019s eyes. By using a Kinect head tracker, the user\u2019s relation to the display is tracked, and thereby, the prototype is able to steer that narrow beam to the user. The combination creates a 3-D image that is steered to the viewer without the need for glasses or holding your head in place.\r\n\r\n<strong>Steerable Multiview Display:<\/strong> The same optical system used in the 3-D system, Wedge behind an LCD, is used to steer two separate images to two separate people rather than two separate eyes, as in the 3-D case. Using a Kinect head tracker, we find and track multiple viewers and send each viewer his or her own unique image. Therefore, two people can be looking at the same display but see two completely different images. If the two users switch positions, the same image continuously is steered toward them.\r\n\r\n<strong>Retro-Reflective Air-Gesture Display:<\/strong> Sometimes, it\u2019s better to control with gestures than buttons. Using a retro-reflective screen and a camera close to the projector makes all objects cast a shadow, regardless of their color. 
This makes it easy to apply computer-vision algorithms to sense above-screen gestures that can be used for control, navigation, and many other applications.\r\n\r\n<strong>A display that can see:<\/strong> Using the flat Wedge optic in camera mode behind a special, transparent organic-light-emitting-diode display, we can capture images that are both on and above the display. This enables touch and above-screen gesture interfaces, as well as telepresence applications.\r\n\r\n<strong>Kinect-based Virtual Window:<\/strong> Using Kinect, we track a user\u2019s position relative to a 3-D display to create the illusion of looking through a window. This view-dependent rendering technique is used in both the Wedge 3-D and multiview demos, but the effect is much more apparent in this demo. The user quickly should realize the need for a multiview display, because this illusion is valid for only one user with a conventional display. This technique, along with the Wedge 3-D output and 3-D input techniques we are developing, forms the basic building blocks for the ultimate telepresence display. This Magic Window is a bidirectional, light-field, interactive display that gives multiple users in a telepresence session the illusion that they are interacting with and talking to each other through a simple glass window. <a href=\"https:\/\/www.microsoft.com\/appliedsciences\/content\/projects.aspx\">Learn more...<\/a>\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Cloud Data Analytics from Excel\"]\r\n\r\nExcel is an established data-collection and data-analysis tool in business, technical computing, and academic research. Excel offers an attractive user interface, easy-to-use data entry, and substantial interactivity for what-if analysis. But data in Excel is not readily discoverable and, hence, does not promote data sharing. Moreover, Excel does not offer scalable computation for large-scale analytics. 
Increasingly, researchers encounter a deluge of data, and when working in Excel, it is not easy to invoke analytics to explore data, find related data sets, or invoke external models. Our project shows how we seamlessly integrate cloud storage and scalable analytics into Excel through a research ribbon. Any analyst can use our tool to discover and import data from the cloud, invoke cloud-scale data analytics to extract information from large data sets, invoke models, and then store data in the cloud\u2014all through a spreadsheet with which they are already familiar. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/excel-datascope-overview\/\">Learn more...<\/a>\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Controlling Home Heating with Occupancy Prediction\"]\r\n\r\nHome heating uses more energy than any other residential energy expenditure, making increasing the efficiency of home heating an important goal for saving money and protecting the environment. We have built a home-heating system, PreHeat, that automatically programs your thermostat based on when you are home. PreHeat\u2019s goal is to reduce the amount of time a household\u2019s thermostat needs to be on without compromising the comfort of household members. PreHeat builds a predictive model of when the house is occupied and uses the model to optimize when the house is heated, to save energy without sacrificing comfort. Our system consists of Wi-Fi and passive, IR-based occupancy sensors; temperature sensors; heating-system controllers for U.S. forced-air systems and for U.K. water-filled radiators and under-floor heating; and PC-based control software using machine learning to predict schedules based on current and past occupancy. 
<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/project\/preheat-controlling-home-heating-with-occupancy-prediction\/\">Learn more...<\/a>\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Face Recognition in Video\"]\r\n\r\nFace recognition in video is an emerging technology that will have great impact on user experience in fields such as television, gaming, and communication. In the near future, a television or an Xbox will be able to recognize people in the living room, home video will be annotated automatically and become searchable, and TV watchers will be able to get information about an unfamiliar actor, athlete, or singer just by pointing to the person on the screen. Our research showcases the face-recognition technology developed by iLabs. Our technology includes novel algorithms in face detection, recognition, and tracking. The research demonstrates semi-automatic labeling of videos, a novel TV-watching experience using faces in a video as hyperlinks to get more information, and automatic recognition of the person in front of the television, Xbox, or computer.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Fuzzy Contact Search for Windows Phone 7\"]\r\n\r\nMobile-phone users typically search for contacts in their contact list by keying in names or email IDs. Users frequently make various types of mistakes, including phonetic, transposition, deletion, and substitution errors, and, in the specific case of mobile phones, the nature of the input mechanism makes mistakes more probable. We propose a fuzzy-contact-search feature to help users find the right contacts despite making mistakes while keying in a query. The feature is based on the novel, hashing-based spelling-correction technology developed by Microsoft Research India. We support many languages, including English, French, German, Italian, Spanish, Portuguese, Polish, Dutch, Japanese, Russian, Arabic, Hebrew, Chinese, Korean, and Hindi. 
We have built a Windows Phone 7 app to demonstrate our fuzzy contact search. The solution is lightweight and can be used in any client-side contact-search scenario.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"High-Performance Cancer Screening\"]\r\n\r\nOur research demonstrates high-performance, GPU-based 3-D rendering for colon-cancer screening. The VCViewer provides a gesture-based user interface for the navigation and analysis of 3-D images generated by computed-tomography (CT) scans. This viewer is supported by a server-side volume-rendering engine implemented by Microsoft Research. Our work shows a real-world, life-saving medical application for this engine. In addition, we show high-performance, CPU-based image processing needed to prepare CT colonoscopy images for diagnostic viewing. This processing was developed at the 3-D Imaging Lab at Massachusetts General Hospital and has been adapted for task and data parallelism in collaboration with Microsoft Developer and Platform Evangelism, Microsoft Research, and Intel.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"InnerEye: Visual Recognition in the Hospital\"]\r\n\r\nOur research shows how a single, underlying image-recognition algorithm can enable a multitude of clinical applications, such as semantic image navigation, multimodal image registration, quality control, content-based image search, and natural user interfaces for surgery, all enabled within the Microsoft Amalga unified intelligence system. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/project\/medical-image-analysis\/\">Learn more...<\/a>\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Interactive Information Visualizations\"]\r\n\r\nOur research presents novel, interactive visualizations to help people understand large amounts of data:\r\n<ul>\r\n \t<li>iSketchVis applies the familiar, collaborative features of a whiteboard interface to the accurate data-exploration capabilities of computer-aided data visualization. 
It enables people to sketch charts and explore their data visually, on a pen-based tablet\u2014or collaboratively, on whiteboards.<\/li>\r\n \t<li>NetCharts enables people to analyze large data sets consisting of multiple entity types with multiple attributes. It uses simple charts to show aggregated data. People can explore these aggregates by dragging them out to create new charts.<\/li>\r\n \t<li>Sets traditionally are represented by Euler diagrams with bubble-like shapes. This research presents two techniques to simplify Euler diagrams. In addition, we demonstrate LineSets, which uses a single, continuous curve to represent sets. It simplifies set intersections and offers multiple interactions.<\/li>\r\n<\/ul>\r\n[\/panel]\r\n\r\n[panel header=\"MirageBlocks\"]\r\n\r\nOur research demonstrates the use of 3-D projection, combined with a Kinect depth camera to capture and display 3-D objects. Any physical object brought into the demo can be digitized instantaneously and viewed in 3-D. For example, we show a simple modeling application in which complex 3-D models can be constructed with just a few wooden blocks by digitizing and adding one block at a time. This setup also can be used in telepresence scenarios, in which what is real on your collaborator\u2019s table is virtual\u20143-D projected\u2014on yours, and vice versa. Our work shows how simulating real-world physics behaviors can be used to manipulate virtual 3-D objects. Our research uses a 3-D projector with active shutter glasses.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Mobile Photography: Capture, Process, and View\"]\r\n\r\nThe mobile phone is becoming the most popular consumer camera. While the benefits are quite clear, the mobile scenario presents several challenges. It is not always easy to capture good photos. Image-processing tools can improve photos after capture, but there are few tools tailored to on-phone image manipulation. 
We present phone-based image-enhancement tools that are tightly integrated with cloud services. Heavy computation is off-loaded to the cloud, which enables faster results without impacting the phone\u2019s performance.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Project Emporia: Personalized News\"]\r\n\r\nProject Emporia is a personalized news reader offering 250,000 articles daily as discovered through social news feeds. It combines state-of-the-art recommendation systems (Matchbox) with automatic content classification (ClickPredict) to enable users to fine-tune their news channels by category or a custom-keyword channel, combined with \"more-like-this\"\/\"less-like-this\" votes. It is available as a mobile client as well as on the web.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Recognizing Pen Grips for Natural UI\"]\r\n\r\nBy enabling multitouch sensing on a digital pen, we can recognize how the user is holding it. In the real world, people hold tools such as pens, paintbrushes, sketching pencils, knives, and compasses differently, and we enable a user to alter the grip on a digital pen to switch between functionalities. This enables a natural UI on the pen\u2014mode switches are no longer necessary. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/recognizing-pen-grips-for-natural-user-interaction\/\">Learn more...<\/a>\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Rich Interactive Narratives\"]\r\n\r\nRecent advances in visualization technologies have spawned a potent brew of visually rich applications that enable exploration over potentially large, complex data sets. Examples include GigaPan.org, Photosynth.net, PivotViewer, and WorldWide Telescope. At the same time, the narrative remains a dominant form for generating emotionally captivating content\u2014movies or novels\u2014or imparting complex knowledge, as in textbooks or journals. 
The Rich Interactive Narratives project aims to combine the compelling, time-tested narrative elements of multimedia storytelling with the information-rich, exploratory nature of the latest generation of information-visualization and -exploration technologies. We approach the problem not as a one-off application, Internet site, or proprietary framework, but rather as a data model that transcends a particular platform or technology. This has the potential to enable entirely new ways of creating, transforming, augmenting, and presenting rich interactive content. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/project\/rich-interactive-narratives\/\">Learn more...<\/a>\r\n\r\n[\/panel]\r\n\r\n[panel header=\"ShadowDraw: Interactive Sketching Helper\"]\r\n\r\nDo you want to be able to sketch or draw better? ShadowDraw is an interactive assistant for freehand drawing. It automatically recognizes what you\u2019re trying to draw and suggests new pen strokes for you to trace. As you draw new strokes, ShadowDraw refines its models in real time and provides new suggestions. ShadowDraw contains a large database of images with objects that a user might want to draw. The edges from any images that match the user\u2019s current drawing are merged and shown as suggested \"shadow strokes.\" The user then can trace these strokes to improve the drawing. <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/shadowdraw-real-time-user-guidance-for-freehand-drawing\/\">Learn more...<\/a>\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Social News Search for Companies\"]\r\n\r\nSocial News Search for Companies uses social public data to build a great news portal for companies. The curation of this page can be crowdsourced to improve the quality of results. 
We tackle two questions: How can we use social media to provide a rich, topical, searchable, living news dashboard for any given company, and can we build an environment where the curation of the sources of content for a company page is done by the users of the page rather than by an editor? <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/group\/future-social-experiences-fuse-labs\/\">Learn more...<\/a>\r\n\r\n[\/panel]\r\n\r\n[\/accordion]"},{"id":2,"name":"Event Videos","content":"<h2>Watch the TechFest 2011 Videos<\/h2>\r\n[accordion]\r\n\r\n[panel header=\"3-D Scanning with a regular camera or phone!\"]\r\n\r\n<a href=\"https:\/\/channel9.msdn.com\/posts\/TechFest-2011-3D-Scanning-with-a-regular-camera-or-phone\">Watch video<\/a>\r\n\r\n3-D television is creating a huge buzz in the consumer space, but the generation of 3-D content remains a largely professional endeavor. Our research demonstrates an easy-to-use system for creating photorealistic, 3-D-image-based models simply by walking around an object of interest with your phone, still camera, or video camera. The objects might be your custom car or motorcycle, a wedding cake or dress, a rare musical instrument, or a hand-crafted artwork. 
Our system uses 3-D stereo matching techniques combined with image-based modeling and rendering to create a photorealistic model you can navigate simply by spinning it around on your screen, tablet, or mobile device.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"3-D, Photo-Real Talking Head\"]\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/3-d-photo-real-talking-head\/\">Watch video<\/a>\r\n\r\nDynamic texture mapping helps bypass the difficulties in rendering soft tissues like lips, tongue, eyes, and wrinkles, moving us one step closer to being able to create a more realistic personal avatar.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Applied Sciences Group: Smart Interactive Displays\"]\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/applied-sciences-group-smart-interactive-displays\/\">Watch video<\/a>\r\n\r\nSteven Bathiche, Director, Microsoft Applied Sciences, shares his team's latest work on the next generation of Smart Interactive Displays.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Facial Recognition in Videos\"]\r\n\r\n<a href=\"https:\/\/channel9.msdn.com\/posts\/TechFest-2011-Facial-Recognition-in-Videos\">Watch video<\/a>\r\n\r\nFace recognition in video is an emerging technology that will have great impact on user experience in fields such as television, gaming, and communication. In the near future, a television or an Xbox will be able to recognize people in the living room, home video will be annotated automatically and become searchable, and TV watchers will be able to get information about an unfamiliar actor, athlete, or singer just by pointing to the person on the screen. Our research showcases the face-recognition technology developed by Innovation Labs. Our technology includes novel algorithms in face detection, recognition, and tracking. 
The research demonstrates semi-automatic labeling of videos, a novel TV-watching experience using faces in a video as hyperlinks to get more information, and automatic recognition of the person in front of the television, Xbox, or computer.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"High-Performance Cancer Screening\"]\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/high-performance-cancer-screening\/\">Watch video<\/a>\r\n\r\nSee how a high\u2013performance, 3-D rendering engine can be transformed into a real-world, life-saving medical application.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"InnerEye: Visual Recognition in the Hospital\"]\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/blog\/innereye-visual-recognition-in-the-hospital\/\">Watch video<\/a>\r\n\r\nInnerEye focuses on the analysis of patient scans using machine learning techniques for automatic detection and segmentation of healthy anatomy as well as anomalies.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"MirageBlocks\"]\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/miragetable-freehand-interaction-on-a-projected-augmented-reality-tabletop\/\">Watch video<\/a>\r\n\r\nSee how simulating real-world physics behaviors can be used to manipulate virtual 3-D objects using 3-D projection and a Kinect depth camera.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Mobile Photography-Capture, process and View\"]\r\n\r\n<a href=\"https:\/\/channel9.msdn.com\/posts\/TechFest-2011-Mobile-Photography-Capture-process-and-View\">Watch video<\/a>\r\n\r\nThe mobile phone is becoming the most popular consumer camera. While the benefits are quite clear, the mobile scenario presents several challenges. It is not always easy to capture good photos. Image-processing tools can improve photos after capture, but there are few tools tailored to on-phone image manipulation. We present phone-based image enhancement tools that are tightly integrated with cloud services. 
Heavy computation is off-loaded to the cloud, which enables faster results without impacting the phone\u2019s performance.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"ShadowDraw\"]\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/shadowdraw-real-time-user-guidance-for-freehand-drawing\/\">Watch video<\/a>\r\n\r\nAn object-oriented research project delivers an interactive assistant for freehand drawing by recognizing what you\u2019re trying to draw and suggesting traceable pen strokes to improve your drawing.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"A montage of impact made by Microsoft Research\"]\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/video\/a-montage-of-impact-made-by-microsoft-research\/\">Watch video<\/a>\r\n\r\nNearly every product that Microsoft ships includes technology from Microsoft Research. Through exploration and collaboration with product groups and academic institutions, Microsoft Research advances the state of the art of computing.\r\n\r\n[\/panel]\r\n\r\n[\/accordion]"}],"msr_startdate":"2011-03-08","msr_enddate":"2011-03-08","msr_event_time":"","msr_location":"Redmond, WA, U.S.","msr_event_link":"","msr_event_recording_link":"","msr_startdate_formatted":"March 8, 2011","msr_register_text":"Watch now","msr_cta_link":"","msr_cta_text":"","msr_cta_bi_name":"","featured_image_thumbnail":null,"event_excerpt":"TechFest is an annual event, for Microsoft employees and guests, that showcases the most exciting research from Microsoft Research's locations around the world. Researchers share their latest work\u2014and the technologies emerging from those efforts. 
The event provides a forum in which product teams and researchers can interact, fostering the transfer of groundbreaking technologies into Microsoft products.","msr_research_lab":[199565],"related-researchers":[],"msr_impact_theme":[],"related-academic-programs":[],"related-groups":[],"related-projects":[],"related-opportunities":[],"related-publications":[],"related-videos":[186002],"related-posts":[],"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/199731","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-event"}],"version-history":[{"count":1,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/199731\/revisions"}],"predecessor-version":[{"id":1147443,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/199731\/revisions\/1147443"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=199731"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=199731"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=199731"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=199731"},{"taxonomy":"msr-video-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-video-type?post=199731"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=199731"},{"taxonomy":"msr-program-audience","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/m
sr-program-audience?post=199731"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=199731"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=199731"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}