{"id":542,"date":"2014-08-11T08:45:00","date_gmt":"2014-08-11T08:45:00","guid":{"rendered":"https:\/\/blogs.technet.microsoft.com\/inside_microsoft_research\/2014\/08\/11\/microsoft-research-at-siggraph-2014\/"},"modified":"2016-07-20T07:29:49","modified_gmt":"2016-07-20T14:29:49","slug":"microsoft-research-at-siggraph-2014","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/research\/blog\/microsoft-research-at-siggraph-2014\/","title":{"rendered":"Microsoft Research at SIGGRAPH 2014"},"content":{"rendered":"<p class=\"posted-by\">Posted by <span class=\"author\">Rob Knies<\/span><\/p>\n<p><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" href=\"http:\/\/s2014.siggraph.org\/\" title=\"SIGGRAPH 2014\" target=\"_blank\"><img decoding=\"async\" src=\"https:\/\/msdnshared.blob.core.windows.net\/media\/TNBlogsFS\/prod.evol.blogs.technet.com\/CommunityServer.Blogs.Components.WeblogFiles\/00\/00\/00\/90\/35\/siggraph-2014-logo_345x90.png\" alt=\" \" style=\"float:right;margin:5px 8px\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a>Microsoft researchers will present a broad spectrum of new research at <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" href=\"http:\/\/s2014.siggraph.org\/\" title=\"SIGGRAPH 2014\" target=\"_blank\">SIGGRAPH 2014<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, the 41st International Conference and Exhibition on Computer Graphics and Interactive Techniques, which starts today in Vancouver, British Columbia. Sponsored by the Association for Computing Machinery, SIGGRAPH is at the cutting edge of research in computer graphics and related areas, such as computer vision and interactive systems. 
SIGGRAPH has evolved to become an international community of respected technical and creative individuals, attracting researchers, artists, developers, filmmakers, scientists, and business professionals from all over the world. The <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" href=\"http:\/\/research.microsoft.com\/en-us\/about\/siggraph-2014.aspx\" title=\"research presented by Microsoft\" target=\"_blank\">research presented by Microsoft<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> was developed across our global labs&mdash;from converting any camera into a depth camera, to optimizing a scheme for clothing animation, to pushing the boundaries of new high-fidelity facial expression and performance capture techniques.<\/p>\n<p><iframe src=\"http:\/\/research.microsoft.com\/apps\/video\/ifVideo.aspx?id=226386\"><\/iframe><\/p>\n<h2>Depth camera and performance capture<\/h2>\n<p><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" href=\"http:\/\/research.microsoft.com\/en-us\/people\/shahrami\/\" title=\"Shahram Izadi\" target=\"_blank\">Shahram Izadi<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, principal researcher at Microsoft Research, and his collaborators will present two papers this year. 
The first, <em><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" href=\"http:\/\/dl.acm.org\/ft_gateway.cfm?id=2601165&ftid=1488965&dwn=1&CFID=526567494&CFTOKEN=49504319\" title=\"Real-Time Non-Rigid Reconstruction Using an RGB-D Camera (29 MB .pdf)\" target=\"_blank\">Real-Time Non-Rigid Reconstruction Using an RGB-D Camera<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/em>, alongside academic partners at Stanford, MPI, and Erlangen, demonstrates interactive performance capture using a novel GPU-based algorithm, and a high-resolution Kinect-like depth camera they have developed. The idea is to bring the level of performance capture that we see in Hollywood movies into our living rooms and everyday lives.<\/p>\n<p>The second, <em><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" href=\"http:\/\/dl.acm.org\/ft_gateway.cfm?id=2601223&ftid=1488894&dwn=1&CFID=526567494&CFTOKEN=49504319\" title=\"Learning to Be a Depth Camera for Close-Range Human Capture and Interaction (5 MB .pdf)\" target=\"_blank\">Learning to Be a Depth Camera for Close-Range Human Capture and Interaction<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/em>, has already garnered broad attention. The paper demonstrates how to turn any cheap, visible light camera&mdash;be it a web camera or even the camera on your mobile phone&mdash;into a depth sensor to create rich interactive scenarios. In describing the work, Izadi says, \"In recent years, we've seen a great deal of excitement regarding depth cameras such as the Kinect. These essentially enrich the ways that computers can see the world beyond a regular 2-D camera, and they aid in many tasks in computer vision, such as background segmentation, resolving scale and so forth. 
However, there are many scenarios that are currently prohibitive for depth cameras because of power consumption, size and cost.\"<\/p>\n<p><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" href=\"http:\/\/research.microsoft.com\/en-us\/people\/shahrami\/\" title=\"Shahram Izadi\" target=\"_blank\"><img decoding=\"async\" src=\"https:\/\/msdnshared.blob.core.windows.net\/media\/TNBlogsFS\/prod.evol.blogs.technet.com\/CommunityServer.Blogs.Components.WeblogFiles\/00\/00\/00\/90\/35\/shahram-izadi_250.jpg\" alt=\"Shahram Izadi\" style=\"float:right;margin:5px\" title=\"Shahram Izadi\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a>\"Our goal was to build a very cheap depth camera, around $1 in cost,\" says Izadi. The technique applies simple modifications to a regular RGB camera and uses a new machine-learning algorithm, based on decision trees, that automatically and accurately maps the modified RGB images to depth. This allowed the team, with lead researchers <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" href=\"http:\/\/research.microsoft.com\/en-us\/people\/seanfa\/\" title=\"Sean Fanello\" target=\"_blank\">Sean Fanello<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> and Cem Keskin, to turn any camera into a depth camera for scenarios where you specifically want to sense users' hands and faces. \"So it is not a general depth camera that can sense any object, but it works extremely well for hands and faces, which are important for creating interactive scenarios,\" says Izadi.<\/p>\n<p>So what challenges did the team encounter? \"The problem of inferring depth from intensity images is a challenging or even ill-posed problem within computer vision known as shape from shading,\" says Izadi. 
\"What we highlight is that by constraining the problem to interactive scenarios, and using active illumination, and state-of-the-art machine-learning techniques we can actual solve this problem for specific scenarios of use. It opens up many new areas of applications and research, because now depth cameras can be as cheap as any off-the-shelf web camera, and now anywhere a camera exists&mdash;such as in your mobile phone&mdash;a depth camera can also exist.\"<\/p>\n<h2>High-fidelity facial animation data<\/h2>\n<p>Another paper being presented at SIGGRAPH, <em><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" href=\"http:\/\/dl.acm.org\/ft_gateway.cfm?id=2601210&ftid=1488851&dwn=1&CFID=526567494&CFTOKEN=49504319\" title=\"Controllable High-Fidelity Facial Performance Transfer (40 MB .pdf)\" target=\"_blank\">Controllable High-Fidelity Facial Performance Transfer<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/em>, is the result of a collaboration between Feng Xu, associate researcher from Microsoft Research Asia, and researchers at Texas A&M and Tsinghua University. 
The paper introduces a novel facial expression transfer and editing technique for high-fidelity facial animation data.<\/p>\n<p><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" href=\"http:\/\/dl.acm.org\/ft_gateway.cfm?id=2601210&ftid=1488851&dwn=1&CFID=526567494&CFTOKEN=49504319\" title=\"Download the paper (40 MB .pdf)\" target=\"_blank\"><img decoding=\"async\" src=\"https:\/\/msdnshared.blob.core.windows.net\/media\/TNBlogsFS\/prod.evol.blogs.technet.com\/CommunityServer.Blogs.Components.WeblogFiles\/00\/00\/00\/90\/35\/controllable-high-fidelity-facial-transfer-550.jpg\" alt=\" Controllable High-Fidelity Facial Performance Transfer\" style=\"margin-left:auto;margin-right:auto\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p>The key idea is to decompose high-fidelity facial performances into large-scale facial deformation and fine-scale facial details, and then recombine them into the desired retargeted animation. This approach makes it possible to animate a digital character that is not necessarily a digital replica of the performer. The approach takes a source reference expression, a source facial animation sequence, and a target expression as input, and outputs a retargeted animation sequence that is \"analogous\" to the source animation. Importantly, it allows the user to control and adjust both the large-scale deformation and fine-scale facial details of the retargeted animation, reducing the manual correction and adjustment an animator is typically required to do.<\/p>\n<p>\"More and more high-fidelity facial data is being captured by recent techniques,\" says Xu. \"Our technique aims to reuse existing high-fidelity facial data to generate animations on new characters or avatars. 
Besides faithfully transferring the input facial performance through our decomposition scheme, we give users easy and flexible control to further edit both the large-scale motion and the facial details. This control makes it possible to get good results on a target whose shape differs greatly from the source, such as a dog or a monster. It is also possible to change the style of the captured motion to satisfy the user's requirements, which is useful for animators generating animations.\"<\/p>\n<h2>Hyper-lapse video conversion<\/h2>\n<p><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" href=\"http:\/\/johanneskopf.de\/\" title=\"Johannes Kopf\" target=\"_blank\"><img decoding=\"async\" src=\"https:\/\/msdnshared.blob.core.windows.net\/media\/TNBlogsFS\/prod.evol.blogs.technet.com\/CommunityServer.Blogs.Components.WeblogFiles\/00\/00\/00\/90\/35\/johannes-kopf_250.png\" alt=\"Johannes Kopf\" style=\"float:right;margin:5px\" title=\"Johannes Kopf\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a>No strangers to SIGGRAPH are the authors of the <em><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" href=\"http:\/\/dl.acm.org\/ft_gateway.cfm?id=2601195&ftid=1488886&dwn=1&CFID=526567494&CFTOKEN=49504319\" title=\"First-Person Hyper-Lapse Videos (37 MB .pdf)\" target=\"_blank\">First-Person Hyper-Lapse Videos<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/em> paper, Microsoft researcher <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" href=\"http:\/\/johanneskopf.de\/\" title=\"Johannes Kopf\" target=\"_blank\">Johannes Kopf<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> (<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" href=\"https:\/\/x.com\/JPKopf\" title=\"Johannes Kopf on 
Twitter\" target=\"_blank\">@JPKopf<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>), principal researcher <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" href=\"http:\/\/research.microsoft.com\/en-us\/um\/people\/cohen\/\" title=\"Michael Cohen\" target=\"_blank\">Michael Cohen<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, and <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" href=\"http:\/\/research.microsoft.com\/en-us\/um\/people\/szeliski\/\" title=\"Richard Szeliski\" target=\"_blank\">Richard Szeliski<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> (<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" href=\"https:\/\/x.com\/szeliski\" title=\"Richard Szeliski on Twitter\" target=\"_blank\">@szeliski<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>), distinguished scientist and previous SIGGRAPH Computer Graphics Achievement award winner. The paper provides a method for converting first-person videos into hyper-lapse videos. Seeing is believing and the results are astounding, which you can view on <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" href=\"https:\/\/youtu.be\/SOpwHaQnRSY\" title=\"First-Person Hyper-Lapse Video\" target=\"_blank\">this video<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> showcasing their results. 
More information on this work can be found in a <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" href=\"http:\/\/blogs.microsoft.com\/next\/2014\/08\/11\/hyperlapse-siggraph-2014\/\" title=\"Next at Microsoft blog post\" target=\"_blank\">recent Next at Microsoft blog post<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>.<\/p>\n<p>Izadi sums up: \"SIGGRAPH is one of the premier conferences in computer science, with one of the highest impact factors. It is also the intersection of many different research fields, not just computer graphics, but <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" href=\"http:\/\/research.microsoft.com\/en-us\/about\/our-research\/human-computer-interaction.aspx\" title=\"human-computer interaction at Microsoft\" target=\"_blank\">human-computer interaction<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" href=\"http:\/\/research.microsoft.com\/en-us\/about\/our-research\/computer-vision.aspx\" title=\"computer vision at Microsoft\" target=\"_blank\">computer vision<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" href=\"http:\/\/research.microsoft.com\/en-us\/about\/our-research\/machine-learning.aspx\" title=\"machine learning and intelligence at Microsoft\" target=\"_blank\">machine learning<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, and even new sensors, displays and hardware. So there's inspiration to draw from many research areas, and it nicely complements the multi-discipline nature of Microsoft Research. 
There are not only great technical talks at the conference, including 10 from Microsoft Research, but also E-tech, which showcases lots of demos, and highlights the interactive element of the conference, which is something that really resonates with us.\"<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Posted by Rob Knies Microsoft researchers will present a broad spectrum of new research at SIGGRAPH 2014, the 41st International Conference and Exhibition on Computer Graphics and Interactive Techniques, which starts today in Vancouver, British Columbia. Sponsored by the Association for Computing Machinery, SIGGRAPH is at the cutting edge of research in computer graphics and [&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr-author-ordering":[],"msr_hide_image_in_river":0,"footnotes":""},"categories":[1],"tags":[],"research-area":[],"msr-region":[],"msr-event-type":[],"msr-locale":[268875],"msr-post-option":[],"msr-impact-theme":[],"msr-promo-type":[],"msr-podcast-series":[],"class_list":["post-542","post","type-post","status-publish","format-standard","hentry","category-research-blog","msr-locale-en_us"],"msr_event_details":{"start":"","end":"","location":""},"podcast_url":"","podcast_episode":"","msr_research_lab":[],"msr_impact_theme":[],"related-publications":[],"related-downloads":[],"related-videos":[],"related-academic-programs":[],"related-groups":[],"related-projects":[],"related-events":[],"related-researchers":[],"msr_type":"Post","byline":"","formattedDate":"August 11, 2014","formattedExcerpt":"Posted by Rob Knies Microsoft researchers will present a broad spectrum of new research at SIGGRAPH 2014, the 41st International Conference and Exhibition on Computer Graphics and 
Interactive Techniques, which starts today in Vancouver, British Columbia. Sponsored by the Association for Computing Machinery, SIGGRAPH is&hellip;","locale":{"slug":"en_us","name":"English","native":"","english":"English"},"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/542","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/comments?post=542"}],"version-history":[{"count":1,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/542\/revisions"}],"predecessor-version":[{"id":260985,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/542\/revisions\/260985"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=542"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/categories?post=542"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/tags?post=542"},{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=542"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=542"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=542"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=542"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v
2\/msr-post-option?post=542"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=542"},{"taxonomy":"msr-promo-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-promo-type?post=542"},{"taxonomy":"msr-podcast-series","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-podcast-series?post=542"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}