{"id":188811,"date":"2012-12-04T00:00:00","date_gmt":"2012-12-10T06:43:39","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/msr-research-item\/from-motion-capture-of-interacting-hands-to-video-based-rendering\/"},"modified":"2016-08-22T11:27:17","modified_gmt":"2016-08-22T18:27:17","slug":"from-motion-capture-of-interacting-hands-to-video-based-rendering","status":"publish","type":"msr-video","link":"https:\/\/www.microsoft.com\/en-us\/research\/video\/from-motion-capture-of-interacting-hands-to-video-based-rendering\/","title":{"rendered":"From motion capture of interacting hands to video based rendering."},"content":{"rendered":"<div class=\"asset-content\">\n<p>The talk will be structured in two parts.<br \/>\nI will first talk about my research on marker-less motion capture, in particular presenting my recent ECCV work on hands tracking. Capturing the motion of hands interacting with each other and with an object is a challenging task due to the large number of degrees of freedom, self-occlusions, and similarity between the fingers. I will show how we addressed this problem by proposing a generative approach modeling all the elements of the scene as a unique articulated deformable object, and exploiting some discriminatively learnt features.<br \/>\nIn the second part, I will talk about my research on interactive free-viewpoint video. I will show how it is possible to navigate a collection of videos representing an event casually captured by people in the audience. Instead of recovering a perfect 3D representation of the recorded scene, which in general is very challenging, I will propose several simplifying assumptions allowing video based rendering in such a scenario.<br \/>\nI will also show a live real-time demo of the proposed interactive navigation tool on different video collections.<br \/>\nURLs:<br \/>\nhttp:\/\/cvg.ethz.ch\/research\/ih-mocap\/<br \/>\nhttp:\/\/cvg.ethz.ch\/research\/unstructured-vbr\/<br \/>\nhttp:\/\/www.inf.ethz.ch\/personal\/lballan\/<\/p>\n<\/div>\n<p><!-- .asset-content --><\/p>\n","protected":false},"excerpt":{"rendered":"<p>The talk will be structured in two parts. I will first talk about my research on marker-less motion capture, in particular presenting my recent ECCV work on hands tracking. 
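For the second part, one way to picture the simplifying assumptions is that they sidestep full 3D reconstruction: if each casually captured video is registered to a common frame (e.g., camera poses from structure-from-motion), navigation can reduce to choosing the video that best matches the requested viewpoint and blending frames across switches. The sketch below is a hypothetical toy; names like `best_camera` and the cosine-similarity scoring are my illustration, not the system described in the talk.

```python
import numpy as np

def best_camera(cam_dirs, desired_dir):
    """Pick the video whose unit viewing direction is closest,
    by angle, to the viewpoint the user is steering toward."""
    d = desired_dir / np.linalg.norm(desired_dir)
    scores = cam_dirs @ d  # cosine similarity per camera
    return int(np.argmax(scores))

def crossfade(frame_a, frame_b, t):
    """Linear blend used while transitioning between two videos,
    masking the viewpoint jump instead of rendering true 3D."""
    return (1.0 - t) * frame_a + t * frame_b

# Toy setup: three registered cameras (unit viewing directions)
# and a user-requested viewpoint.
cam_dirs = np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0],
                     [0.7, 0.7, 0.0]])
cam_dirs /= np.linalg.norm(cam_dirs, axis=1, keepdims=True)

desired = np.array([0.9, 0.4, 0.0])
idx = best_camera(cam_dirs, desired)
print("switch to video", idx)

# Blend a quarter of the way from one stand-in frame to another.
frame_a, frame_b = np.zeros((4, 4)), np.ones((4, 4))
print(crossfade(frame_a, frame_b, 0.25))
```

A scheme like this trades geometric fidelity for robustness: it never needs a watertight 3D model of the scene, only approximate camera registration, which is why it can work on unstructured footage from an audience.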