{"id":585271,"date":"2020-02-26T14:04:05","date_gmt":"2019-05-15T16:42:10","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-event&#038;p=585271"},"modified":"2025-08-06T11:53:23","modified_gmt":"2025-08-06T18:53:23","slug":"the-1st-workshop-on-hot-topics-in-video-analytics-and-intelligent-edges","status":"publish","type":"msr-event","link":"https:\/\/www.microsoft.com\/en-us\/research\/event\/the-1st-workshop-on-hot-topics-in-video-analytics-and-intelligent-edges\/","title":{"rendered":"The 1st Workshop on Hot Topics in Video Analytics and Intelligent Edges"},"content":{"rendered":"\n\n<p>(in conjunction with <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/www.sigmobile.org\/mobicom\/2019\/\">MobiCom 2019<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>)<\/p>\n<p><strong>Paper Submissions Deadline:<\/strong> July 9, 2019<\/p>\n<p><strong>CFP:<\/strong> <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/05\/HotEdgeVideoFlyer_letter.pdf\">HotEdgeVideo19.pdf<\/a><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<p><span data-contrast=\"auto\">Cameras are everywhere!\u00a0<\/span><span data-contrast=\"auto\">Analyzing live videos from these cameras has great potential to impact science and society.<\/span><span data-contrast=\"auto\">\u00a0Enterprise cameras are deployed for a wide variety of commercial and security reasons. Consumer devices themselves have cameras with users interested in analyzing live videos from these devices. We are all living in the golden era for computer vision and AI that is being fueled by game-changing systemic infrastructure advancements, breakthroughs in machine learning, and copious training data, largely improving their range of capabilities. 
Live video analytics has the potential to impact a wide range of verticals, including public safety, traffic efficiency, infrastructure planning, entertainment, and home safety.<\/span><\/p>\n<p><span data-contrast=\"auto\">Analyzing live video streams is arguably the most challenging domain for &#8220;systems-for-AI&#8221;. Unlike text or numeric processing, video analytics requires higher bandwidth, consumes considerable compute cycles, necessitates richer query semantics, and demands tighter security &amp; privacy guarantees. Video analytics has a symbiotic relationship with edge compute infrastructure: edge computing makes compute resources available closer to the data sources (i.e., cameras). All aspects of video analytics call for a \u201cgreen-field\u201d design, from vision algorithms to the systems processing stack, networking links, and hybrid edge-cloud infrastructure. Such a holistic design will enable the democratization of live video analytics, so that any organization with cameras can obtain value from them.<\/span><\/p>\n<p><strong>Topics of Interest:<\/strong><br \/>\nThis workshop calls for research on various issues and solutions that can enable live video analytics with a central role for edge computing. 
Topics of interest include (but are not limited to) the following:<\/p>\n<ul>\n<li>Low-cost video analytics<\/li>\n<li>Deployment experience with large arrays of cameras<\/li>\n<li>Storage of video data and metadata<\/li>\n<li>Interactive querying of video streams<\/li>\n<li>Network design for video streams<\/li>\n<li>Hybrid cloud architectures for video processing<\/li>\n<li>Scheduling for multi-tenant video processing<\/li>\n<li>Training of vision neural networks<\/li>\n<li>Edge-based processor architectures for video processing<\/li>\n<li>Energy-efficient system design for video analytics<\/li>\n<li>Intelligent camera designs<\/li>\n<li>Vehicular and drone-based video analytics<\/li>\n<li>Tools and datasets for video analytics systems<\/li>\n<li>Novel vision applications<\/li>\n<li>Video analytics for social good<\/li>\n<li>Secure processing of video analytics<\/li>\n<li>Privacy-preserving techniques for video processing<\/li>\n<\/ul>\n<p><strong>Submission Instructions:<\/strong><br \/>\nSubmissions must be original, unpublished work, and not under consideration at another conference or journal. Submitted papers must be no longer than five (5) pages, including all figures and tables, followed by as many pages as necessary for bibliographic references. Submissions should be in two-column 10pt ACM format with authors\u2019 names and affiliations for single-blind peer review. The workshop also solicits the submission of research, platform, and product demonstrations. Demo submissions should be a summary or extended abstract describing the research to be presented, at most one (1) page in PDF format with font no smaller than 10 points. Demo submission titles should begin with &#8220;Demo:&#8221;.<\/p>\n<p>Authors of accepted papers are expected to present their work at the workshop. Papers accepted for presentation will be published in the MobiCom Workshop Proceedings and made available in the ACM Digital Library. 
You may find these <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/sigmobile.org\/mobicom\/2018\/files\/sig-alternate-10pt.cls\">LaTeX<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> and <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/sigmobile.org\/mobicom\/2018\/files\/word-acm-10pt-on-12pt-7.0x9.25.doc\">MS-Word<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> templates useful for complying with the above requirements.<\/p>\n<p>Submit your paper at <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/hotedgevideo19.hotcrp.com\/\">https:\/\/hotedgevideo19.hotcrp.com\/<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>.<\/p>\n<p><strong>Important Dates:<\/strong><br \/>\nPaper Submissions Deadline: <strong>July 9, 2019<\/strong><br \/>\nAcceptance Notification: <strong>July 31, 2019<\/strong><br \/>\nCamera-ready Papers Due: <strong>August 15, 2019<\/strong><br \/>\nWorkshop Date: <strong>October 21, 2019<\/strong><\/p>\n<p>Program Committee:<\/p>\n<ul>\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/ga\">Ganesh Ananthanarayanan<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Microsoft Research (co-chair)<\/li>\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/apollo.smu.edu.sg\/\">Rajesh Balan<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Singapore Management University<\/li>\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/www.winlab.rutgers.edu\/~gruteser\">Marco Gruteser<span class=\"sr-only\"> 
(opens in new tab)<\/span><\/a>, Rutgers University<\/li>\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/people.cs.uchicago.edu\/~junchenj\">Junchen Jiang<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, University of Chicago<\/li>\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/www.roblkw.com\">Robert LiKamWa<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Arizona State University<\/li>\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yunliu\">Yunxin Liu<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Microsoft Research (co-chair)<\/li>\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/www.cc.gatech.edu\/~rama\">Kishore Ramachandran<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Georgia Tech<\/li>\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yushu\">Yuanchao Shu<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Microsoft Research (co-chair)<\/li>\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/people.cs.uchicago.edu\/~htzheng\">Heather Zheng<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, University of Chicago<\/li>\n<\/ul>\n<p><font color=\"red\">09:00 &#8211; 09:10<\/font> Opening remarks<\/p>\n<p><font color=\"red\">09:10 &#8211; 10:10<\/font> Keynote I &#8211; <b>Deep Learning in Mobile Systems, Experiences and Pitfalls<\/b><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/people.cs.uchicago.edu\/~htzheng\/\">Prof. 
Heather Zheng<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, University of Chicago<\/p>\n<p>Deep learning (neural networks) is being rapidly adopted by (mobile) researchers and companies to solve a wide range of computational problems. But is it a panacea for all the (traditionally hard) problems? In this talk, I will share experiences from my lab on applying today\u2019s deep learning models to mobile systems design, and discuss vulnerabilities inherent in many existing deep learning models that make them easy to compromise, as well as potential defenses.<\/p>\n<p><u>Bio:<\/u> Dr. Heather Zheng is the Neubauer Professor of Computer Science at the University of Chicago. She received her PhD in Electrical and Computer Engineering from the University of Maryland, College Park in 1999. She joined the University of Chicago after spending 6 years in industry labs (Bell Labs, NJ and Microsoft Research Asia) and 12 years at the University of California, Santa Barbara. At UChicago, she co-directs the SAND Lab (Systems, Algorithms, Networking and Data). She is an IEEE Fellow, a World Technology Network Fellow, and a recipient of MIT Technology Review&#8217;s TR-35 Award (Young Innovators under 35), the Bell Labs President\u2019s Gold Award, and a Google Faculty Award. Her work has been covered by media outlets such as Scientific American, the New York Times, the Boston Globe, the LA Times, and MIT Tech Review. She served as PC co-chair for MobiCom and DySPAN, is the general co-chair for HotNets 2020, is on the steering committee of MobiCom, and chairs the SIGMOBILE Highlights committee.<\/p>\n<p><font color=\"red\">10:10 &#8211; 10:40<\/font> Break<\/p>\n<hr>\n<p><font color=\"red\">10:40 &#8211; 11:40<\/font> Session 1 &#8211; <i>Cameras are becoming smarter<\/i><\/p>\n<p><strong>Networked Cameras Are the New Big Data Clusters<\/strong><br \/>\nJunchen Jiang, Yuhao Zhou (University of Chicago), Ganesh Ananthanarayanan, Yuanchao Shu (Microsoft Research), Andrew A. 
Chien (University of Chicago)<\/p>\n<p><strong>Live Video Analytics with FPGA-based Smart Cameras<\/strong><br \/>\nShang Wang (University of Electronic Science and Technology of China, Microsoft Research), Chen Zhang, Yuanchao Shu, Yunxin Liu (Microsoft Research)<\/p>\n<p><strong>Space-Time Vehicle Tracking at the Edge of the Network<\/strong><br \/>\nZhuangdi Xu, Kishore Ramachandran (Georgia Tech), Sayan Sinha (Indian Institute of Technology Kharagpur)<\/p>\n<p><font color=\"red\">11:40 &#8211; 13:00<\/font> Lunch<\/p>\n<hr>\n<p><font color=\"red\">13:00 &#8211; 14:00<\/font> Keynote II &#8211; <b>360\u00b0 and 4K Video Streaming for Mobile Devices<\/b><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/www.cs.utexas.edu\/~lili\/\">Prof. Lili Qiu<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, University of Texas at Austin<\/p>\n<p>The popularity of 360\u00b0 and 4K videos has grown rapidly due to the immersive user experience. 360\u00b0 videos are displayed as a panorama, and the view automatically adapts to the viewer\u2019s head movement. Existing systems stream 360\u00b0 videos in the same way as regular videos: all data of the panoramic view is transmitted. This is wasteful, since a user only views a small portion of the 360\u00b0 view. To save bandwidth, recent works propose tile-based streaming, which divides the panoramic view into multiple smaller tiles and streams only the tiles within a user\u2019s field of view (FoV), predicted from the recent head position. Interestingly, tile-based streaming has only been simulated or implemented on desktops. We find that it cannot run in real time even on the latest smartphones (e.g., Samsung S7, Samsung S8, and Huawei Mate 9) due to hardware and software limitations. Moreover, it results in significant video quality degradation due to head movement prediction error, which is hard to avoid. 
Motivated by these observations, we develop a novel tile-based layered approach to stream 360\u00b0 content on smartphones that avoids bandwidth wastage while maintaining high video quality.<\/p>\n<p>Next, we explore the feasibility of supporting live 4K video streaming over wireless networks using commodity devices. Coding and streaming live 4K videos incurs prohibitive costs to the network and end system. We propose a novel system, which consists of (i) easy-to-compute layered video coding to seamlessly adapt to unpredictable wireless link fluctuations, (ii) an efficient GPU implementation of video coding on commodity devices, and (iii) effective use of both WiFi and WiGig through delayed video adaptation and smart scheduling. Using real experiments and emulation, we demonstrate the feasibility and effectiveness of our system.<\/p>\n<p><u>Bio:<\/u> Lili Qiu is a Professor in the Computer Science Department at UT Austin. She received a Ph.D. in Computer Science from Cornell University in 2001. She was a researcher at Microsoft Research (Redmond, WA) from 2001 to 2004, and joined UT Austin in 2005. She is an IEEE Fellow, an ACM Fellow, and an ACM Distinguished Scientist. She has also received an NSF CAREER Award, a Google Faculty Research Award, and best paper awards at MobiSys\u201918 and ICNP\u201917. 
<\/p>\n<p><font color=\"red\">14:00 &#8211; 14:30<\/font> Break<\/p>\n<hr>\n<p><font color=\"red\">14:30 &#8211; 15:30<\/font> Session 2 &#8211; <i>ML for videos<\/i><\/p>\n<p><strong>Distilled Split Deep Neural Networks for Edge-Assisted Real-Time Systems<\/strong><br \/>\nYoshitomo Matsubara, Sabur Hassan Baidya, Davide Callegaro, Marco Levorato, Sameer Singh (University of California, Irvine)<\/p>\n<p><strong>Cracking open the DNN black-box: Video Analytics with DNNs across the Camera-Cloud Boundary<\/strong><br \/>\nJohn Emmons, Sadjad Fouladi (Stanford University), Ganesh Ananthanarayanan (Microsoft Research), Shivaram Venkataraman (University of Wisconsin-Madison), Silvio Savarese, Keith Winstein (Stanford University)<\/p>\n<p><strong>secGAN: A Cycle-Consistent GAN for Securely-Recoverable Video Transformation<\/strong><br \/>\nHao Wu, Jinghao Feng, Xuejin Tian, Fengyuan Xu, Sheng Zhong (Nanjing University), Yunxin Liu (Microsoft Research), XiaoFeng Wang (Indiana University Bloomington)<\/p>\n<p><font color=\"red\">15:00 &#8211; 15:30<\/font> Break<\/p>\n<hr>\n<p><font color=\"red\">15:30 &#8211; 16:10<\/font> Session 3 &#8211; <i>Playing nice with the network<\/i><\/p>\n<p><strong>Client-side Bandwidth Estimation Technique for Adaptive Streaming of a Browser Based Free-Viewpoint Application<\/strong><br \/>\nTilak Varisetty, David Dietrich (Leibniz Universit\u00e4t Hannover)<\/p>\n<p><strong>Sensor Training Data Reduction for Autonomous Vehicles<\/strong><br \/>\nMatthew Tomei, Alex Schwing (University of Illinois at Urbana-Champaign), Satish Narayanasamy (University of Michigan), Rakesh Kumar (University of Illinois at Urbana-Champaign)<\/p>\n<p><font color=\"red\">16:10 &#8211; 17:00<\/font> Poster and demo session<\/p>\n","protected":false},"excerpt":{"rendered":"<p>(in conjunction with MobiCom 2019 (opens in new tab)) Paper Submissions Deadline: July 9, 
2019 CFP: HotEdgeVideo19.pdfOpens in a new tab Cameras are everywhere!\u00a0Analyzing live videos from these cameras has great potential to impact science and society.\u00a0Enterprise cameras are deployed for a wide variety of commercial and security reasons. Consumer devices themselves have cameras with [&hellip;]<\/p>\n","protected":false},"featured_media":585295,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr_startdate":"2019-10-21","msr_enddate":"2019-10-21","msr_location":"Los Cabos, Mexico","msr_expirationdate":"","msr_event_recording_link":"","msr_event_link":"","msr_event_link_redirect":false,"msr_event_time":"","msr_hide_region":false,"msr_private_event":false,"msr_hide_image_in_river":0,"footnotes":""},"research-area":[13556,13562,13563,13547],"msr-region":[197901],"msr-event-type":[210063],"msr-video-type":[],"msr-locale":[268875],"msr-program-audience":[243724],"msr-post-option":[],"msr-impact-theme":[],"class_list":["post-585271","msr-event","type-msr-event","status-publish","has-post-thumbnail","hentry","msr-research-area-artificial-intelligence","msr-research-area-computer-vision","msr-research-area-data-platform-analytics","msr-research-area-systems-and-networking","msr-region-latin-america","msr-event-type-workshop","msr-locale-en_us","msr-program-audience-students"],"msr_about":"<!-- wp:msr\/event-details {\"title\":\"The 1st Workshop on Hot Topics in Video Analytics and Intelligent Edges\",\"backgroundColor\":\"grey\",\"image\":{\"id\":585295,\"url\":\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/05\/los_cabos_header2.jpg\",\"alt\":\"\"}} \/-->\n\n<!-- wp:msr\/content-tabs --><!-- wp:msr\/content-tab {\"title\":\"About\"} --><!-- wp:freeform --><p>(in conjunction with <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" 
href=\"https:\/\/www.sigmobile.org\/mobicom\/2019\/\">MobiCom 2019<\/a>)<\/p>\n<p><strong>Paper Submissions Deadline:<\/strong> July 9, 2019<\/p>\n<p><strong>CFP:<\/strong> <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/05\/HotEdgeVideoFlyer_letter.pdf\">HotEdgeVideo19.pdf<\/a><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<p><span data-contrast=\"auto\">Cameras are everywhere!\u00a0<\/span><span data-contrast=\"auto\">Analyzing live videos from these cameras has great potential to impact science and society.<\/span><span data-contrast=\"auto\">\u00a0Enterprise cameras are deployed for a wide variety of commercial and security reasons. Consumer devices themselves have cameras with users interested in analyzing live videos from these devices. We are all living in the golden era for computer vision and AI that is being fueled by game-changing systemic infrastructure advancements, breakthroughs in machine learning, and copious training data, largely improving their range of capabilities. Live video analytics has the potential to impact a wide range of verticals ranging from public safety, traffic efficiency, infrastructure planning, entertainment, and home safety.<\/span><\/p>\n<p><span data-contrast=\"auto\">Analyzing live video streams is arguably the most challenging of domains for &#8220;systems-for-AI&#8221;. Unlike text or numeric processing, video analytics require higher bandwidth, consume considerable compute cycles for processing, necessitate richer query semantics, and demand tighter security &amp; privacy guarantees. Video analytics has a symbiotic relationship with edge compute infrastructure. Edge computing makes compute resources available closer to the data sources (i.e., cameras). 
All aspects of video analytics call for a \u201cgreen-field\u201d design, from vision algorithms to the systems processing stack, networking links, and hybrid edge-cloud infrastructure. Such a holistic design will enable the democratization of live video analytics, so that any organization with cameras can obtain value from them.<\/span><\/p>\n<!-- \/wp:freeform --><!-- \/wp:msr\/content-tab --><!-- wp:msr\/content-tab {\"title\":\"Call for Papers\"} --><!-- wp:freeform --><p><strong>Topics of Interest:<\/strong><br \/>\nThis workshop calls for research on various issues and solutions that can enable live video analytics with a central role for edge computing. Topics of interest include (but are not limited to) the following:<\/p>\n<ul>\n<li>Low-cost video analytics<\/li>\n<li>Deployment experience with large arrays of cameras<\/li>\n<li>Storage of video data and metadata<\/li>\n<li>Interactive querying of video streams<\/li>\n<li>Network design for video streams<\/li>\n<li>Hybrid cloud architectures for video processing<\/li>\n<li>Scheduling for multi-tenant video processing<\/li>\n<li>Training of vision neural networks<\/li>\n<li>Edge-based processor architectures for video processing<\/li>\n<li>Energy-efficient system design for video analytics<\/li>\n<li>Intelligent camera designs<\/li>\n<li>Vehicular and drone-based video analytics<\/li>\n<li>Tools and datasets for video analytics systems<\/li>\n<li>Novel vision applications<\/li>\n<li>Video analytics for social good<\/li>\n<li>Secure processing of video analytics<\/li>\n<li>Privacy-preserving techniques for video processing<\/li>\n<\/ul>\n<p><strong>Submission Instructions:<\/strong><br \/>\nSubmissions must be original, unpublished work, and not under consideration at another conference or journal. 
Submitted papers must be no longer than five (5) pages, including all figures and tables, followed by as many pages as necessary for bibliographic references. Submissions should be in two-column 10pt ACM format with authors\u2019 names and affiliations for single-blind peer review. The workshop also solicits the submission of research, platform, and product demonstrations. Demo submissions should be a summary or extended abstract describing the research to be presented, at most one (1) page in PDF format with font no smaller than 10 points. Demo submission titles should begin with &#8220;Demo:&#8221;.<\/p>\n<p>Authors of accepted papers are expected to present their work at the workshop. Papers accepted for presentation will be published in the MobiCom Workshop Proceedings and made available in the ACM Digital Library. You may find these <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/sigmobile.org\/mobicom\/2018\/files\/sig-alternate-10pt.cls\">LaTeX<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> and <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/sigmobile.org\/mobicom\/2018\/files\/word-acm-10pt-on-12pt-7.0x9.25.doc\">MS-Word<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> templates useful for complying with the above requirements.<\/p>\n<p>Submit your paper at <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/hotedgevideo19.hotcrp.com\/\">https:\/\/hotedgevideo19.hotcrp.com\/<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>.<\/p>\n<p><strong>Important Dates:<\/strong><br \/>\nPaper Submissions Deadline: <strong>July 9, 2019<\/strong><br \/>\nAcceptance Notification: <strong>July 31, 2019<\/strong><br \/>\nCamera-ready Papers Due: <strong>August 15, 2019<\/strong><br \/>\nWorkshop Date: <strong>October 21, 
2019<\/strong><\/p>\n<!-- \/wp:freeform --><!-- \/wp:msr\/content-tab --><!-- wp:msr\/content-tab {\"title\":\"Organizers\"} --><!-- wp:freeform --><p>Program Committee:<\/p>\n<ul>\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/ga\">Ganesh Ananthanarayanan<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Microsoft Research (co-chair)<\/li>\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/apollo.smu.edu.sg\/\">Rajesh Balan<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Singapore Management University<\/li>\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/www.winlab.rutgers.edu\/~gruteser\">Marco Gruteser<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Rutgers University<\/li>\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/people.cs.uchicago.edu\/~junchenj\">Junchen Jiang<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, University of Chicago<\/li>\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/www.roblkw.com\">Robert LiKamWa<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Arizona State University<\/li>\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yunliu\">Yunxin Liu<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Microsoft Research (co-chair)<\/li>\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/www.cc.gatech.edu\/~rama\">Kishore Ramachandran<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Georgia Tech<\/li>\n<li><a 
href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yushu\">Yuanchao Shu<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Microsoft Research (co-chair)<\/li>\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"http:\/\/people.cs.uchicago.edu\/~htzheng\">Heather Zheng<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, University of Chicago<\/li>\n<\/ul>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<!-- \/wp:freeform --><!-- \/wp:msr\/content-tab --><!-- wp:msr\/content-tab {\"title\":\"Program\"} --><!-- wp:freeform --><p><font color=\"red\">09:00 &#8211; 09:10<\/font> Opening remarks<\/p>\n<p><font color=\"red\">09:10 &#8211; 10:10<\/font> Keynote I &#8211; <b>Deep Learning in Mobile Systems, Experiences and Pitfalls<\/b><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/people.cs.uchicago.edu\/~htzheng\/\">Prof. Heather Zheng<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, University of Chicago<\/p>\n<p>Deep learning (neural networks) is being rapidly adopted by (mobile) researchers and companies to solve a wide range of computational problems.  But is it a panacea to all the (traditionally hard) problems?   In this talk, I will share experiences from my lab on applying today\u2019s deep learning models to mobile systems design, and discuss vulnerabilities inherent in many existing deep learning models that make them easy to compromise, as well as potential defenses. <\/p>\n<p><u>Bio:<\/u> Dr. Heather Zheng is the Neubauer Professor of Computer Science at University of Chicago. She received her PhD in Electrical and Computer Engineering from University of Maryland, College Park in 1999. 
She joined the University of Chicago after spending 6 years in industry labs (Bell Labs, NJ and Microsoft Research Asia) and 12 years at the University of California, Santa Barbara. At UChicago, she co-directs the SAND Lab (Systems, Algorithms, Networking and Data). She is an IEEE Fellow, a World Technology Network Fellow, and a recipient of MIT Technology Review&#8217;s TR-35 Award (Young Innovators under 35), the Bell Labs President\u2019s Gold Award, and a Google Faculty Award. Her work has been covered by media outlets such as Scientific American, the New York Times, the Boston Globe, the LA Times, and MIT Tech Review. She served as PC co-chair for MobiCom and DySPAN, is the general co-chair for HotNets 2020, is on the steering committee of MobiCom, and chairs the SIGMOBILE Highlights committee.<\/p>\n<p><font color=\"red\">10:10 &#8211; 10:40<\/font> Break<\/p>\n<hr>\n<p><font color=\"red\">10:40 &#8211; 11:40<\/font> Session 1 &#8211; <i>Cameras are becoming smarter<\/i><\/p>\n<p><strong>Networked Cameras Are the New Big Data Clusters<\/strong><br \/>\nJunchen Jiang, Yuhao Zhou (University of Chicago), Ganesh Ananthanarayanan, Yuanchao Shu (Microsoft Research), Andrew A. 
Chien (University of Chicago)<\/p>\n<p><strong>Live Video Analytics with FPGA-based Smart Cameras<\/strong><br \/>\nShang Wang (University of Electronic Science and Technology of China, Microsoft Research), Chen Zhang, Yuanchao Shu, Yunxin Liu (Microsoft Research)<\/p>\n<p><strong>Space-Time Vehicle Tracking at the Edge of the Network<\/strong><br \/>\nZhuangdi Xu, Kishore Ramachandran (Georgia Tech), Sayan Sinha (Indian Institute of Technology Kharagpur)<\/p>\n<p><font color=\"red\">11:40 &#8211; 13:00<\/font> Lunch<\/p>\n<hr>\n<p><font color=\"red\">13:00 &#8211; 14:00<\/font> Keynote II &#8211; <b>360\u00b0 and 4K Video Streaming for Mobile Devices<\/b><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/www.cs.utexas.edu\/~lili\/\">Prof. Lili Qiu<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, University of Texas at Austin<\/p>\n<p>The popularity of 360\u00b0 and 4K videos has grown rapidly due to the immersive user experience. 360\u00b0 videos are displayed as a panorama, and the view automatically adapts to the viewer\u2019s head movement. Existing systems stream 360\u00b0 videos in the same way as regular videos: all data of the panoramic view is transmitted. This is wasteful, since a user only views a small portion of the 360\u00b0 view. To save bandwidth, recent works propose tile-based streaming, which divides the panoramic view into multiple smaller tiles and streams only the tiles within a user\u2019s field of view (FoV), predicted from the recent head position. Interestingly, tile-based streaming has only been simulated or implemented on desktops. We find that it cannot run in real time even on the latest smartphones (e.g., Samsung S7, Samsung S8, and Huawei Mate 9) due to hardware and software limitations. Moreover, it results in significant video quality degradation due to head movement prediction error, which is hard to avoid. 
Motivated by these observations, we develop a novel tile-based layered approach to stream 360\u00b0 content on smartphones that avoids bandwidth wastage while maintaining high video quality.<\/p>\n<p>Next, we explore the feasibility of supporting live 4K video streaming over wireless networks using commodity devices. Coding and streaming live 4K videos incurs prohibitive costs to the network and end system. We propose a novel system, which consists of (i) easy-to-compute layered video coding to seamlessly adapt to unpredictable wireless link fluctuations, (ii) an efficient GPU implementation of video coding on commodity devices, and (iii) effective use of both WiFi and WiGig through delayed video adaptation and smart scheduling. Using real experiments and emulation, we demonstrate the feasibility and effectiveness of our system.<\/p>\n<p><u>Bio:<\/u> Lili Qiu is a Professor in the Computer Science Department at UT Austin. She received a Ph.D. in Computer Science from Cornell University in 2001. She was a researcher at Microsoft Research (Redmond, WA) from 2001 to 2004, and joined UT Austin in 2005. She is an IEEE Fellow, an ACM Fellow, and an ACM Distinguished Scientist. She has also received an NSF CAREER Award, a Google Faculty Research Award, and best paper awards at MobiSys\u201918 and ICNP\u201917. 
<\/p>\n<p><font color=\"red\">14:00 &#8211; 14:30<\/font> Break<\/p>\n<hr>\n<p><font color=\"red\">14:30 &#8211; 15:30<\/font> Session 2 &#8211; <i>ML for videos<\/i><\/p>\n<p><strong>Distilled Split Deep Neural Networks for Edge-Assisted Real-Time Systems<\/strong><br \/>\nYoshitomo Matsubara, Sabur Hassan Baidya, Davide Callegaro, Marco Levorato, Sameer Singh (University of California, Irvine)<\/p>\n<p><strong>Cracking open the DNN black-box: Video Analytics with DNNs across the Camera-Cloud Boundary<\/strong><br \/>\nJohn Emmons, Sadjad Fouladi (Stanford University), Ganesh Ananthanarayanan (Microsoft Research), Shivaram Venkataraman (University of Wisconsin-Madison), Silvio Savarese, Keith Winstein (Stanford University)<\/p>\n<p><strong>secGAN: A Cycle-Consistent GAN for Securely-Recoverable Video Transformation<\/strong><br \/>\nHao Wu, Jinghao Feng, Xuejin Tian, Fengyuan Xu, Sheng Zhong (Nanjing University), Yunxin Liu (Microsoft Research), XiaoFeng Wang (Indiana University Bloomington)<\/p>\n<p><font color=\"red\">15:00 &#8211; 15:30<\/font> Break<\/p>\n<hr>\n<p><font color=\"red\">15:30 &#8211; 16:10<\/font> Session 3 &#8211; <i>Playing nice with the network<\/i><\/p>\n<p><strong>Client-side Bandwidth Estimation Technique for Adaptive Streaming of a Browser Based Free-Viewpoint Application<\/strong><br \/>\nTilak Varisetty, David Dietrich (Leibniz Universit\u00e4t Hannover)<\/p>\n<p><strong>Sensor Training Data Reduction for Autonomous Vehicles<\/strong><br \/>\nMatthew Tomei, Alex Schwing, (University of Illinois at Urbana-Champaign), Satish Narayanasamy (University of Michigan), Rakesh Kumar (University of Illinois at Urbana-Champaign)<\/p>\n<p><font color=\"red\">16:10 &#8211; 17:00<\/font> Poster and demo session<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<!-- \/wp:freeform --><!-- \/wp:msr\/content-tab --><!-- \/wp:msr\/content-tabs -->","tab-content":[{"id":0,"name":"About","content":"<span 
data-contrast=\"auto\">Cameras are everywhere!\u00a0<\/span><span data-contrast=\"auto\">Analyzing live videos from these cameras has great potential to impact science and society.<\/span><span data-contrast=\"auto\">\u00a0Enterprise cameras are deployed for a wide variety of commercial and security reasons. Consumer devices themselves have cameras, and their users are interested in analyzing the live videos they capture. We are living in a golden era for computer vision and AI, fueled by game-changing infrastructure advancements, breakthroughs in machine learning, and copious training data, which have greatly expanded their range of capabilities. Live video analytics has the potential to impact a wide range of verticals, including public safety, traffic efficiency, infrastructure planning, entertainment, and home safety.<\/span>\r\n\r\n<span data-contrast=\"auto\">Analyzing live video streams is arguably the most challenging domain for \"systems-for-AI\". Unlike text or numeric processing, video analytics requires higher bandwidth, consumes considerable compute cycles, necessitates richer query semantics, and demands tighter security &amp; privacy guarantees. Video analytics has a symbiotic relationship with edge compute infrastructure, which makes compute resources available closer to the data sources (i.e., cameras). All aspects of video analytics call for a \u201cgreen-field\u201d design, from vision algorithms to the systems processing stack, networking links, and hybrid edge-cloud infrastructure. Such a holistic design will democratize live video analytics, enabling any organization with cameras to obtain value from them.<\/span>"},{"id":1,"name":"Call for Papers","content":"<strong>Topics of Interest:<\/strong>\r\nThis workshop calls for research on various issues and solutions that can enable live video analytics with a central role for edge computing. 
Topics of interest include (but are not limited to) the following:\r\n<ul>\r\n \t<li>Low-cost video analytics<\/li>\r\n \t<li>Deployment experience with large arrays of cameras<\/li>\r\n \t<li>Storage of video data and metadata<\/li>\r\n \t<li>Interactive querying of video streams<\/li>\r\n \t<li>Network design for video streams<\/li>\r\n \t<li>Hybrid cloud architectures for video processing<\/li>\r\n \t<li>Scheduling for multi-tenant video processing<\/li>\r\n \t<li>Training of vision neural networks<\/li>\r\n \t<li>Edge-based processor architectures for video processing<\/li>\r\n \t<li>Energy-efficient system design for video analytics<\/li>\r\n \t<li>Intelligent camera designs<\/li>\r\n \t<li>Vehicular and drone-based video analytics<\/li>\r\n \t<li>Tools and datasets for video analytics systems<\/li>\r\n \t<li>Novel vision applications<\/li>\r\n \t<li>Video analytics for social good<\/li>\r\n \t<li>Secure processing of video analytics<\/li>\r\n \t<li>Privacy-preserving techniques for video processing<\/li>\r\n<\/ul>\r\n<strong>Submission Instructions:<\/strong>\r\nSubmissions must be original, unpublished work, and not under consideration at another conference or journal. Submitted papers must be no longer than five (5) pages, including all figures and tables, followed by as many pages as necessary for bibliographic references. Submissions should be in two-column 10pt ACM format with author names and affiliations for single-blind peer review. The workshop also solicits the submission of research, platform, and product demonstrations. A demo submission should be a summary or extended abstract describing the research to be presented, at most one (1) page in a font no smaller than 10 points, in PDF format. The demo submission title should begin with \"Demo:\".\r\n\r\nAuthors of accepted papers are expected to present their work at the workshop. 
Papers accepted for presentation will be published in the MobiCom Workshop Proceedings and made available in the ACM Digital Library. You may find these <a href=\"https:\/\/sigmobile.org\/mobicom\/2018\/files\/sig-alternate-10pt.cls\">LaTeX<\/a> and <a href=\"https:\/\/sigmobile.org\/mobicom\/2018\/files\/word-acm-10pt-on-12pt-7.0x9.25.doc\">MS-Word<\/a> templates useful in complying with the above requirements.\r\n\r\nSubmit your paper at <a href=\"https:\/\/hotedgevideo19.hotcrp.com\/\">https:\/\/hotedgevideo19.hotcrp.com\/<\/a>.\r\n\r\n<strong>Important Dates:<\/strong>\r\nPaper Submissions Deadline: <strong>July 9, 2019<\/strong>\r\nAcceptance Notification: <strong>July 31, 2019<\/strong>\r\nCamera-ready Papers Due: <strong>August 15, 2019<\/strong>\r\nWorkshop Date: <strong>October 21, 2019<\/strong>\r\n\r\n&nbsp;"},{"id":2,"name":"Organizers","content":"Program Committee:\r\n<ul>\r\n \t<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/ga\">Ganesh Ananthanarayanan<\/a>, Microsoft Research (co-chair)<\/li>\r\n \t<li><a href=\"https:\/\/apollo.smu.edu.sg\/\">Rajesh Balan<\/a>, Singapore Management University<\/li>\r\n \t<li><a href=\"http:\/\/www.winlab.rutgers.edu\/~gruteser\">Marco Gruteser<\/a>, Rutgers University<\/li>\r\n \t<li><a href=\"https:\/\/people.cs.uchicago.edu\/~junchenj\">Junchen Jiang<\/a>, University of Chicago<\/li>\r\n \t<li><a href=\"http:\/\/www.roblkw.com\">Robert LiKamWa<\/a>, Arizona State University<\/li>\r\n \t<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yunliu\">Yunxin Liu<\/a>, Microsoft Research (co-chair)<\/li>\r\n \t<li><a href=\"https:\/\/www.cc.gatech.edu\/~rama\">Kishore Ramachandran<\/a>, Georgia Tech<\/li>\r\n \t<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yushu\">Yuanchao Shu<\/a>, Microsoft Research (co-chair)<\/li>\r\n \t<li><a href=\"http:\/\/people.cs.uchicago.edu\/~htzheng\">Heather Zheng<\/a>, University of 
Chicago<\/li>\r\n<\/ul>"},{"id":3,"name":"Program","content":"<font color=\"red\">09:00 - 09:10<\/font> Opening remarks\r\n\r\n<font color=\"red\">09:10 - 10:10<\/font> Keynote I - <b>Deep Learning in Mobile Systems, Experiences and Pitfalls<\/b>\r\n<a href=\"https:\/\/people.cs.uchicago.edu\/~htzheng\/\">Prof. Heather Zheng<\/a>, University of Chicago\r\n\r\nDeep learning (neural networks) is being rapidly adopted by (mobile) researchers and companies to solve a wide range of computational problems. But is it a panacea for all the (traditionally hard) problems? In this talk, I will share experiences from my lab on applying today\u2019s deep learning models to mobile systems design, and discuss vulnerabilities inherent in many existing deep learning models that make them easy to compromise, as well as potential defenses.\r\n\r\n<u>Bio:<\/u> Dr. Heather Zheng is the Neubauer Professor of Computer Science at the University of Chicago. She received her PhD in Electrical and Computer Engineering from the University of Maryland, College Park in 1999. She joined the University of Chicago after spending 6 years in industry labs (Bell Labs, NJ and Microsoft Research Asia) and 12 years at the University of California, Santa Barbara. At UChicago, she co-directs the SAND Lab (Systems, Algorithms, Networking and Data). She is an IEEE Fellow, a World Technology Network Fellow, and a recipient of MIT Technology Review's TR-35 Award (Young Innovators under 35), the Bell Labs President\u2019s Gold Award, and a Google Faculty Award. Her work has been covered by media outlets such as Scientific American, the New York Times, the Boston Globe, the LA Times, and MIT Tech Review. She served as PC co-chair for MobiCom and DySPAN, and is the general co-chair for HotNets 2020. She is on the steering committee of MobiCom and chairs the SIGMOBILE Highlights committee. 
\r\n\r\n\r\n<font color=\"red\">10:10 - 10:40<\/font> Break\r\n\r\n<hr>\r\n\r\n<font color=\"red\">10:40 - 11:40<\/font> Session 1 - <i>Cameras are becoming smarter<\/i>\r\n\r\n<strong>Networked Cameras Are the New Big Data Clusters<\/strong>\r\nJunchen Jiang, Yuhao Zhou (University of Chicago), Ganesh Ananthanarayanan, Yuanchao Shu (Microsoft Research), Andrew A. Chien (University of Chicago)\r\n\r\n<strong>Live Video Analytics with FPGA-based Smart Cameras<\/strong>\r\nShang Wang (University of Electronic Science and Technology of China, Microsoft Research), Chen Zhang, Yuanchao Shu, Yunxin Liu (Microsoft Research)\r\n\r\n<strong>Space-Time Vehicle Tracking at the Edge of the Network<\/strong>\r\nZhuangdi Xu, Kishore Ramachandran (Georgia Tech), Sayan Sinha (Indian Institute of Technology Kharagpur)\r\n\r\n<font color=\"red\">11:40 - 13:00<\/font> Lunch\r\n\r\n<hr>\r\n\r\n<font color=\"red\">13:00 - 14:00<\/font> Keynote II - <b>360\u00b0 and 4K Video Streaming for Mobile Devices<\/b>\r\n<a href=\"https:\/\/www.cs.utexas.edu\/~lili\/\">Prof. Lili Qiu<\/a>, University of Texas at Austin\r\n\r\nThe popularity of 360\u00b0 and 4K videos has grown rapidly due to the immersive user experience. 360\u00b0 videos are displayed as a panorama, and the view automatically adapts to the viewer\u2019s head movement. Existing systems stream 360\u00b0 videos in the same way as regular videos, transmitting all data of the panoramic view. This is wasteful since a user only views a small portion of the 360\u00b0 view. To save bandwidth, recent works propose tile-based streaming, which divides the panoramic view into multiple smaller tiles and streams only the tiles within a user\u2019s field of view (FoV), predicted from the recent head position. Interestingly, tile-based streaming has only been simulated or implemented on desktops. 
We find that it cannot run in real time even on the latest smartphones (e.g., Samsung S7, Samsung S8, and Huawei Mate 9) due to hardware and software limitations. Moreover, it results in significant video quality degradation due to head movement prediction error, which is hard to avoid. Motivated by these observations, we develop a novel tile-based layered approach to stream 360\u00b0 content on smartphones that avoids bandwidth wastage while maintaining high video quality.\r\n\r\nNext, we explore the feasibility of supporting live 4K video streaming over wireless networks using commodity devices. Coding and streaming live 4K videos incurs prohibitive costs on the network and end system. We propose a novel system, which consists of (i) easy-to-compute layered video coding that seamlessly adapts to unpredictable wireless link fluctuations, (ii) an efficient GPU implementation of video coding on commodity devices, and (iii) effective use of both WiFi and WiGig through delayed video adaptation and smart scheduling. Using real experiments and emulation, we demonstrate the feasibility and effectiveness of our system.\r\n\r\n<u>Bio:<\/u> Lili Qiu is a Professor in the Computer Science Department at UT Austin. She received a Ph.D. in Computer Science from Cornell University in 2001. She was a researcher at Microsoft Research (Redmond, WA) from 2001 to 2004 and joined UT Austin in 2005. She is an IEEE Fellow, an ACM Fellow, and an ACM Distinguished Scientist. She has also received an NSF CAREER Award, a Google Faculty Research Award, and best paper awards at MobiSys\u201918 and ICNP\u201917. 
\r\n\r\n<font color=\"red\">14:00 - 14:30<\/font> Break\r\n\r\n<hr>\r\n\r\n<font color=\"red\">14:30 - 15:30<\/font> Session 2 - <i>ML for videos<\/i>\r\n\r\n<strong>Distilled Split Deep Neural Networks for Edge-Assisted Real-Time Systems<\/strong>\r\nYoshitomo Matsubara, Sabur Hassan Baidya, Davide Callegaro, Marco Levorato, Sameer Singh (University of California, Irvine)\r\n\r\n<strong>Cracking open the DNN black-box: Video Analytics with DNNs across the Camera-Cloud Boundary<\/strong>\r\nJohn Emmons, Sadjad Fouladi (Stanford University), Ganesh Ananthanarayanan (Microsoft Research), Shivaram Venkataraman (University of Wisconsin-Madison), Silvio Savarese, Keith Winstein (Stanford University)\r\n\r\n<strong>secGAN: A Cycle-Consistent GAN for Securely-Recoverable Video Transformation<\/strong>\r\nHao Wu, Jinghao Feng, Xuejin Tian, Fengyuan Xu, Sheng Zhong (Nanjing University), Yunxin Liu (Microsoft Research), XiaoFeng Wang (Indiana University Bloomington)\r\n\r\n<font color=\"red\">15:00 - 15:30<\/font> Break\r\n\r\n<hr>\r\n\r\n<font color=\"red\">15:30 - 16:10<\/font> Session 3 - <i>Playing nice with the network<\/i>\r\n\r\n<strong>Client-side Bandwidth Estimation Technique for Adaptive Streaming of a Browser Based Free-Viewpoint Application<\/strong>\r\nTilak Varisetty, David Dietrich (Leibniz Universit\u00e4t Hannover)\r\n\r\n<strong>Sensor Training Data Reduction for Autonomous Vehicles<\/strong>\r\nMatthew Tomei, Alex Schwing, (University of Illinois at Urbana-Champaign), Satish Narayanasamy (University of Michigan), Rakesh Kumar (University of Illinois at Urbana-Champaign)\r\n\r\n<font color=\"red\">16:10 - 17:00<\/font> Poster and demo session"}],"msr_startdate":"2019-10-21","msr_enddate":"2019-10-21","msr_event_time":"","msr_location":"Los Cabos, Mexico","msr_event_link":"","msr_event_recording_link":"","msr_startdate_formatted":"October 21, 2019","msr_register_text":"Watch 
now","msr_cta_link":"","msr_cta_text":"","msr_cta_bi_name":"","featured_image_thumbnail":"<img width=\"960\" height=\"360\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/05\/los_cabos_header2.jpg\" class=\"img-object-cover\" alt=\"a rocky island in the middle of a body of water with Arch of Cabo San Lucas in the background\" decoding=\"async\" loading=\"lazy\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/05\/los_cabos_header2.jpg 1920w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/05\/los_cabos_header2-300x113.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/05\/los_cabos_header2-768x288.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/05\/los_cabos_header2-1024x384.jpg 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/05\/los_cabos_header2-1600x600.jpg 1600w\" sizes=\"auto, (max-width: 960px) 100vw, 960px\" \/>","event_excerpt":"Cameras are everywhere!\u00a0Analyzing live videos from these cameras has great potential to impact science and society.\u00a0Enterprise cameras are deployed for a wide variety of commercial and security reasons. Consumer devices themselves have cameras with users interested in analyzing live videos from these devices. 
We are all living in the golden era for computer vision and AI that is being fueled by game-changing systemic infrastructure advancements, breakthroughs in machine learning, and copious training data, largely improving&hellip;","msr_research_lab":[],"related-researchers":[{"type":"user_nicename","display_name":"Ganesh Ananthanarayanan","user_id":31834,"people_section":"Section name 2","alias":"ga"}],"msr_impact_theme":[],"related-academic-programs":[],"related-groups":[],"related-projects":[],"related-opportunities":[],"related-publications":[],"related-videos":[],"related-posts":[],"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/585271","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-event"}],"version-history":[{"count":22,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/585271\/revisions"}],"predecessor-version":[{"id":1146986,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/585271\/revisions\/1146986"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media\/585295"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=585271"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=585271"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=585271"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=585271"},{"taxonomy":"msr-video-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research
\/wp-json\/wp\/v2\/msr-video-type?post=585271"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=585271"},{"taxonomy":"msr-program-audience","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-program-audience?post=585271"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=585271"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=585271"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}