{"id":731629,"date":"2021-03-08T12:23:34","date_gmt":"2021-03-08T20:23:34","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-event&#038;p=731629"},"modified":"2025-08-06T11:51:43","modified_gmt":"2025-08-06T18:51:43","slug":"the-3rd-workshop-on-hot-topics-in-video-analytics-and-intelligent-edges","status":"publish","type":"msr-event","link":"https:\/\/www.microsoft.com\/en-us\/research\/event\/the-3rd-workshop-on-hot-topics-in-video-analytics-and-intelligent-edges\/","title":{"rendered":"The 3rd Workshop on Hot Topics in Video Analytics and Intelligent Edges"},"content":{"rendered":"\n\n<p>(in conjunction with\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/www.sigmobile.org\/mobicom\/2021\/\">ACM MobiCom 2021<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>)<\/p>\n<p><strong>Paper Submissions Deadline:<\/strong> <strong><del>May 21<\/del> June 4, 2021<\/strong><\/p>\n<p><strong>Submission Site:<\/strong> <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/hotedgevideo21.hotcrp.com\/\">https:\/\/hotedgevideo21.hotcrp.com\/<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p><strong>CFP<\/strong><strong>: <\/strong><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2021\/03\/HotEdgeVideoFlyer.pdf\">HotEdgeVideo21.pdf<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p class=\"\"><strong>Past Workshops:<\/strong><br \/>\n<a target=\"_blank\" class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/aka.ms\/hotedgevideo20\">HotEdgeVideo 2020<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<a target=\"_blank\" class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/aka.ms\/hotedgevideo19\">HotEdgeVideo 2019<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<p><span data-contrast=\"auto\">We are all living in the golden era of AI that is being fueled by game-changing systemic infrastructure advancements. Among numerous applications, video analytics in particular, has shown tremendous\u00a0potential to impact science and society due to breakthroughs in machine learning, copious training data, and pervasive deployment of video capture devices. <\/span><\/p>\n<p><span data-contrast=\"auto\">Analyzing live video streams is arguably the most challenging of domains for \u201csystems-for-AI\u201d. Unlike text or numeric processing, video analytics require higher bandwidth, consume considerable compute cycles for processing, necessitate richer query semantics, and demand tighter security & privacy guarantees. Video analytics has a symbiotic relationship with edge compute infrastructure. Edge computing makes compute resources available closer to the data sources (e.g., cameras and smartphones). All aspects of video analytics call to be designed \u201cgreen-field\u201d, from vision algorithms, to the systems processing stack and networking links, and hybrid edge-cloud infrastructure. 
Such a holistic design will enable the democratization of live video analytics, such that any organization with cameras can obtain value from its video.<\/span><\/p>\n<p>This workshop calls for research on issues and solutions that enable live video analytics with a central role for edge computing. Topics of interest include (but are not limited to) the following:<\/p>\n<ul>\n<li>Low-cost video analytics<\/li>\n<li>Deployment experience with large arrays of cameras<\/li>\n<li>Storage of video data and metadata<\/li>\n<li>Interactive querying of video streams<\/li>\n<li>Network design for video streams<\/li>\n<li>Hybrid cloud architectures for video processing<\/li>\n<li>Scheduling for multi-tenant video processing<\/li>\n<li>Training of vision neural networks<\/li>\n<li>Edge-based processor architectures for video processing<\/li>\n<li>Energy-efficient system design for video analytics<\/li>\n<li>Intelligent camera designs<\/li>\n<li>Vehicular and drone-based video analytics<\/li>\n<li>Tools and datasets for video analytics systems<\/li>\n<li>Novel vision applications<\/li>\n<li>Video analytics for social good<\/li>\n<li>Secure processing of video analytics<\/li>\n<li>Privacy-preserving techniques for video processing<\/li>\n<li>Emerging forms of immersive video streams, e.g., 360-degree or volumetric video<\/li>\n<\/ul>\n<p><strong>Submission Instructions:<\/strong><br \/>\nSubmissions must be original, unpublished work that is not under consideration at another conference or journal. Submitted papers must be no longer than five (5) pages, including all figures and tables, followed by as many pages as necessary for bibliographic references. Submissions should be in two-column 10pt ACM format with authors\u2019 names and affiliations for single-blind peer review. The workshop also solicits the submission of research, platform, and product demonstrations. A demo submission should be a summary or extended abstract describing the research to be presented, at most one (1) page in PDF format with fonts no smaller than 10 points. Demo submission titles should begin with \u201cDemo:\u201d.<\/p>\n<p>Authors of accepted papers are expected to present their work at the workshop. Papers accepted for presentation will be published in the MobiCom Workshop Proceedings and made available in the ACM Digital Library. You may find these <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/www.acm.org\/publications\/proceedings-template\">templates<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> useful in complying with the formatting requirements.<\/p>\n<p><strong>Submission site:<\/strong> <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/hotedgevideo21.hotcrp.com\/\">https:\/\/hotedgevideo21.hotcrp.com\/<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>.<\/p>\n<p><strong>Important Dates:<\/strong><br \/>\nPaper Submissions Deadline: <strong style=\"color: red\"><del>May 21<\/del> June 4, 2021<\/strong><br \/>\nAcceptance Notification: <strong>June 30, 2021<\/strong><br \/>\nCamera-ready Papers Due:\u00a0<strong>July 31, 2021<\/strong><br \/>\nWorkshop Date: <strong>March 
28, 2022<\/strong><\/p>\n<p class=\"\"><strong>Program Committee:<\/strong><\/p>\n<ul>\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/ga\">Ganesh Ananthanarayanan<\/a>, Microsoft<\/li>\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/www.cc.gatech.edu\/~jarulraj\/\">Joy Arulraj<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Georgia Institute of Technology<\/li>\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/engineering.purdue.edu\/~ychu\/\">Y. Charlie Hu<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Purdue University<\/li>\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/people.cs.uchicago.edu\/~junchenj\">Junchen Jiang<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, University of Chicago (co-chair)<\/li>\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/nikarian\/\">Nikolaos Karianakis<\/a>, Microsoft<\/li>\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/cse.snu.ac.kr\/en\/professor\/youngki-lee\">Youngki Lee<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Seoul National University<\/li>\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/meteor.ame.asu.edu\/\">Robert LiKamWa<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Arizona State University (co-chair)<\/li>\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yushu\">Yuanchao Shu<\/a>, Microsoft (co-chair)<\/li>\n<\/ul>\n<p><strong>The workshop will be held in a hybrid mode on March 28th. All times below are in Central Standard Time.<\/strong><\/p>\n<p><span style=\"color: red\">08:00 \u2013 08:10<\/span> Opening remarks<\/p>\n<p><span style=\"color: red\">08:10 \u2013 09:10<\/span> Keynote I &#8211; <strong>Life-immersive AI for Family Interaction across Space and Time<\/strong><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/www.inseokhwang.com\/\">Inseok Hwang<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, POSTECH<\/p>\n<p>Computer-mediated interaction services connect people over a distance, and often enrich the interaction further with various forms of media sharing. However, people in such an interaction are often \u2018locked in a frame\u2019\u2014an interaction mode, a point in time, or a context of either person. We observe that such lock-ins make it difficult to shape the interaction to be mutually symmetric and empathetic. In this talk, I will present a semantic-equivalent melding of space and time to create a new form of empathetic family interaction. As initial attempts, I will introduce two prototype systems, HomeMeld and MomentMeld, which apply AI to meld space and time, respectively. HomeMeld provides a sense of living together to a family living apart, with AI-driven autonomous robotic avatars navigating to the semantic-equivalent location of the other person in one\u2019s house. 
MomentMeld utilizes an ensemble of visual AI models to expand interaction topics beyond the present by matching semantic-equivalent photos of each other taken at different times. In-the-wild experiments reveal that HomeMeld and MomentMeld open new forms of empathetic family interaction by computationally melding space and time.<\/p>\n<p><strong>Bio:<\/strong> Inseok Hwang is an Assistant Professor in the Department of Computer Science and Engineering at POSTECH. Before joining POSTECH in 2020, he spent six years (2014\u20132020) as a Research Staff Member at IBM Research in Austin, Texas. His main research theme is \u201cintelligent computing infusing real-life\u201d, with special focuses on mobile computing, human-centered systems, and applied AI. He is a recipient of the Best Paper Award from ACM CSCW and multiple Best Demo Awards from ACM MobiSys. He has been actively serving on technical program committees and editorial boards of premier venues in mobile and human-centered computing. He is a prolific inventor, with 89 U.S. patents issued to date, in recognition of which he was appointed an IBM Master Inventor. He obtained his Ph.D. in Computer Science from KAIST in 2013.<\/p>\n<p><span style=\"color: red\">09:10 \u2013 09:40<\/span> Break<\/p>\n<p><span style=\"color: red\">09:40 \u2013 10:40<\/span> Session 1<\/p>\n<p><strong>Towards Memory-Efficient Inference in Edge Video Analytics<\/strong><br \/>\nArthi Padmanabhan (Microsoft &amp; UCLA), Anand Padmanabha Iyer (Microsoft), Ganesh Ananthanarayanan (Microsoft), Yuanchao Shu (Microsoft), Nikolaos Karianakis (Microsoft), Guoqing Harry Xu (UCLA), Ravi Netravali (Princeton University)<\/p>\n<p><strong>Decentralized Modular Architecture for Live Video Analytics at the Edge<\/strong><br \/>\nSri Pramodh Rachuri (Stony Brook University), Francesco Bronzino (Universit\u00e9 Savoie Mont Blanc), Shubham Jain (Stony Brook University)<\/p>\n<p><strong>The Case for Admission Control of Mobile Cameras into the Live Video Analytics Pipeline<\/strong><br \/>\nFrancescomaria Faticanti (Fondazione Bruno Kessler &amp; University of Trento), Francesco Bronzino (Universit\u00e9 Savoie Mont Blanc), Francesco De Pellegrini (University of Avignon)<\/p>\n<p><span style=\"color: red\">10:40 \u2013 11:00<\/span> Break<\/p>\n<p><span style=\"color: red\">11:00 \u2013 11:40<\/span> Session 2<\/p>\n<p><strong>Enabling High Frame-rate UHD Real-time Communication with Frame-Skipping<\/strong><br \/>\nTingfeng Wang (Beijing University of Posts and Telecommunications), Zili Meng (Tsinghua University), Mingwei Xu (Tsinghua University), Rui Han (Tencent), Honghao Liu (Tencent)<\/p>\n<p><strong>Characterizing Real-Time Dense Point Cloud Capture and Streaming on Mobile Devices<\/strong><br \/>\nJinhan Hu (Arizona State University), Aashiq Shaikh (Arizona State University), Alireza Bahremand (Arizona State University), Robert LiKamWa (Arizona State University)<\/p>\n<p><span style=\"color: red\">11:40 \u2013 13:00<\/span> Lunch<\/p>\n<p><span style=\"color: red\">13:00 \u2013 14:00<\/span> Keynote II &#8211; <strong>TSM: Temporal Shift Module for Efficient and Scalable Video Understanding on Edge Devices<\/strong><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/songhan.mit.edu\/\">Song Han<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, MIT<\/p>\n<p>Today\u2019s AI is too big. 
Deep neural networks demand extraordinary levels of data and computation, and therefore power, for training and inference. This severely limits the practical deployment of AI on edge devices. The explosive growth of video requires video understanding at high accuracy and low computation cost. Conventional 2D CNNs are computationally cheap but cannot capture temporal relationships; 3D-CNN-based methods can achieve good performance but are computationally intensive. We propose a generic and effective Temporal Shift Module (TSM) that enjoys both high efficiency and high performance for video understanding. The key idea of TSM is to shift part of the channels along the temporal dimension, thus facilitating information exchange among neighboring frames. It can be inserted into 2D CNNs to achieve temporal modeling with zero additional computation and zero additional parameters. TSM achieves high frame rates of 74 fps and 29 fps for online video recognition on a Jetson Nano and a mobile phone, respectively. TSM also scales better than 3D networks, enabling large-scale Kinetics training in 15 minutes. We hope such TinyML techniques can make video understanding smaller, faster, and more efficient for both training and deployment.<\/p>\n<p><strong>Bio:<\/strong> Song Han is an assistant professor in MIT\u2019s EECS department. He received his PhD degree from Stanford University. His research focuses on efficient deep learning computing. He proposed the \u201cdeep compression\u201d technique, which can reduce neural network size by an order of magnitude without losing accuracy, and the hardware implementation \u201cefficient inference engine\u201d that first exploited pruning and weight sparsity in deep learning accelerators. His team\u2019s work on hardware-aware neural architecture search, which brings deep learning to IoT devices, was highlighted by MIT News, Wired, Qualcomm News, VentureBeat, and IEEE Spectrum, was integrated into PyTorch and AutoGluon, and received many low-power computer vision contest awards at flagship AI conferences (CVPR\u201919, ICCV\u201919, and NeurIPS\u201919). Song received Best Paper awards at ICLR\u201916 and FPGA\u201917, as well as the Amazon Machine Learning Research Award, the SONY Faculty Award, the Facebook Faculty Award, and the NVIDIA Academic Partnership Award. 
Song was named one of MIT Technology Review\u2019s \u201c35 Innovators Under 35\u201d for his contribution to the \u201cdeep compression\u201d technique, which \u201clets powerful artificial intelligence (AI) programs run more efficiently on low-power mobile devices.\u201d Song received the NSF CAREER Award for \u201cefficient algorithms and hardware for accelerated machine learning\u201d and the IEEE \u201cAI\u2019s 10 to Watch: The Future of AI\u201d award.<\/p>\n<p><span style=\"color: red\">14:00 \u2013 14:40<\/span> Session 3<\/p>\n<p><strong>Auto-SDA: Automated Video-based Social Distancing Analyzer<\/strong><br \/>\nMahshid Ghasemi (Columbia University), Zoran Kostic (Columbia University), Javad Ghaderi (Columbia University), Gil Zussman (Columbia University)<\/p>\n<p><strong>Demo: Cost Effective Processing of Detection-driven Video Analytics at the Edge<\/strong><br \/>\nMd Adnan Arefeen (University of Missouri-Kansas City), Md Yusuf Sarwar Uddin (University of Missouri-Kansas City)<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The 3rd Workshop on Hot Topics in Video Analytics and Intelligent Edges will be hosted in conjunction with\u00a0ACM MobiCom 2021 on March 28, 2022.<\/p>\n","protected":false},"featured_media":731647,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr_startdate":"2022-03-28","msr_enddate":"2022-03-28","msr_location":"Hybrid","msr_expirationdate":"","msr_event_recording_link":"","msr_event_link":"","msr_event_link_redirect":false,"msr_event_time":"","msr_hide_region":true,"msr_private_event":false,"msr_hide_image_in_river":0,"footnotes":""},"research-area":[13556,13547],"msr-region":[256048],"msr-event-type":[210063],"msr-video-type":[],"msr-locale":[268875],"msr-program-audience":[],"msr-post-option":[],"msr-impact-theme":[],"class_list":["post-731629","msr-event","type-msr-event","status-publish","has-post-thumbnail","hentry","msr-research-area-artificial-intelligence","msr-research-area-systems-and-networking","msr-region-global","msr-event-type-workshop","msr-locale-en_us"],"msr_about":"<!-- wp:msr\/event-details {\"title\":\"The 3rd Workshop on Hot Topics in Video Analytics and Intelligent Edges\",\"backgroundColor\":\"grey\",\"image\":{\"id\":731647,\"url\":\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2021\/03\/banner.png\",\"alt\":\"\"},\"imageType\":\"full-bleed\"} \/-->\n\n<!-- wp:msr\/content-tabs --><!-- wp:msr\/content-tab {\"title\":\"About\"} --><!-- wp:freeform --><p>(in conjunction with\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/www.sigmobile.org\/mobicom\/2021\/\">ACM MobiCom 2021<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>)<\/p>\n<p><strong>Paper Submissions Deadline:<\/strong> <strong><del>May 21<\/del> June 4, 2021<\/strong><\/p>\n<p><strong>Submission Site:<\/strong> <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/hotedgevideo21.hotcrp.com\/\">https:\/\/hotedgevideo21.hotcrp.com\/<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p><strong>CFP:<\/strong> <a 
href=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2021\/03\/HotEdgeVideoFlyer.pdf\">HotEdgeVideo21.pdf<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p class=\"\"><strong>Past Workshops:<\/strong><br \/>\n<a target=\"_blank\" class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/aka.ms\/hotedgevideo20\">HotEdgeVideo 2020<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<a target=\"_blank\" class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/aka.ms\/hotedgevideo19\">HotEdgeVideo 2019<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<p><span data-contrast=\"auto\">We are all living in the golden era of AI that is being fueled by game-changing systemic infrastructure advancements. Among numerous applications, video analytics in particular, has shown tremendous\u00a0potential to impact science and society due to breakthroughs in machine learning, copious training data, and pervasive deployment of video capture devices. <\/span><\/p>\n<p><span data-contrast=\"auto\">Analyzing live video streams is arguably the most challenging of domains for \u201csystems-for-AI\u201d. Unlike text or numeric processing, video analytics require higher bandwidth, consume considerable compute cycles for processing, necessitate richer query semantics, and demand tighter security &amp; privacy guarantees. Video analytics has a symbiotic relationship with edge compute infrastructure. Edge computing makes compute resources available closer to the data sources (e.g., cameras and smartphones). All aspects of video analytics call to be designed \u201cgreen-field\u201d, from vision algorithms, to the systems processing stack and networking links, and hybrid edge-cloud infrastructure. Such a holistic design will enable the democratization of live video analytics such that any organization with cameras can obtain value from video analytics.<\/span><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<!-- \/wp:freeform --><!-- \/wp:msr\/content-tab --><!-- wp:msr\/content-tab {\"title\":\"Call for Papers\"} --><!-- wp:freeform --><p>This workshop calls for research on various issues and solutions that can enable live video analytics with the role for edge computing. 
Topics of interest include (but are not limited to) the following:<\/p>\n<ul>\n<li>Low-cost video analytics<\/li>\n<li>Deployment experience with large arrays of cameras<\/li>\n<li>Storage of video data and metadata<\/li>\n<li>Interactive querying of video streams<\/li>\n<li>Network design for video streams<\/li>\n<li>Hybrid cloud architectures for video processing<\/li>\n<li>Scheduling for multi-tenant video processing<\/li>\n<li>Training of vision neural networks<\/li>\n<li>Edge-based processor architectures for video processing<\/li>\n<li>Energy-efficient system design for video analytics<\/li>\n<li>Intelligent camera designs<\/li>\n<li>Vehicular and drone-based video analytics<\/li>\n<li>Tools and datasets for video analytics systems<\/li>\n<li>Novel vision applications<\/li>\n<li>Video analytics for social good<\/li>\n<li>Secure processing of video analytics<\/li>\n<li>Privacy-preserving techniques for video processing<\/li>\n<li>Emerging forms of immersive video streams, e.g., 360-degree or volumetric video<\/li>\n<\/ul>\n<p><strong>Submission Instructions:<\/strong><br \/>\nSubmissions must be original, unpublished work that is not under consideration at another conference or journal. Submitted papers must be no longer than five (5) pages, including all figures and tables, followed by as many pages as necessary for bibliographic references. Submissions should be in two-column 10pt ACM format with authors\u2019 names and affiliations for single-blind peer review. The workshop also solicits the submission of research, platform, and product demonstrations. A demo submission should be a summary or extended abstract describing the research to be presented, at most one (1) page in PDF format with fonts no smaller than 10 points. Demo submission titles should begin with \u201cDemo:\u201d.<\/p>\n<p>Authors of accepted papers are expected to present their work at the workshop. Papers accepted for presentation will be published in the MobiCom Workshop Proceedings and made available in the ACM Digital Library. You may find these <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/www.acm.org\/publications\/proceedings-template\">templates<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> useful in complying with the formatting requirements.<\/p>\n<p><strong>Submission site:<\/strong> <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/hotedgevideo21.hotcrp.com\/\">https:\/\/hotedgevideo21.hotcrp.com\/<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>.<\/p>\n<p><strong>Important Dates:<\/strong><br \/>\nPaper Submissions Deadline: <strong style=\"color: red\"><del>May 21<\/del> June 4, 2021<\/strong><br \/>\nAcceptance Notification: <strong>June 30, 2021<\/strong><br \/>\nCamera-ready Papers Due:\u00a0<strong>July 31, 2021<\/strong><br \/>\nWorkshop Date: <strong>March 
28, 2022<\/strong><\/p>\n<!-- \/wp:freeform --><!-- \/wp:msr\/content-tab --><!-- wp:msr\/content-tab {\"title\":\"Organizers\"} --><!-- wp:freeform --><p class=\"\"><strong>Program Committee:<\/strong><\/p>\n<ul>\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/ga\">Ganesh Ananthanarayanan<\/a>, Microsoft<\/li>\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/www.cc.gatech.edu\/~jarulraj\/\">Joy Arulraj<\/a>, Georgia Institute of Technology<\/li>\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/engineering.purdue.edu\/~ychu\/\">Y. Charlie Hu<\/a>, Purdue University<\/li>\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/people.cs.uchicago.edu\/~junchenj\">Junchen Jiang<\/a>, University of Chicago (co-chair)<\/li>\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/nikarian\/\">Nikolaos Karianakis<\/a>, Microsoft<\/li>\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/cse.snu.ac.kr\/en\/professor\/youngki-lee\">Youngki Lee<\/a>, Seoul National University<\/li>\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/meteor.ame.asu.edu\/\">Robert LiKamWa<\/a>, Arizona State University (co-chair)<\/li>\n<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yushu\">Yuanchao Shu<\/a>, Microsoft (co-chair)<\/li>\n<\/ul>\n<!-- \/wp:freeform --><!-- \/wp:msr\/content-tab --><!-- wp:msr\/content-tab {\"title\":\"Program\"} --><!-- wp:freeform --><p><strong>The workshop will be held in a hybrid mode on March 28th. All times below are in Central Standard Time.<\/strong><\/p>\n<p><span style=\"color: red\">08:00 \u2013 08:10<\/span> Opening remarks<\/p>\n<p><span style=\"color: red\">08:10 \u2013 09:10<\/span> Keynote I &#8211; <strong>Life-immersive AI for Family Interaction across Space and Time<\/strong><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/www.inseokhwang.com\/\">Inseok Hwang<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, POSTECH<\/p>\n<p>Computer-mediated interaction services connect people over a distance, and often enrich the interaction further with various forms of media sharing. However, people in such an interaction are often \u2018locked in a frame\u2019\u2014an interaction mode, a point in time, or a context of either person. We observe that such lock-ins make it difficult to shape the interaction to be mutually symmetric and empathetic. In this talk, I will present a semantic-equivalent melding of space and time to create a new form of empathetic family interaction. As initial attempts, I will introduce two prototype systems, HomeMeld and MomentMeld, which apply AI to meld space and time, respectively. HomeMeld provides a sense of living together to a family living apart, with AI-driven autonomous robotic avatars navigating to the semantic-equivalent location of the other person in one\u2019s house. 
MomentMeld utilizes an ensemble of visual AI models to expand interaction topics beyond the present by matching semantic-equivalent photos of each other taken at different times. In-the-wild experiments reveal that HomeMeld and MomentMeld open new forms of empathetic family interaction by computationally melding space and time.<\/p>\n<p><strong>Bio:<\/strong> Inseok Hwang is an Assistant Professor in the Department of Computer Science and Engineering at POSTECH. Before joining POSTECH in 2020, he spent six years (2014\u20132020) as a Research Staff Member at IBM Research in Austin, Texas. His main research theme is \u201cintelligent computing infusing real-life\u201d, with special focuses on mobile computing, human-centered systems, and applied AI. He is a recipient of the Best Paper Award from ACM CSCW and multiple Best Demo Awards from ACM MobiSys. He has been actively serving on technical program committees and editorial boards of premier venues in mobile and human-centered computing. He is a prolific inventor, with 89 U.S. patents issued to date, in recognition of which he was appointed an IBM Master Inventor. He obtained his Ph.D. in Computer Science from KAIST in 2013.<\/p>\n<p><span style=\"color: red\">09:10 \u2013 09:40<\/span> Break<\/p>\n<p><span style=\"color: red\">09:40 \u2013 10:40<\/span> Session 1<\/p>\n<p><strong>Towards Memory-Efficient Inference in Edge Video Analytics<\/strong><br \/>\nArthi Padmanabhan (Microsoft &amp; UCLA), Anand Padmanabha Iyer (Microsoft), Ganesh Ananthanarayanan (Microsoft), Yuanchao Shu (Microsoft), Nikolaos Karianakis (Microsoft), Guoqing Harry Xu (UCLA), Ravi Netravali (Princeton University)<\/p>\n<p><strong>Decentralized Modular Architecture for Live Video Analytics at the Edge<\/strong><br \/>\nSri Pramodh Rachuri (Stony Brook University), Francesco Bronzino (Universit\u00e9 Savoie Mont Blanc), Shubham Jain (Stony Brook University)<\/p>\n<p><strong>The Case for Admission Control of Mobile Cameras into the Live Video Analytics Pipeline<\/strong><br \/>\nFrancescomaria Faticanti (Fondazione Bruno Kessler &amp; University of Trento), Francesco Bronzino (Universit\u00e9 Savoie Mont Blanc), Francesco De Pellegrini (University of Avignon)<\/p>\n<p><span style=\"color: red\">10:40 \u2013 11:00<\/span> Break<\/p>\n<p><span style=\"color: red\">11:00 \u2013 11:40<\/span> Session 2<\/p>\n<p><strong>Enabling High Frame-rate UHD Real-time Communication with Frame-Skipping<\/strong><br \/>\nTingfeng Wang (Beijing University of Posts and Telecommunications), Zili Meng (Tsinghua University), Mingwei Xu (Tsinghua University), Rui Han (Tencent), Honghao Liu (Tencent)<\/p>\n<p><strong>Characterizing Real-Time Dense Point Cloud Capture and Streaming on Mobile Devices<\/strong><br \/>\nJinhan Hu (Arizona State University), Aashiq Shaikh (Arizona State University), Alireza Bahremand (Arizona State University), Robert LiKamWa (Arizona State University)<\/p>\n<p><span style=\"color: red\">11:40 \u2013 13:00<\/span> Lunch<\/p>\n<p><span style=\"color: red\">13:00 \u2013 14:00<\/span> Keynote II &#8211; <strong>TSM: Temporal Shift Module for Efficient and Scalable Video Understanding on Edge Devices<\/strong><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/songhan.mit.edu\/\">Song Han<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, MIT<\/p>\n<p>Today\u2019s AI is too big. 
Deep neural networks demand extraordinary levels of data and computation, and therefore power, for training and inference. This severely limits the practical deployment of AI on edge devices. The explosive growth of video requires video understanding at high accuracy and low computation cost. Conventional 2D CNNs are computationally cheap but cannot capture temporal relationships; 3D-CNN-based methods can achieve good performance but are computationally intensive. We propose a generic and effective Temporal Shift Module (TSM) that enjoys both high efficiency and high performance for video understanding. The key idea of TSM is to shift part of the channels along the temporal dimension, thus facilitating information exchange among neighboring frames. It can be inserted into 2D CNNs to achieve temporal modeling with zero additional computation and zero additional parameters. TSM achieves high frame rates of 74 fps and 29 fps for online video recognition on a Jetson Nano and a mobile phone, respectively. TSM also scales better than 3D networks, enabling large-scale Kinetics training in 15 minutes. We hope such TinyML techniques can make video understanding smaller, faster, and more efficient for both training and deployment.<\/p>\n<p><strong>Bio:<\/strong> Song Han is an assistant professor in MIT\u2019s EECS department. He received his PhD degree from Stanford University. His research focuses on efficient deep learning computing. He proposed the \u201cdeep compression\u201d technique, which can reduce neural network size by an order of magnitude without losing accuracy, and the hardware implementation \u201cefficient inference engine\u201d that first exploited pruning and weight sparsity in deep learning accelerators. His team\u2019s work on hardware-aware neural architecture search, which brings deep learning to IoT devices, was highlighted by MIT News, Wired, Qualcomm News, VentureBeat, and IEEE Spectrum, was integrated into PyTorch and AutoGluon, and received many low-power computer vision contest awards at flagship AI conferences (CVPR\u201919, ICCV\u201919, and NeurIPS\u201919). Song received Best Paper awards at ICLR\u201916 and FPGA\u201917, as well as the Amazon Machine Learning Research Award, the SONY Faculty Award, the Facebook Faculty Award, and the NVIDIA Academic Partnership Award. 
Song was named one of MIT Technology Review\u2019s \u201c35 Innovators Under 35\u201d for his contribution to the \u201cdeep compression\u201d technique, which \u201clets powerful artificial intelligence (AI) programs run more efficiently on low-power mobile devices.\u201d Song received the NSF CAREER Award for \u201cefficient algorithms and hardware for accelerated machine learning\u201d and the IEEE \u201cAI\u2019s 10 to Watch: The Future of AI\u201d award.<\/p>\n<p><span style=\"color: red\">14:00 \u2013 14:40<\/span> Session 3<\/p>\n<p><strong>Auto-SDA: Automated Video-based Social Distancing Analyzer<\/strong><br \/>\nMahshid Ghasemi (Columbia University), Zoran Kostic (Columbia University), Javad Ghaderi (Columbia University), Gil Zussman (Columbia University)<\/p>\n<p><strong>Demo: Cost Effective Processing of Detection-driven Video Analytics at the Edge<\/strong><br \/>\nMd Adnan Arefeen (University of Missouri-Kansas City), Md Yusuf Sarwar Uddin (University of Missouri-Kansas City)<\/p>\n<!-- \/wp:freeform --><!-- \/wp:msr\/content-tab --><!-- \/wp:msr\/content-tabs -->","tab-content":[{"id":0,"name":"About","content":"<span data-contrast=\"auto\">We are living in a golden era of AI, fueled by game-changing advances in systems infrastructure. Among its numerous applications, video analytics in particular has shown tremendous\u00a0potential to impact science and society, thanks to breakthroughs in machine learning, copious training data, and the pervasive deployment of video capture devices.<\/span>\r\n\r\n<span data-contrast=\"auto\">Analyzing live video streams is arguably the most challenging domain for \u201csystems-for-AI\u201d. Unlike text or numeric processing, video analytics requires higher bandwidth, consumes considerable compute cycles, necessitates richer query semantics, and demands tighter security &amp; privacy guarantees. Video analytics has a symbiotic relationship with edge compute infrastructure, which makes compute resources available closer to the data sources (e.g., cameras and smartphones). All aspects of video analytics call for \u201cgreen-field\u201d design, from vision algorithms to the systems processing stack, networking links, and hybrid edge-cloud infrastructure. Such a holistic design will enable the democratization of live video analytics, such that any organization with cameras can obtain value from its video.<\/span>"},{"id":1,"name":"Call for Papers","content":"This workshop calls for research on issues and solutions that enable live video analytics with a central role for edge computing. 
Topics of interest include (but are not limited to) the following:\r\n<ul>\r\n \t<li>Low-cost video analytics<\/li>\r\n \t<li>Deployment experience with large arrays of cameras<\/li>\r\n \t<li>Storage of video data and metadata<\/li>\r\n \t<li>Interactive querying of video streams<\/li>\r\n \t<li>Network design for video streams<\/li>\r\n \t<li>Hybrid cloud architectures for video processing<\/li>\r\n \t<li>Scheduling for multi-tenant video processing<\/li>\r\n \t<li>Training of vision neural networks<\/li>\r\n \t<li>Edge-based processor architectures for video processing<\/li>\r\n \t<li>Energy-efficient system design for video analytics<\/li>\r\n \t<li>Intelligent camera designs<\/li>\r\n \t<li>Vehicular and drone-based video analytics<\/li>\r\n \t<li>Tools and datasets for video analytics systems<\/li>\r\n \t<li>Novel vision applications<\/li>\r\n \t<li>Video analytics for social good<\/li>\r\n \t<li>Secure processing of video analytics<\/li>\r\n \t<li>Privacy-preserving techniques for video processing<\/li>\r\n \t<li>Emerging forms of immersive video streams, e.g., 360-degree or volumetric video<\/li>\r\n<\/ul>\r\n<strong>Submission Instructions:<\/strong>\r\nSubmissions must be original, unpublished work that is not under consideration at another conference or journal. Submitted papers must be no longer than five (5) pages, including all figures and tables, followed by as many pages as necessary for bibliographic references. Submissions should be in two-column 10pt ACM format with authors\u2019 names and affiliations for single-blind peer review. The workshop also solicits the submission of research, platform, and product demonstrations. A demo submission should be a summary or extended abstract describing the research to be presented, at most one (1) page in PDF format with fonts no smaller than 10 points. Demo submission titles should begin with \u201cDemo:\u201d.\r\n\r\nAuthors of accepted papers are expected to present their work at the workshop. Papers accepted for presentation will be published in the MobiCom Workshop Proceedings and made available in the ACM Digital Library. You may find these <a href=\"https:\/\/www.acm.org\/publications\/proceedings-template\">templates<\/a> useful in complying with the formatting requirements.\r\n\r\n<strong>Submission site:<\/strong> <a href=\"https:\/\/hotedgevideo21.hotcrp.com\/\">https:\/\/hotedgevideo21.hotcrp.com\/<\/a>.\r\n\r\n<strong>Important Dates:<\/strong>\r\nPaper Submissions Deadline: <strong style=\"color: red\"><del>May 21<\/del> June 4, 2021<\/strong>\r\nAcceptance Notification: <strong>June 30, 2021<\/strong>\r\nCamera-ready Papers Due:\u00a0<strong>July 31, 2021<\/strong>\r\nWorkshop Date: <strong>March 28, 2022<\/strong>"},{"id":2,"name":"Organizers","content":"<p class=\"\"><strong>Program Committee:<\/strong><\/p>\r\n\r\n<ul>\r\n \t<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/ga\">Ganesh Ananthanarayanan<\/a>, Microsoft<\/li>\r\n \t<li><a href=\"https:\/\/www.cc.gatech.edu\/~jarulraj\/\">Joy Arulraj<\/a>, Georgia Institute of Technology<\/li>\r\n \t<li><a href=\"https:\/\/engineering.purdue.edu\/~ychu\/\">Y. 
Charlie Hu<\/a>, Purdue University<\/li>\r\n \t<li><a href=\"https:\/\/people.cs.uchicago.edu\/~junchenj\">Junchen Jiang<\/a>, University of Chicago (co-chair)<\/li>\r\n \t<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/nikarian\/\">Nikolaos Karianakis<\/a>, Microsoft<\/li>\r\n \t<li><a href=\"https:\/\/cse.snu.ac.kr\/en\/professor\/youngki-lee\">Youngki Lee<\/a>, Seoul National University<\/li>\r\n \t<li><a href=\"https:\/\/meteor.ame.asu.edu\/\">Robert LiKamWa<\/a>, Arizona State University (co-chair)<\/li>\r\n \t<li><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yushu\">Yuanchao Shu<\/a>, Microsoft (co-chair)<\/li>\r\n<\/ul>"},{"id":3,"name":"Program","content":"<strong>The workshop will be held in a hybrid mode on March 28th. All times below are in Central Standard Time.<\/strong>\r\n\r\n<span style=\"color: red\">08:00 \u2013 08:10<\/span> Opening remarks\r\n\r\n<span style=\"color: red\">08:10 \u2013 09:10<\/span> Keynote I - <strong>Life-immersive AI for Family Interaction across Space and Time<\/strong>\r\n<a href=\"https:\/\/www.inseokhwang.com\/\">Inseok Hwang<\/a>, POSTECH\r\n\r\nComputer-mediated interaction services connect people over a distance, and often enrich the interaction further with various forms of media sharing. However, people in such an interaction are often \u2018locked in a frame\u2019\u2014an interaction mode, a point in time, or a context of either person. We observe that such lock-ins make it difficult to shape the interaction to be mutually symmetric and empathetic. In this talk, I will present a semantic-equivalent melding of space and time to create a new form of empathetic family interaction. As initial attempts, I will introduce two prototype systems, HomeMeld and MomentMeld, which apply AI to meld space and time, respectively. HomeMeld provides a sense of living together to a family living apart, with AI-driven autonomous robotic avatars navigating to the semantic-equivalent location of the other person in one\u2019s house. MomentMeld utilizes an ensemble of visual AI models to expand interaction topics beyond the present by matching semantic-equivalent photos of each other taken at different times. In-the-wild experiments reveal that HomeMeld and MomentMeld open new forms of empathetic family interaction by computationally melding space and time.\r\n\r\n<strong>Bio:<\/strong> Inseok Hwang is an Assistant Professor in the Department of Computer Science and Engineering at POSTECH. Before joining POSTECH in 2020, he spent six years (2014\u20132020) as a Research Staff Member at IBM Research in Austin, Texas. His main research theme is \u201cintelligent computing infusing real-life\u201d, with special focuses on mobile computing, human-centered systems, and applied AI. He is a recipient of the Best Paper Award from ACM CSCW and multiple Best Demo Awards from ACM MobiSys. He has been actively serving on technical program committees and editorial boards of premier venues in mobile and human-centered computing. He is a prolific inventor, with 89 U.S. patents issued to date, in recognition of which he was appointed an IBM Master Inventor. He obtained his Ph.D. 
in Computer Science from KAIST in 2013.\r\n\r\n<span style=\"color: red\">09:10 \u2013 09:40<\/span> Break\r\n\r\n<span style=\"color: red\">09:40 \u2013 10:40<\/span> Session 1\r\n\r\n<strong>Towards Memory-Efficient Inference in Edge Video Analytics<\/strong>\r\nArthi Padmanabhan (Microsoft &amp; UCLA), Anand Padmanabha Iyer (Microsoft), Ganesh Ananthanarayanan (Microsoft), Yuanchao Shu (Microsoft), Nikolaos Karianakis (Microsoft), Guoqing Harry Xu (UCLA), Ravi Netravali (Princeton University)\r\n\r\n<strong>Decentralized Modular Architecture for Live Video Analytics at the Edge<\/strong>\r\nSri Pramodh Rachuri (Stony Brook University), Francesco Bronzino (Universit\u00e9 Savoie Mont Blanc), Shubham Jain (Stony Brook University)\r\n\r\n<strong>The Case for Admission Control of Mobile Cameras into the Live Video Analytics Pipeline<\/strong>\r\nFrancescomaria Faticanti (Fondazione Bruno Kessler &amp; University of Trento), Francesco Bronzino (Universit\u00e9 Savoie Mont Blanc), Francesco De Pellegrini (University of Avignon)\r\n\r\n<span style=\"color: red\">10:40 \u2013 11:00<\/span> Break\r\n\r\n<span style=\"color: red\">11:00 \u2013 11:40<\/span> Session 2\r\n\r\n<strong>Enabling High Frame-rate UHD Real-time Communication with Frame-Skipping<\/strong>\r\nTingfeng Wang (Beijing University of Posts and Telecommunications), Zili Meng (Tsinghua University), Mingwei Xu (Tsinghua University), Rui Han (Tencent), Honghao Liu (Tencent)\r\n\r\n<strong>Characterizing Real-Time Dense Point Cloud Capture and Streaming on Mobile Devices<\/strong>\r\nJinhan Hu (Arizona State University), Aashiq Shaikh (Arizona State University), Alireza Bahremand (Arizona State University), Robert LiKamWa (Arizona State University)\r\n\r\n<span style=\"color: red\">11:40 \u2013 13:00<\/span> Lunch\r\n\r\n<span style=\"color: red\">13:00 \u2013 14:00<\/span> Keynote II - <strong>TSM: Temporal Shift Module for Efficient and Scalable Video Understanding on Edge Devices<\/strong>\r\n<a href=\"https:\/\/songhan.mit.edu\/\">Song Han<\/a>, MIT\r\n\r\nToday\u2019s AI is too big. Deep neural networks demand extraordinary levels of data and computation, and therefore power, for training and inference. This severely limits the practical deployment of AI on edge devices. The explosive growth of video requires video understanding at high accuracy and low computation cost. Conventional 2D CNNs are computationally cheap but cannot capture temporal relationships; 3D-CNN-based methods can achieve good performance but are computationally intensive. We propose a generic and effective Temporal Shift Module (TSM) that enjoys both high efficiency and high performance for video understanding. The key idea of TSM is to shift part of the channels along the temporal dimension, thus facilitating information exchange among neighboring frames. It can be inserted into 2D CNNs to achieve temporal modeling with zero additional computation and zero additional parameters. TSM achieves high frame rates of 74 fps and 29 fps for online video recognition on a Jetson Nano and a mobile phone, respectively. TSM also scales better than 3D networks, enabling large-scale Kinetics training in 15 minutes. We hope such TinyML techniques can make video understanding smaller, faster, and more efficient for both training and deployment.\r\n\r\n<strong>Bio:<\/strong> Song Han is an assistant professor in MIT\u2019s EECS department. He received his PhD degree from Stanford University. His research focuses on efficient deep learning computing. 
He proposed the \u201cdeep compression\u201d technique, which can reduce neural network size by an order of magnitude without losing accuracy, and the hardware implementation \u201cefficient inference engine\u201d that first exploited pruning and weight sparsity in deep learning accelerators. His team\u2019s work on hardware-aware neural architecture search, which brings deep learning to IoT devices, was highlighted by MIT News, Wired, Qualcomm News, VentureBeat, and IEEE Spectrum, was integrated into PyTorch and AutoGluon, and received many low-power computer vision contest awards at flagship AI conferences (CVPR\u201919, ICCV\u201919, and NeurIPS\u201919). Song received Best Paper awards at ICLR\u201916 and FPGA\u201917, as well as the Amazon Machine Learning Research Award, the SONY Faculty Award, the Facebook Faculty Award, and the NVIDIA Academic Partnership Award. Song was named one of MIT Technology Review\u2019s \u201c35 Innovators Under 35\u201d for his contribution to the \u201cdeep compression\u201d technique, which \u201clets powerful artificial intelligence (AI) programs run more efficiently on low-power mobile devices.\u201d Song received the NSF CAREER Award for \u201cefficient algorithms and hardware for accelerated machine learning\u201d and the IEEE \u201cAI\u2019s 10 to Watch: The Future of AI\u201d award.\r\n\r\n<span style=\"color: red\">14:00 \u2013 14:40<\/span> Session 3\r\n\r\n<strong>Auto-SDA: Automated Video-based Social Distancing Analyzer<\/strong>\r\nMahshid Ghasemi (Columbia University), Zoran Kostic (Columbia University), Javad Ghaderi (Columbia University), Gil Zussman (Columbia University)\r\n\r\n<strong>Demo: Cost Effective Processing of Detection-driven Video Analytics at the Edge<\/strong>\r\nMd Adnan Arefeen (University of Missouri-Kansas City), Md Yusuf Sarwar Uddin (University of Missouri-Kansas City)"}],"msr_startdate":"2022-03-28","msr_enddate":"2022-03-28","msr_event_time":"","msr_location":"Hybrid","msr_event_link":"","msr_event_recording_link":"","msr_startdate_formatted":"March 28, 2022","msr_register_text":"Watch now","msr_cta_link":"","msr_cta_text":"","msr_cta_bi_name":"","featured_image_thumbnail":"<img width=\"960\" height=\"430\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2021\/03\/banner-960x430.png\" class=\"img-object-cover\" alt=\"\" decoding=\"async\" loading=\"lazy\" \/>","event_excerpt":"The 3rd Workshop on Hot Topics in Video Analytics and Intelligent Edges will be hosted in conjunction with\u00a0ACM MobiCom 2021 on March 28, 
2021.","msr_research_lab":[],"related-researchers":[],"msr_impact_theme":[],"related-academic-programs":[],"related-groups":[],"related-projects":[382664,212082],"related-opportunities":[],"related-publications":[],"related-videos":[],"related-posts":[],"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/731629","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-event"}],"version-history":[{"count":13,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/731629\/revisions"}],"predecessor-version":[{"id":1146886,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/731629\/revisions\/1146886"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media\/731647"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=731629"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=731629"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=731629"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=731629"},{"taxonomy":"msr-video-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-video-type?post=731629"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=731629"},{"taxonomy":"msr-program-audience","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-program-audience?post=731629"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=731629"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=731629"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}