Workshop on Hot Topics in Video Analytics and Intelligent Edges

About

Cameras are everywhere! Analyzing live videos from these cameras has great potential to impact science and society. Enterprises deploy cameras for a wide variety of commercial and security reasons, and consumer devices have cameras whose users are interested in analyzing the live video they capture. We are living in a golden era for computer vision and AI, fueled by game-changing advances in systems infrastructure, breakthroughs in machine learning, and copious training data, all of which greatly expand the range of what these technologies can do. Live video analytics has the potential to impact a wide range of verticals, including public safety, traffic efficiency, infrastructure planning, entertainment, and home safety.

Analyzing live video streams is arguably the most challenging domain for “systems-for-AI”. Unlike text or numeric processing, video analytics requires higher bandwidth, consumes considerable compute cycles, necessitates richer query semantics, and demands tighter security and privacy guarantees. Video analytics also has a symbiotic relationship with edge compute infrastructure, which places compute resources closer to the data sources (i.e., cameras). All aspects of video analytics call for a “green-field” design, from the vision algorithms to the systems processing stack, the networking links, and the hybrid edge-cloud infrastructure. Such a holistic design will democratize live video analytics, so that any organization with cameras can obtain value from its video.

Call for Papers

Topics of Interest:
This workshop calls for research on issues and solutions that enable live video analytics, with a particular focus on the role of edge computing. Topics of interest include (but are not limited to) the following:

  • Low-cost video analytics
  • Deployment experience with large arrays of cameras
  • Storage of video data and metadata
  • Interactive querying of video streams
  • Network design for video streams
  • Hybrid cloud architectures for video processing
  • Scheduling for multi-tenant video processing
  • Training of vision neural networks
  • Edge-based processor architectures for video processing
  • Energy-efficient system design for video analytics
  • Intelligent camera designs
  • Vehicular and drone-based video analytics
  • Tools and datasets for video analytics systems
  • Novel vision applications
  • Video analytics for social good
  • Secure processing of video analytics
  • Privacy-preserving techniques for video processing

Submission Instructions:
Submissions must be original, unpublished work that is not under consideration at another conference or journal. Submitted papers must be no longer than five (5) pages, including all figures and tables, followed by as many pages as necessary for bibliographic references. Submissions should be in two-column, 10pt ACM format with authors’ names and affiliations for single-blind peer review. The workshop also solicits the submission of research, platform, and product demonstrations. A demo submission should be a summary or extended abstract describing the research to be presented, at most one (1) page in PDF format with font no smaller than 10 points. Demo submission titles should begin with “Demo:”.

Authors of accepted papers are expected to present their work at the workshop. Papers accepted for presentation will be published in the MobiCom Workshop Proceedings and made available in the ACM Digital Library. The ACM LaTeX and MS Word templates may be useful in complying with the above requirements.

Submit your paper at https://hotedgevideo19.hotcrp.com/.

Important Dates:
Paper Submissions Deadline: July 9, 2019
Acceptance Notification: July 31, 2019
Camera-ready Papers Due: August 15, 2019
Workshop Date: October 21, 2019


Organizers

Program Committee:

Program

09:00 – 09:10 Opening remarks

09:10 – 10:10 Keynote I – Deep Learning in Mobile Systems, Experiences and Pitfalls
Prof. Heather Zheng, University of Chicago

Deep learning (neural networks) is being rapidly adopted by (mobile) researchers and companies to solve a wide range of computational problems. But is it a panacea for all the (traditionally hard) problems? In this talk, I will share experiences from my lab on applying today’s deep learning models to mobile systems design, and discuss vulnerabilities inherent in many existing deep learning models that make them easy to compromise, as well as potential defenses.

Bio: Dr. Heather Zheng is the Neubauer Professor of Computer Science at the University of Chicago. She received her PhD in Electrical and Computer Engineering from the University of Maryland, College Park in 1999. She joined the University of Chicago after spending 6 years in industry labs (Bell Labs, NJ and Microsoft Research Asia) and 12 years at the University of California, Santa Barbara. At UChicago, she co-directs the SAND Lab (Systems, Algorithms, Networking and Data). She is an IEEE Fellow, a World Technology Network Fellow, and a recipient of the MIT Technology Review TR-35 Award (Young Innovators under 35), the Bell Labs President’s Gold Award, and a Google Faculty Award. Her work has been covered by media outlets such as Scientific American, the New York Times, the Boston Globe, the LA Times, and MIT Technology Review. She served as PC co-chair for MobiCom and DySPAN, and is the general co-chair for HotNets 2020. She is on the steering committee of MobiCom and chairs the SIGMOBILE Highlights committee.

10:10 – 10:40 Break


10:40 – 11:40 Session 1 – Cameras are becoming smarter

Networked Cameras Are the New Big Data Clusters
Junchen Jiang, Yuhao Zhou (University of Chicago), Ganesh Ananthanarayanan, Yuanchao Shu (Microsoft Research), Andrew A. Chien (University of Chicago)

Live Video Analytics with FPGA-based Smart Cameras
Shang Wang (University of Electronic Science and Technology of China, Microsoft Research), Chen Zhang, Yuanchao Shu, Yunxin Liu (Microsoft Research)

Space-Time Vehicle Tracking at the Edge of the Network
Zhuangdi Xu, Kishore Ramachandran (Georgia Tech), Sayan Sinha (Indian Institute of Technology Kharagpur)

11:40 – 13:00 Lunch


13:00 – 14:00 Keynote II – 360° and 4K Video Streaming for Mobile Devices
Prof. Lili Qiu, University of Texas at Austin

The popularity of 360° and 4K videos has grown rapidly due to the immersive user experience they provide. 360° videos are displayed as a panorama, and the view automatically adapts to the user’s head movement. Existing systems stream 360° videos in the same way as regular videos, transmitting all data of the panoramic view. This is wasteful, since a user only views a small portion of the 360° view. To save bandwidth, recent works propose tile-based streaming, which divides the panoramic view into multiple smaller tiles and streams only the tiles within a user’s field of view (FoV), predicted based on the recent head position. Interestingly, tile-based streaming has only been simulated or implemented on desktops. We find that it cannot run in real time even on the latest smartphones (e.g., the Samsung S7, Samsung S8, and Huawei Mate 9) due to hardware and software limitations. Moreover, it results in significant video quality degradation due to head-movement prediction error, which is hard to avoid. Motivated by these observations, we develop a novel tile-based layered approach to stream 360° content on smartphones that avoids bandwidth wastage while maintaining high video quality.

Next, we explore the feasibility of supporting live 4K video streaming over wireless networks using commodity devices. Coding and streaming live 4K video incurs prohibitive costs on both the network and the end system. We propose a novel system that consists of (i) easy-to-compute layered video coding to seamlessly adapt to unpredictable wireless link fluctuations, (ii) an efficient GPU implementation of video coding on commodity devices, and (iii) effective use of both WiFi and WiGig through delayed video adaptation and smart scheduling. Using real experiments and emulation, we demonstrate the feasibility and effectiveness of our system.
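As a rough, hypothetical illustration of the tile-based idea described in this abstract (not the speakers’ system), the Python sketch below selects the tiles of an equirectangular panorama that overlap a predicted field of view; the 8x4 tile grid and the FoV angles are arbitrary assumptions.

# Minimal sketch (assumed parameters, not the keynote's implementation):
# choose which equirectangular tiles to stream for a predicted viewport.
def tiles_in_fov(yaw_deg, pitch_deg, fov_h=100.0, fov_v=90.0, cols=8, rows=4):
    """Return (row, col) indices of tiles overlapping the predicted FoV."""
    tile_w, tile_h = 360.0 / cols, 180.0 / rows
    # Viewport bounds: yaw wraps around at +/-180 degrees, pitch is clamped.
    lo_yaw, hi_yaw = yaw_deg - fov_h / 2, yaw_deg + fov_h / 2
    lo_pitch = max(-90.0, pitch_deg - fov_v / 2)
    hi_pitch = min(90.0, pitch_deg + fov_v / 2)
    selected = set()
    for r in range(rows):
        t_lo, t_hi = -90.0 + r * tile_h, -90.0 + (r + 1) * tile_h
        if t_hi < lo_pitch or t_lo > hi_pitch:
            continue  # tile row entirely outside the vertical FoV
        for c in range(cols):
            c_lo = -180.0 + c * tile_w
            c_hi = c_lo + tile_w
            # Check horizontal overlap, accounting for yaw wrap-around.
            for shift in (-360.0, 0.0, 360.0):
                if c_lo + shift <= hi_yaw and c_hi + shift >= lo_yaw:
                    selected.add((r, c))
                    break
    return selected

# Example: stream only the tiles around the predicted head position.
print(sorted(tiles_in_fov(yaw_deg=30.0, pitch_deg=0.0)))

A real player would request these tiles at high quality and (in a layered scheme like the one described above) fall back to a base layer for the rest of the panorama, so that prediction errors degrade quality rather than cause stalls.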

Bio: Lili Qiu is a Professor in the Computer Science Department at UT Austin. She received her Ph.D. in Computer Science from Cornell University in 2001, was a researcher at Microsoft Research (Redmond, WA) from 2001 to 2004, and joined UT Austin in 2005. She is an IEEE Fellow, an ACM Fellow, and an ACM Distinguished Scientist. She has also received an NSF CAREER Award, a Google Faculty Research Award, and best paper awards at MobiSys’18 and ICNP’17.

14:00 – 14:30 Break


14:30 – 15:30 Session 2 – ML for videos

Distilled Split Deep Neural Networks for Edge-Assisted Real-Time Systems
Yoshitomo Matsubara, Sabur Hassan Baidya, Davide Callegaro, Marco Levorato, Sameer Singh (University of California, Irvine)

Cracking open the DNN black-box: Video Analytics with DNNs across the Camera-Cloud Boundary
John Emmons, Sadjad Fouladi (Stanford University), Ganesh Ananthanarayanan (Microsoft Research), Shivaram Venkataraman (University of Wisconsin-Madison), Silvio Savarese, Keith Winstein (Stanford University)

secGAN: A Cycle-Consistent GAN for Securely-Recoverable Video Transformation
Hao Wu, Jinghao Feng, Xuejin Tian, Fengyuan Xu, Sheng Zhong (Nanjing University), Yunxin Liu (Microsoft Research), XiaoFeng Wang (Indiana University Bloomington)

15:00 – 15:30 Break


15:30 – 16:10 Session 3 – Playing nice with the network

Client-side Bandwidth Estimation Technique for Adaptive Streaming of a Browser Based Free-Viewpoint Application
Tilak Varisetty, David Dietrich (Leibniz Universität Hannover)

Sensor Training Data Reduction for Autonomous Vehicles
Matthew Tomei, Alex Schwing (University of Illinois at Urbana-Champaign), Satish Narayanasamy (University of Michigan), Rakesh Kumar (University of Illinois at Urbana-Champaign)

16:10 – 17:00 Poster and demo session