SurroundWeb: Least Privilege for Immersive “Web Rooms”
We introduce SurroundWeb, the first least-privilege platform for immersive room experiences. SurroundWeb is a "3D browser" that gives web pages the ability to display across multiple surfaces in a room, adapt their appearance to objects present in that room, and interact through natural user input. SurroundWeb enables least privilege for these immersive web pages by introducing two new abstractions. First, a Room Skeleton enables least privilege for room rendering, unlike previous approaches that focus on inputs alone. Second, a Detection Sandbox lets web pages register content to show if an object is detected, while preventing the web server from learning whether the object is present. SurroundWeb provides three privacy properties: detection privacy, rendering privacy, and interaction privacy, while simultaneously enabling web pages to use object recognition and room display capabilities. Surveys show the information revealed by SurroundWeb is acceptable. SurroundWeb is practical: after a one-time setup procedure that scans a room for projectable surfaces in about a minute, our prototype can render immersive multi-display web rooms at greater than 30 frames per second with up to 25 screens and up to a 1440×720 display. We demonstrate that a range of previously proposed and novel experiences can be implemented in a least-privilege way using SurroundWeb.
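The Detection Sandbox idea can be illustrated with a small sketch. This is a hypothetical model, not the actual SurroundWeb API: the page registers content keyed by object tags up front, and a client-side sandbox matches those registrations against locally detected objects. Because matching happens entirely on the client, no detection events ever flow back to the web server, which is the privacy property the abstract describes.

```typescript
// Hypothetical sketch of the Detection Sandbox concept (names are
// illustrative, not the real SurroundWeb API).
type Registration = { objectTag: string; html: string };

class DetectionSandbox {
  private registrations: Registration[] = [];

  // The web page registers content to show if a given object is detected.
  // This call carries no information about the room back to the server.
  register(objectTag: string, html: string): void {
    this.registrations.push({ objectTag, html });
  }

  // Runs entirely client-side: matches registrations against locally
  // detected objects. Only the rendered result exists on the client;
  // the server never learns which objects were present.
  render(detectedObjects: Set<string>): string[] {
    return this.registrations
      .filter(r => detectedObjects.has(r.objectTag))
      .map(r => r.html);
  }
}

// Example: the page registers content for a "mug" and a "piano"; the
// sandbox shows only the content whose object the room scan detected.
const sandbox = new DetectionSandbox();
sandbox.register("mug", "<div>Coffee refill offer</div>");
sandbox.register("piano", "<div>Sheet music overlay</div>");
const shown = sandbox.render(new Set(["mug"]));
```

The key design point is the direction of information flow: the server pushes conditional content in, but object-presence information never flows out.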
Behind the Scenes: Microsoft Principal Researcher Eyal Ofek on technical and social perspectives of XR. The XR Bootcamp Open Lecture Series continues with Microsoft Principal Researcher Eyal Ofek! Virtual Reality (VR) and Augmented Reality (AR) pose challenges and opportunities from both a technical and a social perspective. Digital rather than physical objects can now change our understanding of the world around us, a unique opportunity to change reality as we sense it. Microsoft researchers are exploring new possibilities to extend our abilities when we are not bound by physical limitations: enabling superhuman abilities on one hand, and leveling the playing field for people with physical limitations on the other. Dr. Ofek will describe efforts to design VR and AR applications that adjust to the user's uncontrolled environment, enabling continuous use during work and leisure across a wide variety of environments. He will also review efforts to extend rendering to new capabilities such as haptic rendering. His lecture will be followed by a Q&A session where you can ask all your questions about the topic.

Lead Instructor: Eyal Ofek is a principal researcher at the Microsoft Research lab in Redmond, WA. His research interests include Augmented Reality (AR)/Virtual Reality (VR), haptics, interactive projection mapping, and computer vision for human-computer interaction. He is also the Specialty Chief Editor of Frontiers in Virtual Reality for the area of haptics and an Associate Editor of IEEE Computer Graphics and Applications (CG&A). Prior to joining Microsoft Research, he obtained his Ph.D. at the Hebrew University of Jerusalem and founded several computer graphics companies, including one that built a successful drawing and photo-editing application and one that developed the world's first time-of-flight video cameras, a basis for the HoloLens depth camera.
This event is part of the Global XR Bootcamp event: the Global XR Bootcamp 2020 will be the biggest community-driven, FREE, online Virtual, Augmented, and Mixed Reality event in the world! Join us on YouTube or AltspaceVR for a 24-hour live stream with over 50 high-quality talks, panels, and sessions. Meet your fellow XR enthusiasts in our Community Zone and win amazing prizes, from vouchers to XR hardware. YouTube: https://www.youtube.com/watch?v=_X7sFAfU-20&feature=emb_logo