{"id":611553,"date":"2020-06-12T10:55:43","date_gmt":"2020-06-12T17:55:43","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-group&#038;p=611553"},"modified":"2025-01-21T06:43:45","modified_gmt":"2025-01-21T14:43:45","slug":"swiss-joint-research-center","status":"publish","type":"msr-group","link":"https:\/\/www.microsoft.com\/en-us\/research\/collaboration\/swiss-joint-research-center\/","title":{"rendered":"Swiss Joint Research Center"},"content":{"rendered":"<section class=\"mb-3 moray-highlight\">\n\t<div class=\"card-img-overlay mx-lg-0\">\n\t\t<div class=\"card-background  has-background- card-background--full-bleed\">\n\t\t\t<img loading=\"lazy\" decoding=\"async\" width=\"1920\" height=\"720\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/SwissJRC_2020_v4.jpg\" class=\"attachment-full size-full\" alt=\"members of the Swiss JRC between Microsoft, ETH Zurich, and EPFL\" style=\"\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/SwissJRC_2020_v4.jpg 1920w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/SwissJRC_2020_v4-300x113.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/SwissJRC_2020_v4-1024x384.jpg 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/SwissJRC_2020_v4-768x288.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/SwissJRC_2020_v4-1536x576.jpg 1536w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/SwissJRC_2020_v4-1600x600.jpg 1600w\" sizes=\"auto, (max-width: 1920px) 100vw, 1920px\" \/>\t\t<\/div>\n\t\t<!-- Foreground -->\n\t\t<div class=\"card-foreground d-flex mt-md-n5 my-lg-5 px-g px-lg-0\">\n\t\t\t<!-- Container -->\n\t\t\t<div class=\"container d-flex mt-md-n5 my-lg-5 align-self-center\">\n\t\t\t\t<!-- Card wrapper -->\n\t\t\t\t<div class=\"w-100 
w-lg-col-5\">\n\t\t\t\t\t<!-- Card -->\n\t\t\t\t\t<div class=\"card material-md-card py-5 px-md-5\">\n\t\t\t\t\t\t<div class=\"card-body \">\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\n<h1 class=\"wp-block-heading h2\" id=\"swiss-joint-research-center\">Swiss Joint Research Center<\/h1>\n\n\t\t\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/div>\n<\/section>\n\n\n\n\n\n<p>The Swiss Joint Research Center (Swiss JRC) is a collaborative research engagement between Microsoft Research and the two universities that make up the Swiss Federal Institutes of Technology:\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/ethz.ch\/en.html\" target=\"_blank\" rel=\"noopener noreferrer\">ETH Zurich<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>\u00a0and <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.epfl.ch\/en\/\" target=\"_blank\" rel=\"noopener noreferrer\">EPFL<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> in Lausanne. Since its inception in 2008, the Swiss JRC has supported dozens of collaborative research projects between Microsoft and the two academic partners. 
<\/p>\n\n\n\n<p>The Swiss JRC has close links with the <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/lab\/spatial-ai-zurich\/\">Spatial AI Lab &#8211; Zurich<\/a>.<\/p>\n\n\n\n<p><\/p>\n\n\n\n<div style=\"height: 20px\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"collaborators\">Collaborators<\/h2>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<figure class=\"wp-block-image size-full\"><a href=\"https:\/\/www.epfl.ch\/en\/\" target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" width=\"300\" height=\"200\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/09\/EPFL_300x200.jpg\" alt=\"EPFL logo\" class=\"wp-image-691188\" \/><\/a><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<figure class=\"wp-block-image size-full\"><a href=\"https:\/\/www.c4dt.org\/\" target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" width=\"300\" height=\"200\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/09\/C4DT_300x200.jpg\" alt=\"Center for Digital Trust logo\" class=\"wp-image-691194\" \/><\/a><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<figure class=\"wp-block-image size-full\"><a href=\"https:\/\/ethz.ch\/en.html\" target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" width=\"300\" height=\"200\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/09\/eth_300x200.jpg\" alt=\"ETH Zurich logo\" class=\"wp-image-691200\" \/><\/a><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<figure class=\"wp-block-image size-full\"><a 
href=\"https:\/\/ecocloud.ch\/\" target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" width=\"300\" height=\"200\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/10\/ecocloud-logo_300x200.jpg\" alt=\"ecocloud logo\" class=\"wp-image-695781\" \/><\/a><\/figure>\n<\/div>\n<\/div>\n\n\n\n\n\n<h3 class=\"wp-block-heading\" id=\"epfl-projects-2022-2023\">EPFL projects (2022-2023)<\/h3>\n\n\n\n\n\n<p><strong class=\"\">EPFL PIs:<\/strong>&nbsp;Bruno Correia (and Michael Bronstein, Imperial)<br><strong>Microsoft PIs:<\/strong>&nbsp;Max Welling,&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/cmbishop\/\">Chris Bishop<\/a><br><strong class=\"\">PhD Students:<\/strong>&nbsp;Freyr Sverisson, Arne Scheuing<\/p>\n\n\n\n<p>Proteins play a crucial role in every form of life. The function of proteins is largely determined by their 3D structure and the way they interact with other molecules. Understanding the mechanisms that govern protein structure and their interactions with other molecules is a holy grail of biology that also paves the path to ground-breaking new applications in biotechnology and medicine. Over the past three decades, large amounts of structural data on proteins have been made available to the wider scientific community. This has created opportunities for machine learning (ML) approaches to improve our ability to better understand the governing principles of these molecules, as well as to develop computational approaches for the design of novel proteins and small-molecule drugs. The three-dimensional structures of proteins and molecular objects are a natural fit for&nbsp;<em>Geometric Deep Learning<\/em>&nbsp;(GDL). In this proposal, we will develop GDL-based approaches that describe molecular entities using point clouds engraved with descriptors capturing physical features (geometry and chemistry) that will be optimized to describe different aspects of proteins. 
Specifically, through the aims of this grant we will attempt to: capture dynamic features of protein surfaces (Aim 1); leverage the surface descriptors to condition the generation of small-molecules to engage specific pockets (Aim 2); couple new structure prediction algorithms with surface descriptor optimization for the design of new functions in proteins (Aim 3). Towards the generative aspects of our application (designing new surfaces, small-molecules, proteins), a common problem is that the spaces to be sampled are extremely large and thus the expertise within the Microsoft Research team could be critical to reach a functional solution. Specifically, the expertise in variational autoencoders, equivariant architectures and Bayesian optimization will be of major importance. In summary, we propose a novel approach powered by cutting-edge computational methods to model and design&nbsp;<em>de novo<\/em>&nbsp;proteins that has enormous potential to help address problems in medicine and biotechnology.<\/p>\n\n\n\n\n\n<p><strong class=\"\">EPFL PIs:<\/strong>&nbsp;Alexander Mathis, Friedhelm Hummel, Silvestro Micera<br><strong>Microsoft PI:<\/strong>&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\">Marc Pollefeys<\/a><br><strong class=\"\">PhD Student:<\/strong>&nbsp;Haozhe Qi<\/p>\n\n\n\n<p>Despite many advances in neuroprosthetics and neurorehabilitation, the techniques to measure, to personalize and thus to optimize the functional improvements that patients gain with therapy are limited. Impairments are still assessed by standardized functional tests, which fail to capture everyday behaviour and quality of life, are poorly suited to personalization, and have to be performed by trained health care professionals in the clinical environment. 
By leveraging recent advances in motion capture and hardware, we will create novel metrics to evaluate, personalize and improve the dexterity of patients in their everyday life. We will utilize the EPFL Smart Kitchen platform to assess naturalistic behaviour in the kitchens of healthy subjects, upper-limb amputees, and stroke patients filmed from a head-mounted camera (Microsoft HoloLens). We will develop a computer vision pipeline that is capable of measuring hand-object interactions in patients\u2019 kitchens. Based on this novel, large-scale dataset collected in patients\u2019 kitchens, we will derive metrics that measure dexterity in the \u201cnatural world,\u201d as well as recovered and compensatory movements due to the pathology\/assistive device. We will also use those data to assess novel control strategies for neuroprosthetics and design optimal, personalized rehabilitation treatment by leveraging virtual reality.<\/p>\n\n\n\n\n\n\n\n<p><strong>EPFL PIs:<\/strong> Robert West, Valentin Hartmann, Maxime Peyrard<br><strong>Microsoft PIs:<\/strong>&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/emrek\/\">Emre K\u0131c\u0131man<\/a>,&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/rsim\/\">Robert Sim<\/a>,&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/shtople\/\">Shruti Tople<\/a><br><strong>PhD Student:<\/strong> Valentin Hartmann<\/p>\n\n\n\n<p>As machine learning (ML) models are becoming more complex, there has been a growing interest in making use of decentrally generated data (e.g., from smartphones) and in pooling data from many actors. At the same time, however, privacy concerns about organizations collecting data have risen. As an additional challenge, decentrally generated data is often highly heterogeneous, thus breaking assumptions needed by standard ML models. 
Here, we propose to \u201ckill two birds with one stone\u201d by developing Invariant Federated Learning, a framework for training ML models without directly collecting data, while not only being robust to, but even benefiting from, heterogeneous data. For the problem of learning from distributed data, the Federated Learning (FL) framework has been proposed. Instead of sharing raw data, clients share model updates to help train an ML model on a central server. We combine this idea with the recently proposed Invariant Risk Minimization (IRM) approach, a solution for causal learning. IRM aims to build models that are robust to changes in the data distribution and provide better out-of-distribution (OOD) generalization by using data from different environments during training. This integrates naturally with FL, where each client may be seen as constituting its own environment. We seek to gain robustness to distributional changes and better OOD generalization, as compared to FL methods based on the standard empirical risk minimization. Previous work has further shown that causal models possess better privacy properties than associational models [26]. We will turn these theoretical insights into practical algorithms to, e.g., provide Differential Privacy guarantees for FL. 
The project proposed here integrates naturally with ideas pursued in the context of the Microsoft Turing Academic Program (MS-TAP), where the PI\u2019s lab is already collaborating with Microsoft (including Emre K\u0131c\u0131man, a co-author of this proposal) in order to make language models more robust via IRM.<\/p>\n\n\n\n\n\n\n\n<p><strong class=\"\">EPFL PI:<\/strong>&nbsp;Giuseppe Carleo<br><strong>Microsoft PIs:<\/strong>&nbsp;Max Welling,&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/cmbishop\/\">Chris Bishop<\/a>,&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mtroyer\/\">Matthias Troyer<\/a><br><strong class=\"\">PhD Student:<\/strong>&nbsp;Jannes Nys<\/p>\n\n\n\n<p>The fundamental equations governing interacting quantum-mechanical matter in solids have been known for over 90 years. However, these equations are simply \u201cmuch too complicated to be soluble\u201d (Paul A. M. Dirac, 1929). Besides experiments, the main source of information that we have available originates from computational methods to simulate these systems. Machine learning approaches based on artificial neural networks (NN) have recently been shown to be a powerful new tool in simulating systems governed by the laws of quantum mechanics. The leading approach in the field, pioneered by Carleo and Troyer, is known as neural quantum states, and has been successfully applied to several model quantum systems. For these prototypical, simplified \u2013 yet hard to solve \u2013 models of interacting quantum matter, neural quantum states have shown state-of-the-art \u2013 or better \u2013 performance. Despite this success, however, the application of neural quantum states to the ab-initio simulation of solids and materials is largely unexplored, both theoretically and computationally. 
Compared to methods for quantum spin systems, this requires methods that intrinsically work on continuous degrees of freedom, rather than discrete ones. Examples of important systems that can be studied with continuous space methods are crystals and several phases of matter that show a periodic lattice structure. In this project, we will introduce deep-learning-based approaches for the ab-initio simulation of solids, with a focus on imposing physical symmetries and scalability. With a powerful and efficient computational method to simulate continuous-space atomic quantum systems, we will be able to access unprecedented regimes of accuracy for the descriptions of materials, especially in two dimensions, where strong interactions are dominant.<\/p>\n\n\n\n\n\n\n\n<p><strong class=\"\">EPFL PI:<\/strong>&nbsp;Pascal Fua<\/p>\n\n\n\n<p><strong>Microsoft PI:<\/strong>&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/cmbishop\/\">Chris Bishop<\/a><br><strong>Research Engineer: <\/strong>Benoit Gherardi<\/p>\n\n\n\n<p>We live in a three-dimensional world full of manufactured objects of ever-increasing complexity. To be functional, they require clever engineering and design. The search for energy-efficient designs of objects, such as the windmill, exemplifies the challenges and promises of such engineering: the blades must have the right shapes to harness as much energy as possible from the wind by balancing lift and drag, and the whole assembly must be strong and light. With ever more powerful simulation techniques and the advent of digital sensors that enable precise measurements, shape engineering relies increasingly on the resulting algorithmic developments. As a result, Computer Aided Design (CAD) has become central to engineering but is not yet capable of addressing all the relevant issues simultaneously. 
Computer Vision and Computer Graphics are among the fields with the greatest potential for impact in CAD, especially given the remarkable progress that deep learning has fostered in these fields. For example, continuous deep implicit-fields have recently emerged as one of the most promising 3D shape-modeling approaches for objects that can be represented by a single watertight surface.<\/p>\n\n\n\n<p>However, current approaches to modeling complex composite objects cannot jointly account for geometric, topological, and engineering constraints as well as performance requirements. To remedy this, we will build latent models that can be used to represent and optimize complex composite shapes while strictly enforcing compatibility constraints between their components and controllability constraints on the whole. A central focus will be on developing training methods that guarantee that the output of the deep networks we train strictly obeys these constraints, something that existing methods that rely on adding ad hoc loss functions cannot do. The results will be integrated into Microsoft\u2019s simulation platforms\u2014AirSim and Bonsai\u2014with a view to rapidly building and designing real-world robots.<\/p>\n\n\n\n\n\n\n\n<p><strong class=\"\">EPFL PIs:<\/strong>&nbsp;Edouard Bugnion, Mathias Payer<br><strong>Microsoft PI:<\/strong>&nbsp;Adrien Ghosn<br><strong>PhD Student:<\/strong> Charly Castes<\/p>\n\n\n\n<p>Confidential computing is an increasingly popular means to wider Cloud adoption. By offering confidential virtual machines and enclaves, Cloud service providers now host organizations, such as banks and hospitals, that abide by stringent legal requirements with regard to their clients\u2019 data confidentiality. These technologies foster sufficient trust to enable such clients to transition to the Cloud, while protecting themselves against a potentially compromised or malicious host. 
Unfortunately, confidential computing solutions depend on bleeding-edge emerging hardware that (1) takes a long time to roll out at the Cloud scale and (2) as a recent technology, lacks a clear consensus on both the underlying hardware mechanisms and the exposed programming model and is thus bound to frequent changes and potential security vulnerabilities. This proposal strives to explore the possibilities of building confidential systems without special hardware support. Instead, we will leverage existing commodity hardware that is already deployed in Cloud datacenters combined with new programming language and formal method techniques and identify how to provide similar or even stronger confidentiality and integrity guarantees than existing confidential hardware. Achieving such a software\/hardware co-design will enable Cloud providers to deploy new Cloud products for confidential computing without waiting for either the standardization or the wide installation of confidential hardware. The key goal of this project is the design and implementation of a trusted, attested, and formally verified monitor acting as a trusted intermediary between resource managers, such as a Cloud hypervisor or an OS, and their clients, e.g., confidential virtual machines and applications. 
We plan to explore how commodity hardware features, such as hardware support for virtualization, can be leveraged in the implementation of such a solution with as little modification as possible to existing hypervisor implementations.<\/p>\n\n\n\n\n\n<div style=\"height:10px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"eth-zurich-projects-2022-2023\">ETH Zurich projects (2022-2023)<\/h3>\n\n\n\n\n\n<p><strong class=\"\">ETH Zurich PIs:<\/strong>&nbsp;Roi Poranne, Stelian Coros<br><strong>Microsoft PIs:<\/strong>&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jedelmer\/\">Jeffrey Delmerico<\/a>,&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/juannieto\/\">Juan Nieto<\/a>,&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\">Marc Pollefeys<\/a><br><strong class=\"\">PhD Student:<\/strong>&nbsp;Florian Kennel-Maushart<\/p>\n\n\n\n<p>Despite popular depictions in sci-fi movies and TV shows, robots remain limited in their ability to autonomously solve complex tasks. Indeed, even the most advanced commercial robots are only just starting to navigate man-made environments while performing simple pick-and-place operations. In order to enable complex high-level behaviours, such as the abstract reasoning required to manoeuvre objects in highly constrained environments, we propose to leverage human intelligence and intuition. The challenge here is one of representation and communication. In order to communicate human insights about a problem to a robot, or to communicate a robot\u2019s plans and intent to a human, it is necessary to utilize representations of space, tasks, and movements that are mutually intelligible to both human and robot. 
This work will focus on the problem of single and multi-robot motion planning with human guidance, where a human assists a team of robots in solving a motion-based task that is beyond the reasoning capabilities of the robot systems. We will exploit the ability of Mixed Reality (MR) technology to communicate spatial concepts between robots and humans, and will focus our research efforts on exploring the representations, optimization techniques, and multi-robot task planning necessary to advance the ability of robots to solve complex tasks with human guidance.<\/p>\n\n\n\n\n\n<p><strong>ETH Zurich PIs:<\/strong> Srdjan Capkun, Shweta Shinde<br><strong class=\"\">Microsoft PIs:<\/strong>&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/manuelc\/\">Manuel Costa<\/a>,&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/fournet\/\">Cedric Fournet<\/a>,&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/svolos\/\">Stavros Volos<\/a><br><strong>PhD Student:<\/strong> Ivan Puddu<\/p>\n\n\n\n<p>The goal of this research project is to reduce the trust placed in the Cloud Service Provider by increasing the customer\u2019s control over the resources assigned to it in the cloud infrastructure. <\/p>\n\n\n\n<p>We plan to investigate this specifically in the context of a device or chiplet owned by the client and then placed within the cloud infrastructure, an \u201cEmbassy Hardware Device\u201d. Such a device would be able to control, manage, and retain access to the data while remaining inaccessible (in terms of data and control flow) to the Cloud Service Provider. 
Several research challenges need to be solved in order to develop an end-to-end working prototype.<\/p>\n\n\n\n\n\n\n\n<p><strong class=\"\">ETH Zurich PI:<\/strong>&nbsp;Onur Mutlu<br><strong>Microsoft PIs:<\/strong>&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/ssaroiu\/\">Stefan Saroiu<\/a>,&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/alecw\/\">Alec Wolman<\/a>,&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/www.linkedin.com\/in\/mark-hill-a0b9a21b4\/\">Mark Hill<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/moscitho\/\">Thomas Moscibroda<\/a><br><strong>PhD Student<\/strong>: Giray Yaglikci<\/p>\n\n\n\n<p>DRAM is the prevalent technology used to architect main memory across a wide range of computing platforms. Unfortunately, DRAM suffers from the RowHammer vulnerability. RowHammer is caused by repeatedly accessing (i.e., hammering) a DRAM row such that the electromagnetic interference that develops due to the rapid DRAM row activations causes bit flips in DRAM rows that are physically adjacent to the hammered row. Prior research demonstrates that the RowHammer vulnerability of DRAM chips worsens as DRAM cell size and cell-to-cell spacing shrink. Numerous works demonstrate RowHammer attacks that escalate user privileges, obtain private keys, manipulate sensitive data, and destroy the accuracy of neural networks. Given that the RowHammer vulnerability of modern DRAM chips worsens and can be used to compromise a wide range of computing platforms, it is crucial to fundamentally understand and solve RowHammer to ensure secure and reliable DRAM operation. 
Our goal in this project is to<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>rigorously study the unexplored aspects of RowHammer via experiments, using hundreds of real DRAM chips, and leverage all the understanding we develop to<\/li>\n\n\n\n<li>experimentally analyze the security guarantees of existing RowHammer mitigation mechanisms (e.g., Target Row Refresh (TRR)),<\/li>\n\n\n\n<li>craft more effective RowHammer access patterns, and<\/li>\n\n\n\n<li>design completely secure, efficient, and low-cost RowHammer mitigation mechanisms.<\/li>\n<\/ol>\n\n\n\n\n\n<p><strong class=\"\">ETH Zurich PI:<\/strong>&nbsp;Otmar Hilliges<br><strong>Microsoft PI:<\/strong>&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/www.linkedin.com\/in\/valentinjulien\/\">Julien Valentin<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br><strong>PhD Student<\/strong>: <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/ait.ethz.ch\/people\/chen\/\" target=\"_blank\" rel=\"noopener noreferrer\">Chen Guo<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n\n\n\n<p>Digital capture of human bodies is a rapidly growing research area in computer vision and computer graphics that puts scenarios such as life-like Mixed Reality (MR) virtual-social interactions into reach, albeit not without overcoming several challenging research problems. A core question in this respect is how to faithfully transmit a virtual copy of oneself so that a remote collaborator may perceive the interaction as immersive and engaging. 
To present a real alternative to face-to-face meetings, future AR\/VR systems will crucially depend on the following two core building blocks:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>means to capture the 3D geometry and appearance (e.g., texture, lighting) of individuals with consumer-grade infrastructure (e.g., a single RGB-D camera) and with very little time and expertise and<\/li>\n\n\n\n<li>means to represent the captured geometry and appearance information in a fashion that is suitable for photorealistic rendering under fine-grained control over the underlying factors such as pose and facial expressions amongst others.<\/li>\n<\/ol>\n\n\n\n<p>In this project, we plan to develop novel methods to learn animatable representations of humans from \u2018cheap\u2019 data sources alone. Furthermore, we plan to extend our own recent work on animatable neural implicit surfaces, such that it can represent not only the geometry but also the appearance of subjects in high visual fidelity. 
Finally, we plan to study techniques to enforce geometric and temporal consistency in such methods to make them suitable for MR and other telepresence downstream applications.<\/p>\n\n\n\n\n\n\n\n<p><strong class=\"\">ETH Zurich PI:<\/strong>&nbsp;Christian Holz<br><strong>Microsoft PI:<\/strong>&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/www.linkedin.com\/in\/tadas-baltrusaitis-234b1234\/\">Tadas Baltrusaitis<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br><strong>PhD Student:<\/strong> Bj\u00f6rn Braun<\/p>\n\n\n\n<p>The&nbsp;<em>passive<\/em>&nbsp;measurement of cognitive stress and its impact on performance in cognitive tasks has a huge potential for human-computer interaction (HCI) and affective computing, including workload optimization or \u201cflow\u201d understanding for future of work productivity scenarios, remote learning, automated tutor systems, as well as stress monitoring, mental health, and telehealth applications more generally. When cognitive demands exceed resources, people experience stress and task performance degrades. In this project, we will develop intelligent software experiences that reduce workers\u2019 stress and optimize their cognitive resources. We will develop sensing models that capture the body\u2019s autonomic nervous system (\u201cfight or flight\u201d) responses to cognitive demands in real-time using information from multiple physiologic processes. These inputs will then help drive AI support that adapts to provide cognitive support while also maintaining autonomy (e.g., avoiding unnecessary and annoying interventions). Specifically, we will develop novel computer vision and signal processing approaches for measuring cardiovascular, respiratory, pupil\/ocular, and dermal changes using ubiquitous sensors. 
For desktop environments, we will develop, evaluate, and demonstrate our methods using non-contact sensing (the webcams built into PCs). For head-mounted displays, we will adapt our methods to utilize signals originating from the wearer\u2019s head using built-in headset sensors. In both cases, our developments will produce novel datasets, computational methods, and the results of in-situ evaluations in productivity scenarios. Using our novel methods, we will also investigate their implications for telehealth scenarios, which often contain cardiovascular and respiratory assessments. We will develop scenarios that guide the user while assessing these metrics and visually present the remote physician with the results for examination.<\/p>\n\n\n\n\n\n<p><strong>ETH Zurich PI:<\/strong>&nbsp;Sebastian Kozerke<br><strong>Microsoft PI:<\/strong>&nbsp;Michael Hansen<br><strong>PhD Student:<\/strong>&nbsp;Pietro Dirix<\/p>\n\n\n\n<p>Cardiovascular Magnetic Resonance Imaging (MRI) has become a key imaging modality to diagnose, monitor and stratify patients suffering from a wide range of cardiovascular diseases. Using Flow MRI, time-resolved blood flow patterns can be quantified throughout the circulatory system providing information on the interplay of anatomical and hemodynamic conditions in health and disease.<\/p>\n\n\n\n<p>Today, inference of Flow MRI data is based on data post-processing, which includes massive data reduction to yield metrics such as mean and peak flow, kinetic energy, and wall shear rates. As a consequence of the data reduction step, however, the wealth of information encoded in the data, including fundamental causal relations, is potentially missed. 
In addition, the dependency of the metrics on parameters of the measurement and image reconstruction process itself compromises the diagnostic yield and the reproducibility of the method, hence hampering further dissemination.<\/p>\n\n\n\n<p>Here we propose to develop and implement a computational framework for Flow Tensor MRI data synthesis to train physics-based neural networks for image reconstruction and inference of the complex interplay of anatomy, coherent and incoherent flows in the aorta in-vivo. Using cloud-based, scalable computing resources, we will demonstrate that synthetically trained reconstruction and inference machines permit high-speed image reconstruction and inference to unravel complex structure-function relations using real-world in-vivo Flow Tensor MRI by exploiting the entirety of information contained in the data along with the information of the measurement process itself.<\/p>\n\n\n\n\n\n\n\n<p><strong class=\"\">ETH Zurich PIs:<\/strong>&nbsp;Marco Tognon, Mike Allenspach, Nicholas Lawrence, Roland Siegwart<br><strong>Microsoft PIs:<\/strong>&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jedelmer\/\">Jeffrey Delmerico<\/a>,&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/juannieto\/\">Juan Nieto<\/a>,&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\">Marc Pollefeys<\/a><br><strong>PhD Student:<\/strong> Mike Allenspach<\/p>\n\n\n\n<p>Our objective is to exploit recent developments in MR to enhance human capabilities with robotic assistance. Robots offer mobility and power but are not capable of performing complex tasks in challenging environments such as construction, contact-based inspection, cleaning, and maintenance. On the other hand, humans have excellent higher-order reasoning, and skilled workers have the experience and training to adapt to new circumstances quickly and effectively. However, they lack mobility and power. 
We aim to reduce this limitation by empowering human operators with the assistance and capabilities provided by a robot system. This requires a human-robot interface that fully leverages the capabilities of both the human operator and the robot system. In this project we aim to explore the problem of shared autonomy for physical interaction tasks in shared physical workspaces. We will explore how an operator can effectively command a robot system using an MR interface over a range of autonomy levels, from low-level direct teleoperation to high-level task specification. We will develop methods for estimating the intent and comfort level of an operator to provide an intuitive and effective interface. Finally, we will explore how to pass information from the robot system back to the human operator for effective understanding of the robot\u2019s plans. We will prove the value of mixed reality interfaces by enhancing human capabilities with robot systems through effective, bilateral communication for a wide variety of complex tasks.<\/p>\n\n\n\n\n\n<p><strong>ETH Zurich PI:<\/strong> Shweta Shinde<br><strong>Microsoft PIs:<\/strong> <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/manuelc\/\">Manuel Costa<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/fournet\/\">Cedric Fournet<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/svolos\/\">Stavros Volos<\/a><br><strong>PhD Student:<\/strong> Mark Kuhne<\/p>\n\n\n\n<p>The goal of this research project is to provide visibility into whether any abuse is happening, particularly if it originates from untrusted software (e.g., Operating System, Hypervisor) or trusted-but-erroneous software (e.g., Trusted Execution Environment management).<\/p>\n\n\n\n<p>The key idea is to have a small piece of trusted software that checks the runtime behavior of the untrusted and trusted-but-erroneous software. 
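<\/p>

<p>One way to picture such a monitor is as a tiny reference monitor that mediates every security-critical resource assignment. The Python sketch below is purely illustrative (the <code>SecurityMonitor<\/code> class and its API are our own hypothetical model, not the project\u2019s design): privileged software still requests page assignments, but the monitor tracks ownership and refuses any mapping that would violate isolation.<\/p>

```python
class SecurityMonitor:
    """Toy reference monitor (hypothetical model, not the project's design).

    Privileged software still manages resources, but every page assignment
    must go through this monitor, which tracks ownership and vetoes any
    mapping that would let one domain reach another domain's memory."""

    def __init__(self):
        self.owner = {}  # page_id -> owning domain

    def assign_page(self, page_id, domain):
        current = self.owner.get(page_id)
        if current is not None and current != domain:
            return False  # would violate isolation; refuse the request
        self.owner[page_id] = domain
        return True

monitor = SecurityMonitor()
assert monitor.assign_page(0, "enclaveA")        # fresh page: allowed
assert not monitor.assign_page(0, "hypervisor")  # abuse attempt: blocked
assert monitor.assign_page(1, "hypervisor")      # disjoint resource: allowed
```

<p>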
Such a minimal security monitor can restrict the privileged software\u2019s capabilities and visibility over the system while still adequately managing the resources.<\/p>\n\n\n\n\n\n\n\n<p><strong class=\"\">ETH Zurich PI:<\/strong>&nbsp;Kaveh Razavi<br><strong>Microsoft PI:<\/strong>&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/bokoepf\/\">Boris K\u00f6pf<\/a><br><strong>PhD Student:<\/strong> Flavien Solt<\/p>\n\n\n\n<p>There is currently a large gap between the capabilities of Electronic Design Automation (EDA) tools and what is required to detect various classes of microarchitectural vulnerabilities pre-silicon. This project aims to bridge this gap by leveraging recent advances in software testing to produce the necessary knowledge and tools for effective hardware testing. Our driving hypothesis is that if we can provide crucial information about the privilege and domain of instructions and\/or data in the microarchitecture during simulation or emulation, then we can easily detect many classes of microarchitectural vulnerabilities. As an example, with the right test cases, we could detect Meltdown-type vulnerabilities, since seemingly different variants all require an instruction that can access data from a different privilege domain.<\/p>\n\n\n\n\n\n\n\n<p><strong class=\"\">ETH Zurich PI:<\/strong>&nbsp;Kenneth G. Paterson<br><strong>Microsoft PIs:<\/strong>&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/fournet\/\">C\u00e9dric Fournet<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/esghosh\/\">Esha Ghosh<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mnaehrig\/\" target=\"_blank\" rel=\"noreferrer noopener\">Michael Naehrig<\/a><br><strong>PhD Student:<\/strong>&nbsp;Mia Fili\u0107<\/p>\n\n\n\n<p>Probabilistic data structures (PDS) are becoming very widely used in practice in the era of \u201cbig data\u201d. 
They are used to process large data sets, often in a streaming setting, and to provide approximate answers to basic data exploration questions such as \u201cHas a particular bit-string in this data stream been encountered before?\u201d or \u201cHow many distinct bit-strings are there in this data set?\u201d. They are increasingly supported in systems like Microsoft Azure Data Explorer, Google BigQuery, Apache Spark, Presto, and Redis, and there is an active research community working on PDS within computer science. Generally, PDS are designed to perform well \u201cin the average case\u201d, where the inputs are selected independently at random from some distribution. We refer to this as the non-adversarial setting. However, they are increasingly being used in adversarial settings, where the inputs can be chosen by an adversary interested in causing the PDS to perform badly in some way, e.g. creating many false positives for a Bloom filter, or underestimating the set cardinality for a cardinality estimator. In recent work, we performed an in-depth analysis of the HyperLogLog (HLL) PDS and its security under adversarial input. 
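<\/p>

<p>To make the adversarial concern concrete, the sketch below shows the textbook Bloom filter construction: k hash-derived bit positions per item, no false negatives, and a false-positive rate that grows as the bit array fills. It is a minimal illustration of the general technique, not code from this project; in an adversarial setting, an attacker who can predict the hash positions can deliberately craft insertions that saturate the array and inflate the false-positive rate.<\/p>

```python
import hashlib

class BloomFilter:
    """Textbook Bloom filter: k hash functions over an m-bit array."""

    def __init__(self, m_bits, k_hashes):
        self.m = m_bits
        self.k = k_hashes          # must be <= 8 with this digest-slicing scheme
        self.bits = [False] * m_bits

    def _positions(self, item):
        # Derive k indices from slices of a single SHA-256 digest.
        digest = hashlib.sha256(item.encode()).digest()
        for i in range(self.k):
            yield int.from_bytes(digest[4 * i: 4 * i + 4], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item):
        # False means definitely absent; True only means "probably present".
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter(m_bits=64, k_hashes=3)
for word in ("alice", "bob", "carol"):
    bf.add(word)

assert all(bf.might_contain(w) for w in ("alice", "bob", "carol"))  # no false negatives
# Absent items may still collide with set bits -> false positives:
fp = sum(bf.might_contain("user%d" % i) for i in range(1000))
print("false positives among 1000 absent items:", fp)
```

<p>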
The proposed research will extend our prior work in three directions:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>address the mergeability problem for HLL;<\/li>\n\n\n\n<li>extend our simulation-based framework for studying the correctness and security of HLL to other PDS in adversarial settings;<\/li>\n\n\n\n<li>study the specific case of cascaded Bloom filters, which have been proposed for use in CRLite, a privacy-preserving system for managing certificate revocation for the webPKI.<\/li>\n<\/ol>\n\n\n\n\n\n<div style=\"height:10px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"epfl-projects-2019-2021\">EPFL projects (2019-2021)<\/h3>\n\n\n\n\n\n<p><strong class=\"\">EPFL PIs:<\/strong>&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/people.epfl.ch\/pascal.fua\" target=\"_blank\" rel=\"noopener noreferrer\">Pascal Fua<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/people.epfl.ch\/Mathieu.Salzmann\" target=\"_blank\" rel=\"noopener noreferrer\">Mathieu Salzmann<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br><strong>Microsoft PIs:<\/strong>&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/butekin\/\">Bugra Tekin<\/a>,&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/sudipsin\/\">Sudipta Sinha<\/a>,&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/febogo\/\">Federica Bogo<\/a>,&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\">Marc Pollefeys<\/a><br><strong>PhD Student:&nbsp;<\/strong><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/www.linkedin.com\/in\/mengshi-qi-684abb97\/\">Mengshi Qi<span class=\"sr-only\"> (opens 
in new tab)<\/span><\/a><\/p>\n\n\n\n<p>In recent years, there has been tremendous progress in camera-based 6D object pose, hand pose, and human 3D pose estimation. These can now be estimated in real time, but not yet to the level of accuracy required to properly capture how people interact with each other and with objects, which is a crucial component of modeling the world in which we live. For example, when someone grasps an object, types on a keyboard, or shakes someone else\u2019s hand, the position of their fingers with respect to what they are interacting with must be precisely recovered for the resulting models to be used by AR devices, such as HoloLens or consumer-level video see-through AR devices. This remains a challenge, especially given the fact that hands are often severely occluded in the egocentric views that are the norm in AR. We will, therefore, work on accurately capturing the interaction between hands and the objects they touch and manipulate. At the heart of this will be the precise modeling of contact points and the resulting physical forces between interacting hands and objects. This is essential for two reasons. First, objects in contact exert forces on each other; their pose and motion can only be accurately captured and understood if reaction forces at contact points and areas are modeled jointly. Second, touch and touch-force devices, such as keyboards and touch-screens, are the most common human-computer interfaces, and by sensing contact and contact forces purely visually, everyday objects could be turned into tangible interfaces that react as if they were equipped with touch-sensitive electronics. For instance, a soft cushion could become a non-intrusive input device that, unlike virtual mid-air menus, provides natural force feedback. 
In this talk, I will present some of our preliminary results and discuss our research agenda for the year to come.<\/p>\n\n\n\n\n\n\n\n<p><strong class=\"\">EPFL PIs:<\/strong>&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/people.epfl.ch\/robert.west\" target=\"_blank\" rel=\"noopener noreferrer\">Robert West<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www3.unifr.ch\/med\/de\/section\/staff\/prof\/people\/299312\/06a29\" target=\"_blank\" rel=\"noopener noreferrer\">Arnaud Chiolero<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br><strong>Microsoft PIs:<\/strong>&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/ryenw\/\">Ryen White<\/a>,&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/horvitz\/\">Eric Horvitz<\/a>,&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/emrek\/\">Emre Kiciman<\/a><br><strong>PhD Student:&nbsp;<\/strong><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/www.linkedin.com\/in\/kristinagligoric\/\">Kristina Gligoric<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n\n\n\n<p>The overall goal of this project is to develop methods for monitoring, modeling, and modifying dietary habits and nutrition based on large-scale digital traces. We will leverage data from both EPFL and Microsoft, to shed light on dietary habits from different angles and at different scales: Our team has access to logs of food purchases made on the EPFL campus with the badges carried by all EPFL members. 
Via the Microsoft collaborators involved, we have access to Web usage logs from IE\/Edge and Bing, and via MSR\u2019s subscription to the Twitter firehose, we gain full access to a major social media platform. Our agenda broadly decomposes into three sets of research questions: (1) Monitoring and modeling: How to mine digital traces for spatiotemporal variation of dietary habits? What nutritional patterns emerge? And how do they relate to, and expand, the current state of research in nutrition? (2) Quantifying and correcting biases: The log data does not directly capture food consumption, but provides indirect proxies; these are likely to be affected by data biases, and correcting for those biases will be an integral part of this project. (3) Modifying dietary habits: Our lab is co-organizing an annual EPFL-wide event called the Act4Change challenge, whose goal is to foster healthy and sustainable habits on the EPFL campus. Our close involvement with Act4Change will allow us to validate our methods and findings on the ground via surveys and A\/B tests. Applications of our work will include new methods for conducting population nutrition monitoring, recommending better-personalized eating practices, optimizing food offerings, and minimizing food waste.<\/p>\n\n\n\n\n\n\n\n<p><strong class=\"\">EPFL PI:<\/strong>&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/people.epfl.ch\/tobias.kippenberg\" target=\"_blank\" rel=\"noopener noreferrer\">Tobias J. 
Kippenberg<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br><strong>Microsoft PI:<\/strong>&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/hiballan\/\">Hitesh Ballani<\/a><br><strong>PhD Student:&nbsp;<\/strong><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/www.linkedin.com\/in\/arslansajid1\/\">Arslan Raja<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n\n\n\n<p>The substantial increase in optical data transmission and cloud computing has fueled research into new technologies that can increase communication capacity. Optical communication through fiber, which traditionally has been used for long-haul communication, is now also employed for short-haul communication, even within data centers. In a similar vein, the increasing capacity crunch in optical fibers, driven in particular by video streaming, can only be met by two degrees of freedom: spatial and wavelength division multiplexing. Spatial multiplexing refers to the use of optical fibers that have multiple cores, allowing the same carrier wavelength to be transmitted in multiple cores. Wavelength division multiplexing (WDM or dense WDM, DWDM) refers to the use of multiple optical carriers on the same fiber. A key advantage of WDM is the ability to increase line-rates on existing legacy networks, without requirements to change existing SMF28 single mode fibers. WDM is also expected to be employed in data centers. Yet to date, WDM implementation within data centers faces a key challenge: a CMOS-compatible, power-efficient source of multiple wavelengths. Existing solutions, such as multi-laser chips based on InP (as developed by Infinera), cannot be readily scaled to a larger number of carriers. As a result, the prevalent solution is to use a bank of multiple, individual laser modules. 
This approach is not viable for data centers due to space and power constraints. Over the past years, a new technology developed at EPFL has rapidly matured: microresonator frequency combs, or microcombs, which satisfy these requirements. The potential of this new technology in telecommunications has recently been demonstrated with the use of microcombs for massively parallel coherent communication on the receiver and transmitter side. Yet to date, the use of such microcombs in data centers has not been addressed.<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>Kippenberg, T. J., Gaeta, A. L., Lipson, M. & Gorodetsky, M. L. Dissipative Kerr solitons in optical microresonators. Science 361, eaan8083 (2018).<\/li>\n\n\n\n<li>Brasch, V. et al. Photonic chip\u2013based optical frequency comb using soliton Cherenkov radiation. Science aad4811 (2015). doi:10.1126\/science.aad4811<\/li>\n\n\n\n<li>Marin-Palomo, P. et al. Microresonator-based solitons for massively parallel coherent optical communications. Nature 546, 274\u2013279 (2017).<\/li>\n\n\n\n<li>Trocha, P. et al. Ultrafast optical ranging using microresonator soliton frequency combs. 
Science 359, 887\u2013891 (2018).<\/li>\n<\/ol>\n\n\n\n\n\n\n\n<p><strong class=\"\">EPFL PIs:<\/strong>&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/people.epfl.ch\/edouard.bugnion\" target=\"_blank\" rel=\"noopener noreferrer\">Edouard Bugnion<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br><strong>Microsoft PIs:<\/strong>&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/irzha\/\">Irene Zhang<\/a>,&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/dports\/\">Dan Ports<\/a>,&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/www.linkedin.com\/in\/marios-kogias-41815333\/?originalSubdomain=uk\">Marios Kogias<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br><strong>PhD Student:&nbsp;<\/strong><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/people.epfl.ch\/konstantinos.prasopoulos\/?lang=en\">Konstantinos Prasopoulos<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n\n\n\n<p>The deployment of a web-scale application within a datacenter can comprise hundreds of software components, deployed on thousands of servers organized in multiple tiers and interconnected by commodity Ethernet switches. These versatile components communicate with each other via Remote Procedure Calls (RPCs), with the cost of an individual RPC service typically measured in microseconds. The end-user performance, availability and overall efficiency of the entire system are largely dependent on the efficient delivery and scheduling of these RPCs. Yet, these RPCs are ubiquitously deployed today on top of general-purpose transport protocols such as TCP. We propose to make RPCs first-class citizens of datacenter deployment. 
This requires revisiting the overall architecture, application API, and network protocols. Our research direction is based on a novel RPC-oriented protocol, R2P2, which separates control flow from data flow and provides in-network scheduling opportunities to tame tail latency. We are also building the tools that are necessary to scientifically evaluate microsecond-scale services.<\/p>\n\n\n\n\n\n<div style=\"height:10px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"eth-zurich-projects-2019-2021\">ETH Zurich projects (2019-2021)<\/h3>\n\n\n\n\n\n<p><strong class=\"\">ETH Zurich PIs:<\/strong>&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/asl.ethz.ch\/the-lab\/people\/person-detail.html?persid=29981\" target=\"_blank\" rel=\"noopener noreferrer\">Roland Siegwart<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/n.ethz.ch\/~cesarc\/\" target=\"_blank\" rel=\"noopener noreferrer\">Cesar Cadena<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br><strong>Microsoft PIs:<\/strong>&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/joschonb\/\">Johannes Sch\u00f6nberger<\/a>,&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\">Marc Pollefeys<\/a><br><strong>PhD Student:&nbsp;<\/strong><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/www.linkedin.com\/in\/lukas-schmid-9b9711179\/\">Lukas Schmid<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n\n\n\n<p>AR\/VR allow new and innovative ways of visualizing information and provide a very intuitive interface for interaction. 
At their core, they rely only on a camera and inertial measurement unit (IMU) setup or a stereo-vision setup to provide the necessary data, either of which are readily available on most commercial mobile devices. Early adoptions of this technology have already been deployed in the real estate business, sports, gaming, retail, tourism, transportation and many other fields. The current technologies for visual-aided motion estimation and mapping on mobile devices have three main requirements to produce highly accurate 3D metric reconstructions: (1) an accurate spatial and temporal calibration of the sensor suite, a procedure which is typically carried out with the help of external infrastructure, such as calibration markers, and by following a set of predefined movements; (2) well-lit, textured environments and feature-rich, smooth trajectories; and (3) the continuous and reliable operation of all sensors involved. This project aims at relaxing these requirements, to enable continuous and robust lifelong mapping on end-user mobile devices. Thus, the specific objectives of this work are: 1. Formalize a modular and adaptable multi-modal sensor fusion framework for online map generation; 2. Improve the robustness of mapping and motion estimation by exploiting high-level semantic features; 3. Develop techniques for automatic detection and execution of sensor calibration in the wild. A modular SLAM (simultaneous localization and mapping) pipeline which is able to exploit all available sensing modalities can overcome the individual limitations of each sensor and increase the overall robustness of the estimation. Such an information-rich map representation allows us to leverage recent advances in semantic scene understanding, providing an abstraction from low-level geometric features \u2013 which are fragile to noise, sensing conditions and small changes in the environment \u2013 to higher-level semantic features that are robust against these effects. 
Using this complete map representation, we will explore new ways to detect miscalibrations and sensor failures, so that the SLAM process can be adapted online without the need for explicit user intervention.<\/p>\n\n\n\n\n\n\n\n<p><strong class=\"\">ETH Zurich PI:<\/strong>&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/inf.ethz.ch\/people\/people-atoz\/person-detail.zhang.html\" target=\"_blank\" rel=\"noopener noreferrer\">Ce Zhang<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br><strong>Microsoft PI:<\/strong>&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/interesaaat.github.io\/\" target=\"_blank\" rel=\"noopener noreferrer\">Matteo Interlandi<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br><strong>PhD Student:&nbsp;<\/strong><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/www.linkedin.com\/in\/bojankarlas\/\">Bojan Karla\u0161<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n\n\n\n<p>The goal of this project is to mine ML.NET historical data, such as user telemetry and logs, to understand how ML.NET transformations and learners are used, and eventually to use this knowledge to automatically provide suggestions to data scientists using ML.NET. Suggestions can take the form of: (1) better or additional recipes for unexplored tasks (e.g., neural networks); (2) auto-completion suggestions for pipelines authored directly in, for example, .NET or Python; and (3) automatic generation of parameters and sweep strategies optimal for the task at hand. We will try to develop a solution that is extensible such that, if new tasks, algorithms, etc. are added to the library, suggestions will eventually be properly upgraded as well. 
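<\/p>

<p>As a toy illustration of the kind of suggestion engine this enables, the sketch below learns, from a handful of hypothetical historical pipelines, which step most often follows a given step (the pipeline data and step names are invented for illustration; a real system would mine actual ML.NET telemetry and logs):<\/p>

```python
from collections import Counter, defaultdict

def build_suggester(pipelines):
    """Learn, from historical pipelines (lists of step names), which step
    most often follows each step; return a suggestion function."""
    followers = defaultdict(Counter)
    for steps in pipelines:
        for a, b in zip(steps, steps[1:]):
            followers[a][b] += 1

    def suggest(step):
        counts = followers.get(step)
        return counts.most_common(1)[0][0] if counts else None

    return suggest

# Invented pipeline histories standing in for mined telemetry.
history = [
    ["load", "normalize", "one_hot", "sdca"],
    ["load", "normalize", "sdca"],
    ["load", "one_hot", "lightgbm"],
]
suggest = build_suggester(history)
assert suggest("load") == "normalize"  # seen in 2 of 3 pipelines
assert suggest("sdca") is None         # always a terminal step
```

<p>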
Additionally, the tool will have to interface with ML.NET and make it easy to add new recipes coming either from users or from the log-mining tool.<\/p>\n\n\n\n\n\n\n\n<p><strong class=\"\">ETH Zurich PI:<\/strong>&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/inf.ethz.ch\/people\/person-detail.MjYyNzgw.TGlzdC8zMDQsLTg3NDc3NjI0MQ==.html\" target=\"_blank\" rel=\"noopener noreferrer\">Siyu Tang<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br><strong>Microsoft PIs:<\/strong>&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\">Marc Pollefeys<\/a>,&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/febogo\/\">Federica Bogo<\/a><br><strong>PhD Student:&nbsp;<\/strong><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/inf.ethz.ch\/people\/people-atoz\/person-detail.MjQyOTY2.TGlzdC8zMDQsLTIxNDE4MTU0NjA=.html\">Siwei Zhang<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n\n\n\n<p>Humans are social beings who frequently interact with one another, spending a large amount of their time socially engaged, working in teams, or simply being part of a crowd. Understanding human interaction from visual input is an important aspect of visual cognition and key to many applications including assistive robotics, human-computer interaction and AR\/VR. Despite rapid progress in estimating the 3D pose and shape of a single person from RGB images, capturing and modelling human interactions is rather poorly studied in the literature. Particularly in first-person-view settings, the problem has drawn little attention from the computer vision community. We argue that it is essential for augmented reality glasses, e.g. 
Microsoft HoloLens, to capture and model the interactions between the camera wearer and others, as the interaction between humans characterises how they move, behave and perform tasks in a collaborative setting.<\/p>\n\n\n\n<p>In this project, we aim to understand how to recognise and predict the interactions between humans under the first-person view setting. To that end, we will create a 3D human-human interaction dataset where the goal is to capture rich and complex interaction signals including body and hand poses, facial expression and gaze directions using Microsoft Kinect and HoloLens. We will develop models that can recognise the dynamics of human interactions and even predict the motion and activities of the interacting humans. We believe such models will facilitate various downstream applications for augmented reality glasses, e.g. Microsoft HoloLens.<\/p>\n\n\n\n\n\n\n\n<p><strong class=\"\">ETH Zurich PIs:<\/strong>&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/asl.ethz.ch\/the-lab\/people\/person-detail.html?persid=29981\" target=\"_blank\" rel=\"noopener noreferrer\">Roland Siegwart<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/asl.ethz.ch\/the-lab\/people\/person-detail.html?persid=244173\" target=\"_blank\" rel=\"noopener noreferrer\">Nicholas Lawrance<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/asl.ethz.ch\/the-lab\/people\/person-detail.MjQ0MTg0.TGlzdC8xNTg0LDEyMDExMzk5Mjg=.html\" target=\"_blank\" rel=\"noopener noreferrer\">Jen Jen Chung<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br><strong>Microsoft PIs:<\/strong>&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/akolobov\/\">Andrey 
Kolobov<\/a>,&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/dedey\/\">Debadeepta Dey<\/a><br><strong>PhD Student:&nbsp;<\/strong><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/asl.ethz.ch\/the-lab\/people\/person-detail.MTgwNDE5.TGlzdC8yMDMwLDEyMDExMzk5Mjg=.html\" target=\"_blank\" rel=\"noopener noreferrer\">Florian Achermann<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n\n\n\n<p>A major factor restricting the utility of UAVs is the amount of energy aboard, which limits the duration of their flights. Birds face largely the same problem, but they are adept at using their vision to aid in spotting \u2014 and exploiting \u2014 opportunities for extracting extra energy from the air around them. Project Altair aims at developing infrared (IR) sensing techniques for detecting, mapping and exploiting naturally occurring atmospheric phenomena called thermals for extending the flight endurance of fixed-wing UAVs. 
In this presentation, we will introduce our vision and goals for this project.<\/p>\n\n\n\n\n\n\n\n<p><strong class=\"\">ETH Zurich PIs:<\/strong>&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/inf.ethz.ch\/people\/people-atoz\/person-detail.hoefler.html\" target=\"_blank\" rel=\"noopener noreferrer\">Torsten Hoefler<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/itp.phys.ethz.ch\/people\/person-detail.html?persid=59275\" target=\"_blank\" rel=\"noopener noreferrer\">Renato Renner<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br><strong>Microsoft PIs:<\/strong>&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mtroyer\/\">Matthias Troyer<\/a>,&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/martinro\/\">Martin Roetteler<\/a><br><strong>PhD Student:&nbsp;<\/strong><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/inf.ethz.ch\/people\/person-detail.MjEyMDA3.TGlzdC8zMDQsLTg3NDc3NjI0MQ==.html\" target=\"_blank\" rel=\"noopener noreferrer\">Niels Gleinig<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n\n\n\n<p>QIRO will establish a new internal representation for compilation systems on quantum computers. Since quantum computation is still emerging, I will provide an introduction to the general concepts of quantum computation and a brief discussion of its strengths and weaknesses from a high-performance computing perspective. This talk is tailored for a computer science audience with basic (popular-science) or no background in quantum mechanics and will focus on the computational aspects. I will also discuss systems aspects of quantum computers and how to map quantum algorithms to their high-level architecture. 
I will close with the principles of practical implementation of quantum computers and outline the project.<\/p>\n\n\n\n\n\n\n\n<p><strong class=\"\">ETH Zurich PI:<\/strong>&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/inf.ethz.ch\/people\/person-detail.krause.html\" target=\"_blank\" rel=\"noopener noreferrer\">Andreas Krause<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br><strong>Microsoft PI:<\/strong>&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/kahofman\/\">Katja Hofmann<\/a><br><strong>PhD Student:&nbsp;<\/strong><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/www.linkedin.com\/in\/davnlin\/\">David Lindner<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n\n\n\n<p>Reinforcement learning (RL) is a promising paradigm in machine learning that has gained considerable attention in recent years, partly because of its successful application in previously unsolved challenging games like Go and Atari. While these are impressive results, applying reinforcement learning in most other domains, e.g. virtual personal assistants, self-driving cars or robotics, remains challenging. One key reason for this is the difficulty of specifying the reward function a reinforcement learning agent is intended to optimize. For instance, in a virtual personal assistant, the reward function might correspond to the user\u2019s satisfaction with the assistant\u2019s behavior and is difficult to specify as a function of observations (e.g. sensory information) available to the system. In such applications, an alternative to specifying the reward function is to actually query the user for the reward. 
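<\/p>

<p>As a toy illustration of the query-budget idea (our own minimal sketch, not the project\u2019s algorithm), the multi-armed bandit below queries an expensive reward oracle only for the first few pulls of each arm and relies on its learned estimate afterwards, so the number of oracle queries stays bounded no matter how long the agent runs:<\/p>

```python
import random

def active_reward_bandit(true_reward, n_arms=3, steps=300, query_budget=5):
    """Epsilon-greedy bandit that queries the costly reward oracle at most
    `query_budget` times per arm, then trusts its running estimate."""
    est = [0.0] * n_arms   # running mean of *queried* rewards per arm
    pulls = [0] * n_arms
    queries = 0
    random.seed(0)         # deterministic for reproducibility
    for t in range(steps):
        if t < n_arms:
            arm = t                                  # try every arm once
        elif random.random() < 0.1:
            arm = random.randrange(n_arms)           # explore
        else:
            arm = max(range(n_arms), key=lambda a: est[a])  # exploit
        pulls[arm] += 1
        if pulls[arm] <= query_budget:
            queries += 1                             # costly oracle query
            r = true_reward(arm)
            est[arm] += (r - est[arm]) / pulls[arm]  # incremental mean
    return est, queries

# Deterministic toy oracle standing in for user feedback.
est, queries = active_reward_bandit(lambda a: [0.1, 0.5, 0.9][a])
assert queries <= 3 * 5                           # bounded, even over 300 steps
assert max(range(3), key=lambda a: est[a]) == 2   # best arm identified
```

<p>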
This, however, is only feasible if the number of queries to the user is limited and the user\u2019s response can be provided in a natural way such that the system\u2019s queries are non-irritating. Similar problems arise in other application domains such as robotics, in which, for instance, the true reward can only be obtained by actually deploying the robot, but an approximation to the reward can be computed by a simulator. In this case, it is important to optimize the agent\u2019s behavior while simultaneously minimizing the number of costly deployments. This project\u2019s aim is to develop algorithms for these types of problems via scalable active reward learning for reinforcement learning. The project\u2019s focus is on scalability in terms of computational complexity (to scale to large real-world problems) and sample complexity (to minimize the number of costly queries).<\/p>\n\n\n\n\n\n\n\n<p><strong class=\"\">ETH Zurich PIs:<\/strong>&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/crl.ethz.ch\/people\/coros\/index.html\" target=\"_blank\" rel=\"noopener noreferrer\">Stelian Coros<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/people.inf.ethz.ch\/poranner\/\" target=\"_blank\" rel=\"noopener noreferrer\">Roi Poranne<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br><strong>Microsoft PIs:<\/strong>&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/febogo\/\">Federica Bogo<\/a>,&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/butekin\/\">Bugra Tekin<\/a>,&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\">Marc Pollefeys<\/a><br><strong>PhD Students:<\/strong>&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" 
target=\"_blank\" href=\"https:\/\/www.linkedin.com\/in\/simon-zimmermann-63621b99\/\">Simon Zimmermann<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n\n\n\n<p>With this project, we aim to accelerate the development of intelligent robots that can assist those in need with a variety of everyday tasks. People suffering from physical impairments, for example, often need help dressing or brushing their own hair. Skilled robotic assistants would allow them to live independently. Even such seemingly simple tasks, however, require complex manipulation of physical objects, advanced motion-planning capabilities, and close interaction with human subjects. We believe the key to robots being able to undertake such societally important functions is learning from demonstration. The fundamental research question is, therefore: how can we enable human operators to seamlessly teach a robot how to perform complex tasks? The answer, we argue, lies in immersive telemanipulation. More specifically, we are inspired by the vision of James Cameron\u2019s Avatar, where humans are endowed with alternative embodiments. In such a setting, the human\u2019s intent must be seamlessly mapped to the motions of a robot as the human operator becomes completely immersed in the environment the robot operates in. 
To achieve this ambitious vision, many technologies must come together: mixed reality as the medium for robot-human communication, perception and action recognition to detect the intent of both the human operator and the human patient, motion retargeting techniques to map the actions of the human to the robot\u2019s motions, and physics-based models to enable the robot to predict and understand the implications of its actions.<\/p>\n\n\n\n\n\n\n\n<p><strong class=\"\">ETH Zurich PI:<\/strong>&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/inf.ethz.ch\/people\/people-atoz\/person-detail.holz.html\" target=\"_blank\" rel=\"noopener noreferrer\">Christian Holz<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br><strong>Microsoft PI:<\/strong>&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/kenh\/\">Ken Hinckley<\/a><br><strong>PhD Student:&nbsp;<\/strong><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/hugo-romat-3a64b188\/\" target=\"_blank\" rel=\"noopener noreferrer\">Hugo Romat<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n\n\n\n<p>Over the past dozen years, touch input \u2013 seemingly well-understood \u2013 has become the predominant means of interacting with devices such as smartphones, tablets, and large displays. Yet we argue that much remains unknown \u2013 in the form of a seen but unnoticed vocabulary of natural touch \u2013 that suggests tremendous untapped potential. For example, touchscreens remain largely ignorant of human activity, manual behavior, and context-of-use beyond the moment of finger-contact with the screen itself. 
In a sense, status quo interactions are trapped in a flatland of touch, while systems remain oblivious to the vibrant world of human behavior, activity, and movement that surrounds them. We posit that an entire vocabulary of naturally-occurring gestures \u2013 both in terms of the activity of the hands, as well as the subtle corresponding motion and compensatory movements of the devices themselves \u2013 exists in plain sight. Our intended outcome is to create a conceptual understanding as well as a deployable interactive system, both of which blend naturally-occurring gestures \u2013 interactions users embody through their actions \u2013 with explicit input through traditional touch operation.<\/p>\n\n\n\n\n\n\n\n<p><strong class=\"\">ETH Zurich PI:<\/strong>&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/ee.ethz.ch\/the-department\/people-a-z\/person-detail.MjIyOTQ5.TGlzdC8zMjc5LC0xNjUwNTg5ODIw.html\" target=\"_blank\" rel=\"noopener noreferrer\">Onur Mutlu<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br><strong>Microsoft PIs:<\/strong>&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/www.linkedin.com\/in\/kvaid\/\">Kushagra Vaid<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/terry-grunzke-b235a95\/\" target=\"_blank\" rel=\"noopener noreferrer\">Terry Grunzke<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/www.linkedin.com\/in\/derek-chiou-58381\/\">Derek Chiou<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br><strong>PhD Student:&nbsp;<\/strong><a 
class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.linkedin.com\/in\/loisorosa\/\" target=\"_blank\" rel=\"noopener noreferrer\">Lois Orosa<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n\n\n\n<p>This project examines the architecture and management of next-generation data center storage devices within the context of realistic data-intensive workloads. The aim is to investigate novel techniques that can greatly improve performance, cost, and efficiency in real-world systems running real-world applications. Breaking the barriers between applications and devices would let software much more effectively and efficiently manage the underlying storage devices, which consist of (potentially different types of) flash memory, emerging SCM (storage class memory) technologies, and (potentially different types of) DRAM. We observe a disconnect in the communication between applications\/software and the NVM devices: the interfaces and designs we currently have enable little communication of useful information from the application\/software level (including the kernel) to the NVM devices, and vice versa. This causes significant performance and efficiency loss and likely fuels higher \u201cmanaged\u201d storage device costs, because applications cannot even communicate their requirements to the devices. 
We aim to fundamentally examine the software-NVM interfaces as well as designs for the underlying storage devices to minimize this disconnect in communication and empower applications and system software to more effectively manage the underlying devices, optimizing important system-level metrics that are of interest to the system designer or the application (at different points in execution).<\/p>\n\n\n\n\n\n<div style=\"height:10px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"epfl-projects-2017-2018\">EPFL projects (2017-2018)<\/h3>\n\n\n\n\n\n<p><strong class=\"\">EPFL PIs:<\/strong>&nbsp;Babak Falsafi, Martin Jaggi<br><strong>Microsoft Co-PI:<\/strong>&nbsp;Eric Chung<\/p>\n\n\n\n<p>Deep Neural Networks (DNNs) have emerged as the algorithms of choice for many prominent machine learning tasks, including image analysis and speech recognition. In datacenters, DNNs are trained on massive datasets to improve prediction accuracy. While the computational demands of performing online inference in an already trained DNN can be met by commodity servers, training DNNs often requires computational density that is orders of magnitude higher than that provided by modern servers. As such, operators often use dedicated clusters of GPUs for training DNNs. Unfortunately, dedicated GPU clusters introduce significant additional acquisition costs, break the continuity and homogeneity of datacenters, and are inherently not scalable. FPGAs are appearing in server nodes either as daughter cards (e.g., Catapult) or coherent sockets (e.g., Intel HARP), providing a great opportunity to co-locate inference and training on the same platform. While these designs enable natural continuity for platforms, co-locating inference and training on a single node faces a number of key challenges. First, FPGAs inherently suffer from low computational density. 
Second, conventional training algorithms do not scale due to inherently high communication requirements. Finally, co-location may lead to contention, requiring mechanisms to prioritize inference over training. In this project, we will address these fundamental challenges in DNN inference\/training co-location on servers with integrated FPGAs. Our goals are:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Redesign training and inference algorithms to take advantage of DNNs\u2019 inherent tolerance for low-precision operations.<\/li>\n\n\n\n<li>Identify good candidates for hard-logic blocks for the next generations of FPGAs.<\/li>\n\n\n\n<li>Redesign DNN training algorithms to aggressively approximate and compress intermediate results, to target communication bottlenecks and scale the training of single networks to an arbitrary number of nodes.<\/li>\n\n\n\n<li>Implement FPGA-based load-balancing techniques in order to provide latency guarantees for inference tasks under heavy loads and enable the use of idle accelerator cycles to train networks when operating under lower loads.<\/li>\n<\/ul>\n\n\n\n\n\n\n\n<p><strong class=\"\">EPFL PIs:<\/strong>&nbsp;Michael Kapralov, Ola Svensson<br><strong>Microsoft Co-PIs:<\/strong>&nbsp;Yuval Peres,&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/nikdev\/\">Nikhil Devanur<\/a>,&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/sebubeck\/\">Sebastien Bubeck<\/a><\/p>\n\n\n\n<p>The task of grouping data according to similarity is a basic computational task with numerous applications. The right notion of similarity often depends on the application, and different measures yield different algorithmic problems. The goal of this project is to design faster and more accurate algorithms for fundamental clustering problems such as the k-means problem, correlation clustering and hierarchical clustering. 
We propose to perform a fine-grained study of these problems and design algorithms that achieve optimal trade-offs between approximation quality, runtime and space\/communication complexity, making our algorithms well-suited for modern data models such as streaming and MapReduce.<\/p>\n\n\n\n\n\n\n\n<p><strong class=\"\">EPFL PIs:<\/strong>&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/people.epfl.ch\/pascal.fua\" target=\"_blank\" rel=\"noopener noreferrer\">Pascal Fua<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/people.epfl.ch\/Mathieu.Salzmann\" target=\"_blank\" rel=\"noopener noreferrer\">Mathieu Salzmann<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br><strong>Microsoft Co-PIs:<\/strong>&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/dedey\/\">Debadeepta Dey<\/a>,&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/akapoor\/\">Ashish Kapoor<\/a>,&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/sudipsin\/\">Sudipta Sinha<\/a><\/p>\n\n\n\n<p>Several companies are now launching drones that autonomously follow and film their owners, often by tracking a GPS device they are carrying. This promises to fundamentally change the way in which drones are used by allowing them to bring back videos of their owners performing activities, such as playing sports, unimpeded by the need to control the drone. In this project, we propose to go one step further and turn the drone into a personal trainer that will not only film but also analyse the video sequences and provide advice on how to improve performance. For example, a golfer could be followed by such a drone that will detect when they swing and offer advice on how to improve the motion. 
Similarly, a skier coming down a slope could be given advice on how to better turn and carve. In short, the drone would replace the GoPro-style action cameras that many people now carry when exercising. Instead of recording what they see, it would film them and augment the resulting sequences with useful advice. To make this solution as lightweight as possible, we will strive to achieve this goal using the on-board camera as the sole sensor and free the user from the need to carry a special device that the drone locks onto. This will require:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Detecting the subject in the video sequences acquired by the drone so as to keep them in the middle of its field of view. This must be done in real time and integrated into the drone\u2019s control system.<\/li>\n\n\n\n<li>Recovering the subject\u2019s 3D pose from the drone\u2019s videos as they move. This can be done with a slight delay since the critique only has to be provided once the motion has been performed.<\/li>\n\n\n\n<li>Providing feedback. In both the golf and ski cases, this would mean quantifying leg, hip, shoulder, and head position during a swing or a turn, offering practical suggestions on how to change them, and showing how an expert would have performed the same action.<\/li>\n<\/ul>\n\n\n\n\n\n\n\n<p><strong class=\"\">EPFL PI:<\/strong>&nbsp;Babak Falsafi<br><strong>Microsoft Co-PI:<\/strong>&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/svolos\/\">Stavros Volos<\/a><\/p>\n\n\n\n<p>Near-memory processing (NMP) is a promising approach to satisfy the performance requirements of modern datacenter services at a fraction of modern infrastructure\u2019s power. NMP leverages emerging die-stacked DRAM technology, which (a) delivers high-bandwidth memory access, and (b) features a logic die, which provides the opportunity for dramatic data movement reduction \u2013 and consequently energy savings \u2013 by pushing computation closer to the data. 
In the precursor to this project (the MSR European PhD Scholarship), we evaluated algorithms suitable for database join operators near memory. We showed that, although sort join has conventionally been considered inferior to hash join in performance on CPUs, near-memory processing favors sequential over random memory access, making sort join superior in performance and efficiency as a near-memory service. In this project, we propose to answer the following questions:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>What data-specific functionality should be implemented near memory (e.g., data filtering, data reorganization, data fetch)?<\/li>\n\n\n\n<li>What ubiquitous, yet simple system-level functionality should be implemented near memory (e.g., security, compression, remote memory access)?<\/li>\n\n\n\n<li>How should the services be integrated with the system (e.g., how does the software use them)?<\/li>\n\n\n\n<li>How do we employ near-threshold logic in near-memory processing?<\/li>\n<\/ul>\n\n\n\n\n\n\n\n<p><strong class=\"\">EPFL PIs:<\/strong>&nbsp;Rachid Guerraoui, Georgios Chatzopoulos<br><strong>Microsoft Co-PI:<\/strong>&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/alekd\/\">Aleksandar Dragojevic<\/a><\/p>\n\n\n\n<p>Modern hardware trends have changed the way we build systems and applications. Increasing memory (DRAM) capacities at reduced prices make keeping all data in memory cost-effective, presenting opportunities for high-performance applications such as in-memory graphs with billions of edges (e.g., Facebook\u2019s TAO). Non-Volatile RAM (NVRAM) promises durability in the presence of failures, without the high price of disk accesses. Yet, even with this increase in inexpensive memory, storing the data in the memory of one machine is still not possible for applications that operate on terabytes of data, and systems need to distribute the data and synchronize accesses among machines. 
This project proposes to design and build support for high-level transactions on top of modern hardware platforms, using the Structured Query Language (SQL). The important question to be answered is whether transactions can derive maximum benefit from these modern networking and hardware capabilities while offering a significantly easier interface for developers to work with. This project will require both research in the transactional support to be offered, including the operations that can be efficiently supported, and research in the execution plans for transactions in this distributed setting.<\/p>\n\n\n\n\n\n\n\n<p><strong class=\"\">EPFL PI:<\/strong>&nbsp;Florin Dinu<br><strong>Microsoft PIs:<\/strong>&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/chrisgk\/\">Christos Gkantsidis<\/a>,&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/serleg\/\">Sergey Legtchenko<\/a><\/p>\n\n\n\n<p>The goal of our project is to improve the utilization of server resources in data centers. Our proposed approach was to attain a better understanding of the resource requirements of data-parallel applications and then incorporate this understanding into the design of more informed and efficient data center (cluster) schedulers. While pursuing these directions, we have identified two related challenges that we believe hold the key to significant additional improvements in application performance as well as cluster-wide resource utilization. We will explore these two challenges \u2013 resource inter-dependency and time-varying resource requirements \u2013 as a continuation of our project. Resource inter-dependency refers to the impact that a change in the allocation of one server resource (memory, CPU, network bandwidth, disk bandwidth) to an application has on that application\u2019s need for the other resources. 
Time-varying resource requirements refers to the fact that an application\u2019s resource requirements may vary over its lifetime. Studying these two challenges together holds the potential for improving resource utilization by aggressively but safely co-locating applications on servers.<\/p>\n\n\n\n\n\n<div style=\"height:10px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"eth-zurich-projects-2017-2018\">ETH Zurich projects (2017-2018)<\/h3>\n\n\n\n\n\n<p><strong class=\"\">ETH Zurich PI:<\/strong>&nbsp;Gustavo Alonso<br><strong>Microsoft Co-PI:<\/strong>&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/eguro\/\">Ken Eguro<\/a><\/p>\n\n\n\n<p>While in the first phase of the project we explored the efficient implementation of data processing operators in FPGAs as well as the architectural issues involved in the integration of FPGAs as co-processors in commodity servers, in this new proposal we intend to focus on architectural aspects of in-network data processing. The choice is motivated by the growing gap between the bandwidth and very low latencies that modern networks support and the overhead of ingress and egress from VMs and applications running on conventional CPUs. A first goal is to explore the type of problems and algorithms that can best be run as the data flows through the network, so as to exploit bare wire speed and allow off-loading of expensive computations to the FPGA. A second, no less important, goal is to explore how to best operate FPGA-based accelerators when directly connected to the network and operating independently from the software part of the application. In terms of applications, the focus will remain on data processing (relational, No-SQL, data warehouses, etc.) with the intention of starting to move towards machine learning algorithms at the end of the two-year project. 
On the network side, the project will work on developing networking protocols suitable for this new configuration and on combining the network stack with the data processing stack.<\/p>\n\n\n\n\n\n\n\n<p><strong class=\"\">ETH Zurich PIs:<\/strong>&nbsp;Onur Mutlu, Luca Benini<br><strong>Microsoft Co-PI:<\/strong>&nbsp;Derek Chiou<\/p>\n\n\n\n<p>Today\u2019s systems are overwhelmingly designed to move data to computation. This design choice goes directly against key trends in systems and technology that cause performance, scalability and energy bottlenecks:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>data access from memory is a key bottleneck as applications become more data-intensive and memory bandwidth and energy do not scale well,<\/li>\n\n\n\n<li>energy consumption is a key constraint, especially in mobile and server systems,<\/li>\n\n\n\n<li>data movement is very costly in terms of bandwidth, energy and latency, much more so than computation.<\/li>\n<\/ul>\n\n\n\n<p>Our goal is to comprehensively examine the premise of adaptively performing computation near where the data resides, when it makes sense to do so, in an implementable manner and considering multiple new memory technologies, including 3D-stacked memory and non-volatile memory (NVM). We will examine practical hardware substrates and software interfaces to accelerate key computational primitives of modern data-intensive applications in memory, as well as runtime and software techniques that can take advantage of such substrates and interfaces. Our special focus will be on key data-intensive applications, including deep learning, neural networks, graph processing, bioinformatics (DNA sequence analysis and assembly), and in-memory data stores. 
Our approach is software\/hardware cooperative, breaking the barriers between the two and melding applications, systems and hardware substrates for extremely efficient execution, while still providing efficient interfaces to the software programmer.<\/p>\n\n\n\n\n\n<p><strong class=\"\">ETH Zurich PI:<\/strong>&nbsp;Otmar Hilliges<br><strong>Microsoft Co-PI:<\/strong>&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\">Marc Pollefeys<\/a><\/p>\n\n\n\n<p>Micro-aerial vehicles (MAVs) have been made accessible to end-users via the emergence of simple-to-use hardware and programmable software platforms, and have seen a surge in consumer and research interest as a consequence. Clearly there is a desire to use such platforms in a variety of application scenarios, but manually flying quadcopters remains a surprisingly hard task even for expert users. More importantly, state-of-the-art technologies offer only very limited support for users who want to employ MAVs to reach a certain high-level goal. This is perhaps best illustrated by the currently most successful application area \u2013 that of aerial videography. While manual flight is hard, piloting and controlling a camera simultaneously is practically impossible. An alternative to manual control is offered via waypoint-based control of MAVs, shielding novices from the underlying complexities. However, this simplicity comes at the cost of flexibility, and existing flight planning tools are not designed with high-level user goals in mind. Building on our own (MSR JRC-funded) prior work, we propose an alternative approach to robotic motion planning. The key idea is to let the user work in solution space \u2013 instead of defining trajectories, the user defines what the resulting output should be (e.g., shot composition, transitions, area to reconstruct). 
We propose an optimization-based approach that takes such high-level goals as input and automatically generates the trajectories and control inputs for a gimbal-mounted camera. We call this solution-space driven, inverse kinematic motion planning. Defining the problem directly in the solution space removes several layers of indirection and allows users to operate in a more natural way, focusing only on the application-specific goals and the quality of the final result, while the control aspects remain entirely hidden.<\/p>\n\n\n\n\n\n<p><strong class=\"\">ETH Zurich PIs:<\/strong>&nbsp;Thomas Hofmann, Aur\u00e9lien Lucchi<br><strong>Microsoft Co-PI:<\/strong>&nbsp;Sebastian Nowozin<\/p>\n\n\n\n<p>The past decade has seen a growth in the application of big data and machine learning systems. Probabilistic models of data are theoretically well understood and in principle provide an optimal approach to inference and learning from data. However, for richly structured data domains such as natural language and images, probabilistic models are often computationally intractable and\/or have to make strong conditional independence assumptions to retain computational as well as statistical efficiency. As a consequence, they are often inferior in predictive performance when compared to current state-of-the-art deep learning approaches. It is natural to ask whether one can combine the benefits of deep learning with those of probabilistic models. The major conceptual challenge is to define deep models that are generative, i.e., that can be thought of as models of the underlying data-generating mechanism. We thus propose to leverage and extend recent advances in generative neural networks to build rich probabilistic models for structured domains such as text and images. The extension of efficient probabilistic neural models will allow us to represent complex and multimodal uncertainty efficiently. 
To demonstrate the usefulness of the developed probabilistic neural models, we plan to apply them to challenging multimodal applications such as creating textual descriptions for images or database records.<\/p>\n\n\n\n\n\n<div style=\"height:10px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"epfl-projects-2014-2016\">EPFL projects (2014-2016)<\/h3>\n\n\n\n\n\n<p><strong class=\"\">EPFL PI:<\/strong>&nbsp;Serge Vaudenay<br><strong>Microsoft PI:<\/strong>&nbsp;Markulf Kohlweiss<\/p>\n\n\n\n<p>For an encryption scheme to be practically useful, it must deliver on two complementary goals: the confidentiality and integrity of encrypted data. Historically, these goals were achieved by combining separate primitives, one to ensure confidentiality and another to guarantee integrity. This approach is neither the most efficient (for instance, it requires processing the input stream at least twice), nor does it protect against implementation errors. To address these concerns, the notion of Authenticated Encryption (AE), which simultaneously achieves confidentiality and integrity, was put forward as a desirable first-class primitive to be exposed by libraries and APIs to the end developer. Providing direct access to AE rather than requiring developers to orchestrate calls to several lower-level functions is seen as a step towards improving the quality of security-critical code. An indication of both the importance of usable AE and the difficulty of getting it right is the number of standards that have been developed over the years. These standards specify different methods for AE: the CCM method is specified in IEEE 802.11i, IPsec ESP, and IKEv2; the GCM method is specified in NIST SP 800-38D; the EAX method is specified in ANSI C12.22; and ISO\/IEC 19772:2009 defines six methods, including five dedicated AE designs and one generic composition method, namely Encrypt-then-MAC. 
Several security issues have recently arisen and been reported in the (mis)use of symmetric-key encryption with authentication in practice. As a result, the cryptographic community has initiated the Competition for Authenticated Encryption: Security, Applicability, and Robustness (CAESAR) to boost public discussion towards a better understanding of these issues, and to identify a portfolio of efficient and secure AE schemes. Our project aims to contribute to the design, analysis, evaluation, and classification of the emerging AE schemes during the CAESAR competition. This work affects many practical security protocols that use AE schemes as indispensable underlying primitives, and it has broader implications for the theory of AE as an important research area in symmetric-key cryptography.<\/p>\n\n\n\n\n\n<p><strong class=\"\">EPFL PIs:<\/strong>&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/people.epfl.ch\/edouard.bugnion\" target=\"_blank\" rel=\"noopener noreferrer\">Edouard Bugnion<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Babak Falsafi<br><strong>Microsoft PI:<\/strong>&nbsp;Dushyanth Narayanan<\/p>\n\n\n\n<p>The goal of the Scale-Out NUMA project is to deliver energy-efficient, low-latency access to remote memory in datacenter applications, with a focus on rack-scale deployments. Such infrastructure will become critical both for web-scale applications and for scale-out analytics, where the dataset can reside in the collective (but distributed) memory of a cluster of servers. Our approach to the problem layers an RDMA-inspired programming model directly on top of a NUMA fabric via a stateless messaging protocol. 
To facilitate interactions between the application, the OS and the fabric, soNUMA relies on the remote memory controller \u2013 a new architecturally-exposed hardware block integrated into the node\u2019s local coherence hierarchy.<\/p>\n\n\n\n\n\n<p><strong class=\"\">EPFL PI:<\/strong>&nbsp;Florin Dinu<br><strong>Microsoft PI:<\/strong>&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/serleg\/\">Sergey Legtchenko<\/a><\/p>\n\n\n\n<p>Our vision is of resource-efficient datacenters where the compute nodes are fully utilized. We see two challenges to manifesting this vision. The first is the increasing use of hardware heterogeneity in datacenters. Heterogeneity, while both unavoidable and desirable, does not lend itself to today\u2019s systems and algorithms, which handle it inefficiently. The second challenge is the aggressive scale-out of datacenters. Scale-out has made it easy to disregard inefficiencies at the level of individual compute nodes because it has historically been easy to expand to new resources. However, apart from being unnecessarily costly, such scale-out techniques are now becoming impractical due to the size of the datasets. Moreover, scale-out often adds new inefficiencies. We argue that to meet these challenges, we must start from a thorough understanding of the resource requirements of today\u2019s datacenter jobs. With this understanding, we aim to design new scheduling techniques that use resources efficiently, even in heterogeneous environments. Further, we aim to fundamentally change the way data-parallel processing systems are built and to make efficient compute-node resource utilization a cornerstone of their design. Our first goal is to automatically characterize the pattern of memory requirements of data-parallel jobs. Specifically, we want to go beyond the current practices that are interested only in peak memory usage. 
Identifying opportunities for efficient memory management requires more granular information. Our second goal is to use knowledge of the pattern of memory requirements to design informed scheduling algorithms that manage memory efficiently. The third goal of the project is to design data-parallel processing systems that manage memory efficiently not only by understanding task memory requirements, but also by shaping those memory requirements.<\/p>\n\n\n\n\n\n<div style=\"height:10px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"eth-zurich-projects-2014-2016\">ETH Zurich projects (2014-2016)<\/h3>\n\n\n\n\n\n<p><strong class=\"\">ETH Zurich PI:<\/strong>&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/inf.ethz.ch\/people\/people-atoz\/person-detail.hoefler.html\" target=\"_blank\" rel=\"noopener noreferrer\">Torsten Hoefler<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br><strong>Microsoft PI:<\/strong>&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mcastro\/\">Miguel Castro<\/a><\/p>\n\n\n\n<p>Disk-backed in-memory key\/value stores are gaining significance as many industries are moving toward big data analytics. Storage space and query time requirements are challenging, since the analysis has to be performed at the lowest cost to be useful from a business perspective. Despite those cost constraints, today\u2019s systems are heavily overprovisioned when it comes to resiliency. The undifferentiated three-copy approach leads to a potential waste of bandwidth and storage resources, which then makes the overall system less efficient or more expensive. We propose to revisit currently used resiliency schemes, with the help of analytical hardware failure models. 
We will utilize those models to capture the tradeoff between the overhead due to replication and the exact resiliency requirements defined in a contract. Our key idea is to model reliability as an explicit resource that the user allocates consciously. In previous work, we have been able to speed up scientific computing applications, as well as a distributed hash table, on several hundred thousand cores by more than 20 percent, with the use of advanced RDMA programming techniques. We have also demonstrated low-cost resiliency schemes based on erasure coding for RDMA environments. In addition, we propose to apply our experience with large-scale RDMA programming to the design of in-memory databases, a problem very similar to distributed hash tables. To make reliability explicit, we plan to extend the key-value store with explicit reliability attributes that allow the user to specify reliability and availability requirements for each key (or group of keys). Our work may change the perspective on datacenter resiliency. Defining fine-grained, per-object resiliency levels and tuning them to the exact environment may provide large cost benefits and impact industry. For example, changing the standard three-replica scheme to erasure coding can easily save 30 percent of storage expenses.<\/p>\n\n\n\n\n\n<p><strong class=\"\">ETH Zurich PI:<\/strong>&nbsp;Gustavo Alonso<br><strong>Microsoft PI:<\/strong>&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/eguro\/\">Ken Eguro<\/a><\/p>\n\n\n\n<p>One of the biggest challenges for software these days is to adapt to the rapid changes in hardware and processor architecture. On the one hand, extracting performance from modern hardware requires dealing with increasing levels of parallelism. On the other hand, the wide variety of architectural possibilities and the multiplicity of processor types raise many questions about the optimal platform for deploying applications.
In this project, we will explore the efficient implementation of data processing operators in FPGAs, as well as the architectural issues involved in integrating FPGAs as co-processors in commodity servers. The target application is big data and data processing engines (relational, NoSQL, data warehouses, etc.). Through this line of work, the project aims at exploring architectures that will result in computing nodes with lower energy consumption and smaller physical size, but capable of providing a performance boost to big data applications. FPGAs should be seen here not as a goal in themselves, but as an enabling platform for exploring the different architectures and levels of parallelism that will allow us to bypass the inherent restrictions of conventional processors. On the practical side, the project will focus both on the use of FPGAs as co-processors inside existing engines and on developing proof-of-concept data processing engines implemented entirely in FPGAs. In this area, the project complements ongoing efforts at Microsoft Research around Cipherbase, a trusted computing system based on SQL Server deployments in the cloud. On the conceptual side, the project will explore the development of data structures and algorithms capable of exploiting the massive parallelism available in FPGAs, with a view to gaining much-needed insights on how to adapt existing data processing systems to multi- and many-core architectures.
Here, we expect to gain insights on how to redesign both standard relational data operators and data mining and machine learning operators to better take advantage of the increasing amounts of parallelism available in future processors.<\/p>\n\n\n\n\n\n<p><strong class=\"\">ETH Zurich PIs:<\/strong>&nbsp;Otmar Hilliges,&nbsp;Marc Pollefeys<br><strong>Microsoft PI:<\/strong>&nbsp;Shahram Izadi<\/p>\n\n\n\n<p>In recent years, robotics research has made tremendous progress, and it is becoming conceivable that robots will be as ubiquitous and irreplaceable in our daily lives as they are within industrial settings. Continued improvements in mechatronics and control, coupled with continued advances in consumer electronics, have made robots ever smaller, more autonomous, and more agile. One area of recent advances in robotics is the micro-aerial vehicle (MAV) [14, 16]. These small, agile flying robots can operate in 3D space, indoors and outdoors, and can carry small payloads \u2014 including input and output devices. They can also navigate difficult environments, such as stairs, more easily than terrestrial robots, and hence can reach locations that no other robot, or indeed human, can reach. Surprisingly, to date there is little research on such flying robots in an interactive context or on MAVs operating in close proximity to humans. In our project, we explore the opportunities that arise from aerial robots that operate in close proximity to, and in collaboration with, a human user. In particular, we are interested in developing a robotic platform in which a) the robot is aware of the human user and can navigate relative to the user; b) the robot can recognize various gestures from afar, as well as receive direct, physical manipulations; and c) the robot can carry small payloads \u2014 in particular input and output devices such as additional cameras or projectors.
Finally, we are developing novel algorithms to track and recognize user input using the onboard cameras, in real time and with very low latency, building on the now substantial body of research on gestural and natural interfaces. Gesture recognition can be used for MAV control (for example, controlling the camera) or to interact with virtual content.<\/p>\n\n\n\n\n\n<p><strong class=\"\">ETH Zurich PI:<\/strong>&nbsp;Roger Wattenhofer<br><strong>Microsoft PI:<\/strong>&nbsp;Ratul Mahajan<\/p>\n\n\n\n<p>The Internet is designed as a robust service that remains usable even when selfish participants are present; as a consequence, a loss in total performance must be accepted. However, if a whole wide-area network (WAN) were controlled by a single entity, why should one use the very techniques designed for the Internet? Large providers such as Microsoft, Amazon, or Google operate their own WANs, which cost them hundreds of millions of dollars per year; yet even their busiest links average only 40\u201360 percent utilization. This motivates Software Defined Networks (SDNs), which allow the separation of the data plane and the control plane in a network. A centralized controller can install and update rules all over the WAN to optimize its goals. Despite SDNs receiving a lot of attention in both theory and practice, many questions are still unanswered. Even though the control of the network is centralized, distributing updates does not happen instantaneously. Numerous problems can occur, such as dropped packets, forwarding loops, exceeded memory or bandwidth limits of switches and links, and violated packet coherence. These problems must be solved before SDNs can be broadly deployed. This research project sheds more light on these fundamental issues of SDNs and how they can be tackled.
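One well-known remedy from the SDN literature for such transient inconsistencies is a two-phase update with version stamping: packets are tagged with a configuration version at the ingress and therefore match rules from exactly one configuration. The toy sketch below (all class and rule names are ours) illustrates the idea.

```python
# Toy two-phase SDN update: rules are keyed by configuration version, and a
# packet's version is fixed once at the ingress, so a packet can never mix
# old and new rules mid-path (avoiding transient loops and drops).
class Switch:
    def __init__(self):
        self.rules = {}  # (version, destination) -> next hop

    def install(self, version, dst, next_hop):
        self.rules[(version, dst)] = next_hop

def route(switches, version, src, dst):
    hops, node = [], src
    while node != dst:
        node = switches[node].rules[(version, dst)]
        hops.append(node)
    return hops

switches = {"A": Switch(), "B": Switch(), "C": Switch()}
# Old configuration (version 1): A -> B -> D
switches["A"].install(1, "D", "B")
switches["B"].install(1, "D", "D")
# Phase 1: install the new configuration (version 2: A -> C -> D) alongside.
switches["A"].install(2, "D", "C")
switches["C"].install(2, "D", "D")
# Phase 2: flip the ingress stamp to 2; in-flight packets still match v1.
print(route(switches, 1, "A", "D"))  # ['B', 'D']
print(route(switches, 2, "A", "D"))  # ['C', 'D']
```

The cost of this consistency is doubled rule-table occupancy during the transition, which is one reason switch memory limits matter.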
In parallel, we look at SDNs from a game-theoretic perspective.<\/p>\n\n\n\n\n\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>The Microsoft Research Swiss Joint Research Center was established on June 2013 as the renewal of ICES (Innovation Cluster for Embedded Software), a collaborative research engagement between ETH Zurich, \u00c9cole Polytechnique Federale de Lausanne (EPFL), and Microsoft Research. Their shared vision is that the center undertake the toughest computer science challenges in areas as diverse as human-computer interaction, machine vision, performance and energy scalability, mobile computing, and data center optimization.<\/p>\n","protected":false},"featured_media":665967,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr_group_start":"","footnotes":""},"research-area":[13562,13554],"msr-group-type":[243721],"msr-locale":[268875],"msr-impact-theme":[],"class_list":["post-611553","msr-group","type-msr-group","status-publish","has-post-thumbnail","hentry","msr-research-area-computer-vision","msr-research-area-human-computer-interaction","msr-group-type-collaboration","msr-locale-en_us"],"msr_group_start":"","msr_detailed_description":"","msr_further_details":"","msr_hero_images":[],"msr_research_lab":[199561,602418],"related-researchers":[{"type":"guest","display_name":"Florian Achermann","user_id":691248,"people_section":"Current Students","alias":""},{"type":"guest","display_name":"Niels Gleinig","user_id":691257,"people_section":"Current Students","alias":""},{"type":"guest","display_name":"Kristina Gligoric","user_id":691239,"people_section":"Current Students","alias":""},{"type":"guest","display_name":"Bojan Karla\u0161","user_id":691269,"people_section":"Current Students","alias":""},{"type":"guest","display_name":"David Lindner","user_id":691245,"people_section":"Current 
Students","alias":""},{"type":"guest","display_name":"Mengshi Qi","user_id":691266,"people_section":"Current Students","alias":""},{"type":"guest","display_name":"Arslan Raja","user_id":691236,"people_section":"Current Students","alias":""},{"type":"guest","display_name":"Hugo  Romat","user_id":696222,"people_section":"Current Students","alias":""},{"type":"guest","display_name":"Lukas Schmid","user_id":691263,"people_section":"Current Students","alias":""},{"type":"guest","display_name":"Siwei Zhang","user_id":696514,"people_section":"Current Students","alias":""},{"type":"guest","display_name":"Simon Zimmermann","user_id":691260,"people_section":"Current Students","alias":""},{"type":"user_nicename","display_name":"Hitesh Ballani","user_id":32008,"people_section":"Current PIs","alias":"hiballan"},{"type":"guest","display_name":"Edouard Bugnion","user_id":693660,"people_section":"Current PIs","alias":""},{"type":"guest","display_name":"Cesar Cadena","user_id":693714,"people_section":"Current PIs","alias":""},{"type":"guest","display_name":"Arnaud Chiolero","user_id":693654,"people_section":"Current PIs","alias":""},{"type":"guest","display_name":"Derek Chiou","user_id":696309,"people_section":"Current PIs","alias":""},{"type":"guest","display_name":"Jen Jen Chung","user_id":693684,"people_section":"Current PIs","alias":""},{"type":"guest","display_name":"Stelian Coros","user_id":693702,"people_section":"Current PIs","alias":""},{"type":"guest","display_name":"Pascal Fua","user_id":693723,"people_section":"Current PIs","alias":""},{"type":"guest","display_name":"Torsten Hoefler","user_id":693690,"people_section":"Current PIs","alias":""},{"type":"guest","display_name":"Christian Holz","user_id":693735,"people_section":"Current PIs","alias":""},{"type":"user_nicename","display_name":"Eric Horvitz","user_id":32033,"people_section":"Current PIs","alias":"horvitz"},{"type":"guest","display_name":"Matteo Interlandi","user_id":694212,"people_section":"Current 
PIs","alias":""},{"type":"user_nicename","display_name":"Emre Kiciman","user_id":31739,"people_section":"Current PIs","alias":"emrek"},{"type":"guest","display_name":"Tobias Kippenberg","user_id":693609,"people_section":"Current PIs","alias":""},{"type":"guest","display_name":"Marios Kogias","user_id":691242,"people_section":"Current PIs","alias":""},{"type":"user_nicename","display_name":"Andrey Kolobov","user_id":30910,"people_section":"Current PIs","alias":"akolobov"},{"type":"guest","display_name":"Andreas Krause","user_id":693666,"people_section":"Current PIs","alias":""},{"type":"guest","display_name":"Nicholas Lawrance","user_id":693681,"people_section":"Current PIs","alias":""},{"type":"guest","display_name":"Onur Mutlu","user_id":693747,"people_section":"Current PIs","alias":""},{"type":"guest","display_name":"Juan Nieto","user_id":693720,"people_section":"Current PIs","alias":""},{"type":"user_nicename","display_name":"Marc Pollefeys","user_id":36191,"people_section":"Current PIs","alias":"mapoll"},{"type":"guest","display_name":"Roi Poranne","user_id":693708,"people_section":"Current PIs","alias":""},{"type":"user_nicename","display_name":"Dan Ports","user_id":37404,"people_section":"Current PIs","alias":"dports"},{"type":"guest","display_name":"Renato Renner","user_id":693696,"people_section":"Current PIs","alias":""},{"type":"guest","display_name":"Mathieu Salzmann","user_id":693729,"people_section":"Current PIs","alias":""},{"type":"guest","display_name":"Roland Siegwart","user_id":693675,"people_section":"Current PIs","alias":""},{"type":"guest","display_name":"Siyu Tang","user_id":694224,"people_section":"Current PIs","alias":""},{"type":"user_nicename","display_name":"Matthias Troyer","user_id":36533,"people_section":"Current PIs","alias":"mtroyer"},{"type":"guest","display_name":"Kushagra  Vaid","user_id":696315,"people_section":"Current PIs","alias":""},{"type":"guest","display_name":"Robert West","user_id":693648,"people_section":"Current 
PIs","alias":""},{"type":"user_nicename","display_name":"Ryen W. White","user_id":33481,"people_section":"Current PIs","alias":"ryenw"},{"type":"guest","display_name":"Ce Zhang","user_id":693741,"people_section":"Current PIs","alias":""},{"type":"user_nicename","display_name":"Irene Zhang","user_id":37032,"people_section":"Current PIs","alias":"irzha"},{"type":"guest","display_name":"Gustavo Alonso","user_id":691281,"people_section":"Steering Committee","alias":""},{"type":"guest","display_name":"Marc Holitscher","user_id":691287,"people_section":"Steering Committee","alias":""},{"type":"guest","display_name":"James Larus","user_id":691275,"people_section":"Steering Committee","alias":""}],"related-publications":[],"related-downloads":[],"related-videos":[840733,839887,839905,839980,839992,840010,840058,840670,840679,840688,840703,840724,756076,840748,840766,840778,840790,840802,840817,840826,841054,841069,841618,841633,753571,753379,753388,753430,753439,753451,753460,753469,753478,753487,753553,753562,750730,753580,753589,753598,753607,753619,753628,753640,753655,754108,754126,755167],"related-projects":[],"related-events":[1015125,835105,816505,726349,621243,605931,560223,442047,353801],"related-opportunities":[],"related-posts":[683,364241,442788,566097,577674,636528],"tab-content":[{"id":0,"name":"Projects","content":"<h3>EPFL projects (2022-2023)<\/h3>\r\n[accordion]\r\n[panel header=\"Development of a Geometric Deep Learning Framework for Protein Modeling and Design\"]\r\n<p style=\"margin-bottom: 0px\"><strong>EPFL PIs:<\/strong> Bruno Correia (and Michael Bronstein, Imperial)<\/p>\r\n<strong>Microsoft PIs:<\/strong> Max Welling, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/cmbishop\/\">Chris Bishop<\/a>\r\n\r\nProteins play a crucial role in every form of life. The function of proteins is largely determined by their 3D structure and the way they interact with other molecules. 
Understanding the mechanisms that govern protein structure and protein interactions with other molecules is a holy grail of biology that also paves the way to ground-breaking new applications in biotechnology and medicine. Over the past three decades, large amounts of structural data on proteins have been made available to the wider scientific community. This has created opportunities for machine learning (ML) approaches to improve our ability to understand the governing principles of these molecules, as well as to develop computational approaches for the design of novel proteins and small-molecule drugs. The three-dimensional structures of proteins and molecular objects are a natural fit for <em>Geometric Deep Learning<\/em> (GDL). In this proposal, we will develop GDL-based approaches that describe molecular entities using point clouds engraved with descriptors capturing physical features (geometry and chemistry), optimized to describe different aspects of proteins. Specifically, through the aims of this grant we will attempt to: capture dynamic features of protein surfaces (Aim 1); leverage the surface descriptors to condition the generation of small molecules to engage specific pockets (Aim 2); and couple new structure prediction algorithms with surface descriptor optimization for the design of new functions in proteins (Aim 3). Towards the generative aspects of our application (designing new surfaces, small molecules, proteins), a common problem is that the spaces to be sampled are extremely large, and thus the expertise within the Microsoft Research team could be critical to reaching a functional solution. Specifically, the expertise in variational autoencoders, equivariant architectures, and Bayesian optimization will be of major importance.
In summary, we propose a novel approach powered by cutting-edge computational methods to model and design <em>de novo<\/em> proteins, an approach with enormous potential to help address problems in medicine and biotechnology.\r\n\r\n[\/panel][panel header=\"EPFL Smart Kitchen: Home-Based Functional Assessment Platform for Neurological Patients\"]\r\n<p style=\"margin-bottom: 0px\"><strong>EPFL PIs:<\/strong> Alexander Mathis, Friedhelm Hummel, Silvestro Micera<\/p>\r\n<strong>Microsoft PIs:<\/strong> <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/butekin\/\">Bugra Tekin<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\">Marc Pollefeys<\/a>\r\n\r\nDespite many advances in neuroprosthetics and neurorehabilitation, the techniques to measure, personalize, and thus optimize the functional improvements that patients gain with therapy are limited. Impairments are still assessed by standardized functional tests, which fail to capture everyday behaviour and quality of life, are of limited use for personalization, and have to be performed by trained health care professionals in the clinical environment. By leveraging recent advances in motion capture and hardware, we will create novel metrics to evaluate, personalize, and improve the dexterity of patients in their everyday life. We will utilize the EPFL Smart Kitchen platform to assess naturalistic behaviour in the kitchens of healthy subjects, upper-limb amputees, and stroke patients, filmed with a head-mounted camera (Microsoft HoloLens). We will develop a computer vision pipeline capable of measuring hand-object interactions in patients\u2019 kitchens. Based on this novel, large-scale dataset collected in patients\u2019 kitchens, we will derive metrics that measure dexterity in the \u201cnatural world,\u201d as well as recovered and compensatory movements due to the pathology\/assistive device.
We will also use those data to assess novel control strategies for neuroprosthetics and design optimal, personalized rehabilitation treatment by leveraging virtual reality.\r\n\r\n[\/panel][panel header=\"Invariant Federated Learning: Decentralized Training of Robust Privacy-Preserving Models\"]\r\n<p style=\"margin-bottom: 0px\"><strong>EPFL PIs:<\/strong> Robert West, Valentin Hartmann, Maxime Peyrard<\/p>\r\n<strong>Microsoft PIs:<\/strong> <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/didimit\/\">Dimitrios Dimitriadis<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/emrek\/\">Emre K\u0131c\u0131man<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/rsim\/\">Robert Sim<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/shtople\/\">Shruti Tople<\/a>\r\n\r\nAs machine learning (ML) models are becoming more complex, there has been a growing interest in making use of decentrally generated data (e.g., from smartphones) and in pooling data from many actors. At the same time, however, privacy concerns about organizations collecting data have risen. As an additional challenge, decentrally generated data is often highly heterogeneous, thus breaking assumptions needed by standard ML models. Here, we propose to \u201ckill two birds with one stone\u201d by developing Invariant Federated Learning, a framework for training ML models without directly collecting data, while not only being robust to, but even benefiting from, heterogeneous data. For the problem of learning from distributed data, the Federated Learning (FL) framework has been proposed. Instead of sharing raw data, clients share model updates to help train an ML model on a central server. We combine this idea with the recently proposed Invariant Risk Minimization (IRM) approach, a solution for causal learning.
IRM aims to build models that are robust to changes in the data distribution and provide better out-of-distribution (OOD) generalization by using data from different environments during training. This integrates naturally with FL, where each client may be seen as constituting its own environment. We seek to gain robustness to distributional changes and better OOD generalization, as compared to FL methods based on the standard empirical risk minimization. Previous work has further shown that causal models possess better privacy properties than associational models [26]. We will turn these theoretical insights into practical algorithms to, e.g., provide Differential Privacy guarantees for FL. The project proposed here integrates naturally with ideas pursued in the context of the Microsoft Turing Academic Program (MS-TAP), where the PI\u2019s lab is already collaborating with Microsoft (including Emre K\u0131c\u0131man, a co-author of this proposal) in order to make language models more robust via IRM.\r\n\r\n[\/panel][panel header=\"Neural Quantum States for Electronic Systems and 2D Quantum Materials\"]\r\n<p style=\"margin-bottom: 0px\"><strong>EPFL PI:<\/strong> Giuseppe Carleo<\/p>\r\n<strong>Microsoft PIs:<\/strong> Max Welling, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/cmbishop\/\">Chris Bishop<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mtroyer\/\">Matthias Troyer<\/a>\r\n\r\nThe fundamental equations governing interacting quantum-mechanical matter in solids have been known for over 90 years. However, these equations are simply \"much too complicated to be soluble\" (Paul A. M. Dirac, 1929). Besides experiments, the main source of information that we have available originates from computational methods to simulate these systems. Machine learning approaches based on artificial neural networks (NN) have recently been shown to be a new powerful tool in simulating systems governed by the laws of quantum mechanics. 
The leading approach in the field, pioneered by Carleo and Troyer, is known as neural quantum states and has been successfully applied to several model quantum systems. For these prototypical and simplified \u2013 yet hard to solve \u2013 models of interacting quantum matter, neural quantum states have shown state-of-the-art \u2013 or better \u2013 performance. Despite this success, however, the application of neural quantum states to the ab-initio simulation of solids and materials is largely unexplored, both theoretically and computationally. Compared to methods for quantum spin systems, this requires methods that intrinsically work on continuous degrees of freedom, rather than discrete ones. Examples of important systems that can be studied with continuous-space methods are crystals and several phases of matter that show a periodic lattice structure. In this project, we will introduce deep-learning-based approaches for the ab-initio simulation of solids, with a focus on imposing physical symmetries and scalability. With a powerful and efficient computational method to simulate continuous-space atomic quantum systems, we will be able to access unprecedented regimes of accuracy for the description of materials, especially in two dimensions, where strong interactions are dominant.\r\n\r\n[\/panel][panel header=\"Smart Controllable Models (SCM)\"]\r\n<p style=\"margin-bottom: 0px\"><strong>EPFL PI:<\/strong> Pascal Fua<\/p>\r\n<strong>Microsoft PI:<\/strong> <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/akapoor\/\">Ashish Kapoor<\/a>\r\n\r\nWe live in a three-dimensional world full of manufactured objects of ever-increasing complexity. To be functional, they require clever engineering and design.
The search for energy-efficient designs of objects such as the windmill exemplifies the challenges and promise of such engineering: the blades must have the right shapes to harness as much energy from the wind as possible by balancing lift and drag, and the whole assembly must be strong and light. With ever more powerful simulation techniques and the advent of digital sensors that enable precise measurements, shape engineering relies increasingly on the resulting algorithmic developments. As a result, Computer Aided Design (CAD) has become central to engineering but is not yet capable of addressing all the relevant issues simultaneously. Computer Vision and Computer Graphics are among the fields with the greatest potential for impact in CAD, especially given the remarkable progress that deep learning has fostered in these fields. For example, continuous deep implicit fields have recently emerged as one of the most promising 3D shape-modeling approaches for objects that can be represented by a single watertight surface.\r\n\r\nHowever, current approaches to modeling complex composite objects cannot jointly account for geometric, topological, and engineering constraints, as well as performance requirements. To remedy this, we will build latent models that can be used to represent and optimize complex composite shapes while strictly enforcing compatibility constraints between their components and controllability constraints on the whole. A central focus will be on developing training methods that guarantee that the output of the deep networks we train strictly obeys these constraints, something that existing methods, which rely on adding ad hoc loss functions, cannot do.
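The general idea of satisfying constraints by construction rather than by penalty can be sketched as follows: route raw network outputs through a parameterization whose image is exactly the feasible set. The constraint used here (positive component masses summing to a fixed material budget) and the helper name are our own illustrative choices, not the project's actual constraint set.

```python
import math

# Constraints-by-construction sketch: a scaled softmax maps ANY vector of
# raw outputs to positive values summing exactly to the given budget, so no
# penalty term is needed and no output can ever violate the constraint.
def feasible_masses(raw_outputs, budget):
    exps = [math.exp(x) for x in raw_outputs]
    total = sum(exps)
    return [budget * e / total for e in exps]

masses = feasible_masses([0.3, -1.2, 2.0], budget=10.0)
assert all(m > 0 for m in masses)      # positivity holds by construction
assert abs(sum(masses) - 10.0) < 1e-9  # budget is met exactly
```

A penalty-based loss only discourages violations on the training distribution; a parameterization like this rules them out for every input.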
The results will be integrated into Microsoft\u2019s simulation platforms \u2014 AirSim and Bonsai \u2014 with a view to rapidly designing and building real-world robots.\r\n\r\n[\/panel][panel header=\"Tyche: Confidential Computing on Yesterday\u2019s Hardware\"]\r\n<p style=\"margin-bottom: 0px\"><strong>EPFL PIs:<\/strong> Edouard Bugnion, Mathias Payer<\/p>\r\n<strong>Microsoft PIs:<\/strong> Adrien Ghosn, <a href=\"https:\/\/marioskogias.github.io\/\">Marios Kogias<\/a>\r\n\r\nConfidential computing is an increasingly popular means to wider Cloud adoption. By offering confidential virtual machines and enclaves, Cloud service providers now host organizations, such as banks and hospitals, that abide by stringent legal requirements with regard to their clients\u2019 data confidentiality. These technologies foster sufficient trust to enable such clients to transition to the Cloud, while protecting themselves against a potentially compromised or malicious host. Unfortunately, confidential computing solutions depend on bleeding-edge emerging hardware that (1) takes a long time to roll out at Cloud scale and (2), as a recent technology, lacks a clear consensus on both the underlying hardware mechanisms and the exposed programming model, and is thus subject to frequent changes and potential security vulnerabilities. This proposal strives to explore the possibilities of building confidential systems without special hardware support. Instead, we will leverage existing commodity hardware that is already deployed in Cloud datacenters, combined with new programming-language and formal-methods techniques, and identify how to provide confidentiality and integrity guarantees similar to, or even stronger than, those of existing confidential hardware. Achieving such a software\/hardware co-design will enable Cloud providers to deploy new Cloud products for confidential computing without waiting for either the standardization or the wide installation of confidential hardware.
The key goal of this project is the design and implementation of a trusted, attested, and formally verified monitor acting as a trusted intermediary between resource managers, such as a Cloud hypervisor or an OS, and their clients, e.g., confidential virtual machines and applications. We plan to explore how commodity hardware features, such as hardware support for virtualization, can be leveraged in the implementation of such a solution with as little modification as possible to existing hypervisor implementations.\r\n\r\n[\/panel][\/accordion]\r\n<div style=\"height: 40px\"><\/div>\r\n<h3>ETH Zurich projects (2022-2023)<\/h3>\r\n[accordion]\r\n[panel header=\"Collaborative Human-Robot Motion Planning with Mixed Reality\"]\r\n<p style=\"margin-bottom: 0px\"><strong>ETH Zurich PIs:<\/strong> Roi Poranne, Stelian Coros<\/p>\r\n<strong>Microsoft PIs:<\/strong> <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jedelmer\/\">Jeffrey Delmerico<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/juannieto\/\">Juan Nieto<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\">Marc Pollefeys<\/a>\r\n\r\nDespite popular depictions in sci-fi movies and TV shows, robots remain limited in their ability to autonomously solve complex tasks. Indeed, even the most advanced commercial robots are only now just starting to navigate man-made environments while performing simple pick-and-place operations. In order to enable complex high-level behaviours, such as the abstract reasoning required to manoeuvre objects in highly constrained environments, we propose to leverage human intelligence and intuition. The challenge here is one of representation and communication. In order to communicate human insights about a problem to a robot, or to communicate a robot\u2019s plans and intent to a human, it is necessary to utilize representations of space, tasks, and movements that are mutually intelligible for both human and robot. 
This work will focus on the problem of single and multi-robot motion planning with human guidance, where a human assists a team of robots in solving a motion-based task that is beyond the reasoning capabilities of the robot systems. We will exploit the ability of Mixed Reality (MR) technology to communicate spatial concepts between robots and humans, and will focus our research efforts on exploring the representations, optimization techniques, and multi-robot task planning necessary to advance the ability of robots to solve complex tasks with human guidance.\r\n\r\n[\/panel][panel header=\"Fundamentally Understanding and Solving RowHammer\"]\r\n<p style=\"margin-bottom: 0px\"><strong>ETH Zurich PI:<\/strong> Onur Mutlu<\/p>\r\n<strong>Microsoft PIs:<\/strong> <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/ssaroiu\/\">Stefan Saroiu<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/alecw\/\">Alec Wolman<\/a>, <a href=\"https:\/\/www.linkedin.com\/in\/mark-hill-a0b9a21b4\/\">Mark Hill<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/moscitho\/\">Thomas Moscibroda<\/a>\r\n\r\nDRAM is the prevalent technology used to architect main memory across a wide range of computing platforms. Unfortunately, DRAM suffers from the RowHammer vulnerability. RowHammer occurs when repeatedly accessing (i.e., hammering) a DRAM row causes electromagnetic interference that, through the rapid row activations, flips bits in DRAM rows that are physically near the hammered row. Prior research demonstrates that the RowHammer vulnerability of DRAM chips worsens as DRAM cell size and cell-to-cell spacing shrink. Numerous works demonstrate RowHammer attacks that escalate user privileges, obtain private keys, manipulate sensitive data, and destroy the accuracy of neural networks.
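The mechanism can be caricatured in a few lines: mitigations such as activation-count tracking follow directly from it. The threshold and one-row blast radius below are made-up illustrative values; real thresholds and blast radii are exactly the kind of quantity this project measures on real DRAM chips.

```python
# Purely illustrative RowHammer model: rows activated more often than some
# chip-dependent threshold within a refresh window put their physical
# neighbours at risk of bit flips.
HAMMER_THRESHOLD = 50_000  # hypothetical activations per refresh window

def rows_at_risk(activation_counts, threshold=HAMMER_THRESHOLD):
    """Rows physically adjacent to any row activated past the threshold."""
    at_risk = set()
    for row, count in activation_counts.items():
        if count >= threshold:
            at_risk.update({row - 1, row + 1})  # immediate neighbours only
    return at_risk

counts = {7: 120_000, 42: 1_000}     # row 7 is hammered; row 42 is benign
print(sorted(rows_at_risk(counts)))  # [6, 8]
```

A mitigation like Target Row Refresh works on the same principle in reverse: detect rows approaching the threshold and refresh their neighbours before charge leakage accumulates into a flip.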
Given that the RowHammer vulnerability of modern DRAM chips continues to worsen and can be used to compromise a wide range of computing platforms, it is crucial to fundamentally understand and solve RowHammer to ensure secure and reliable DRAM operation. Our goal in this project is to\r\n<ol>\r\n \t<li>rigorously study the unexplored aspects of RowHammer via experiments using hundreds of real DRAM chips, and leverage all the understanding we develop to<\/li>\r\n \t<li>experimentally analyze the security guarantees of existing RowHammer mitigation mechanisms (e.g., Target Row Refresh (TRR)),<\/li>\r\n \t<li>craft more effective RowHammer access patterns, and<\/li>\r\n \t<li>design completely secure, efficient, and low-cost RowHammer mitigation mechanisms.<\/li>\r\n<\/ol>\r\n[\/panel][panel header=\"High-Fidelity MR-Me: Lightweight Capture and Photorealistic Differentiable Rendering of Personalized Neural Animatable Avatars\"]\r\n<p style=\"margin-bottom: 0px\"><strong>ETH Zurich PI:<\/strong> Otmar Hilliges<\/p>\r\n<strong>Microsoft PI:<\/strong> <a href=\"https:\/\/www.linkedin.com\/in\/valentinjulien\/\">Julien Valentin<\/a>\r\n\r\nDigital capture of human bodies is a rapidly growing research area in computer vision and computer graphics that puts scenarios such as life-like Mixed Reality (MR) virtual-social interactions into reach, albeit not without overcoming several challenging research problems. A core question in this respect is how to faithfully transmit a virtual copy of oneself so that a remote collaborator may perceive the interaction as immersive and engaging.
To present a real alternative to face-to-face meetings, future AR\/VR systems will crucially depend on the following two core building blocks:\r\n<ol>\r\n \t<li>means to capture the 3D geometry and appearance (e.g., texture, lighting) of individuals with consumer-grade infrastructure (e.g., a single RGB-D camera) and with very little time and expertise and<\/li>\r\n \t<li>means to represent the captured geometry and appearance information in a fashion that is suitable for photorealistic rendering under fine-grained control over the underlying factors such as pose and facial expressions amongst others.<\/li>\r\n<\/ol>\r\nIn this project, we plan to develop novel methods to learn animatable representations of humans from \u2018cheap\u2019 data sources alone. Furthermore, we plan to extend our own recent work on animatable neural implicit surfaces, such that it can represent not only the geometry but also the appearance of subjects in high visual fidelity. Finally, we plan to study techniques to enforce geometric and temporal consistency in such methods to make them suitable for MR and other telepresence downstream applications.\r\n\r\n[\/panel][panel header=\"Intelligent Software Adaptation to Improve Wellbeing and Optimize Cognition using Ubiquitous Non-Contact Physiological and Behavioural Sensing\"]\r\n<p style=\"margin-bottom: 0px\"><strong>ETH Zurich PI:<\/strong> Christian Holz<\/p>\r\n<strong>Microsoft PIs:<\/strong> <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/damcduff\/\">Daniel McDuff<\/a>, <a href=\"https:\/\/www.linkedin.com\/in\/tadas-baltrusaitis-234b1234\/\">Tadas Baltrusaitis<\/a>\r\n\r\nThe <em>passive<\/em> measurement of cognitive stress and its impact on performance in cognitive tasks has a huge potential for human-computer interaction (HCI) and affective computing, including workload optimization or \u201cflow\u201d understanding for future of work productivity scenarios, remote learning, automated tutor systems, as well as 
stress monitoring, mental health, and telehealth applications more generally. When cognitive demands exceed resources, people experience stress and task performance degrades. In this project, we will develop intelligent software experiences that reduce workers\u2019 stress and optimize their cognitive resources. We will develop sensing models that capture the body\u2019s autonomic nervous system (\u201cfight or flight\u201d) responses to cognitive demands in real time using information from multiple physiological processes. These inputs will then help drive AI support that adapts to provide cognitive support while also maintaining autonomy (e.g., avoiding unnecessary and annoying interventions). Specifically, we will develop novel computer vision and signal processing approaches for measuring cardiovascular, respiratory, pupil\/ocular, and dermal changes using ubiquitous sensors. For desktop environments, we will develop, evaluate, and demonstrate our methods using non-contact sensing (the webcams built into PCs). For head-mounted displays, we will adapt our methods to utilize signals originating from the wearer\u2019s head using built-in headset sensors. In both cases, our developments will produce novel datasets, computational methods, and the results of in-situ evaluations in productivity scenarios. Using our novel methods, we will also investigate their implications for telehealth scenarios, which often contain cardiovascular and respiratory assessments.
We will develop scenarios that guide the user while assessing these metrics and visually present the remote physician with the results for examination.\r\n\r\n[\/panel][panel header=\"Learning Physics-Based Optimal Inference of Flow Tensor MRI\"]\r\n<p style=\"margin-bottom: 0px\"><strong>ETH Zurich PI:<\/strong> Sebastian Kozerke<\/p>\r\n<strong>Microsoft PI:<\/strong> <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mihansen\/\">Michael Hansen<\/a>\r\n\r\nCardiovascular Magnetic Resonance Imaging (MRI) has become a key imaging modality to diagnose, monitor and stratify patients suffering from a wide range of cardiovascular diseases. Using Flow MRI, time-resolved blood flow patterns can be quantified throughout the circulatory system, providing information on the interplay of anatomical and hemodynamic conditions in health and disease.\r\n\r\nToday, inference of Flow MRI data is based on data post-processing, which includes massive data reduction to yield metrics such as mean and peak flow, kinetic energy, and wall shear rates. As a consequence of the data reduction step, however, the wealth of information encoded in the data, including fundamental causal relations, is potentially missed. In addition, the dependency of the metrics on parameters of the measurement and image reconstruction process itself compromises the diagnostic yield and the reproducibility of the method, hence hampering further dissemination.\r\n\r\nHere we propose to develop and implement a computational framework for Flow Tensor MRI data synthesis to train physics-based neural networks for image reconstruction and inference of the complex interplay of anatomy, coherent and incoherent flows in the aorta in-vivo.
Using cloud-based, scalable computing resources, we will demonstrate that synthetically trained reconstruction and inference machines permit high-speed image reconstruction and inference to unravel complex structure-function relations using real-world in-vivo Flow Tensor MRI by exploiting the entirety of information contained in the data along with the information of the measurement process itself.\r\n\r\n[\/panel][panel header=\"Mixed Reality for Shared Autonomy\"]\r\n<p style=\"margin-bottom: 0px\"><strong>ETH Zurich PIs:<\/strong> Marco Tognon, Mike Allenspach, Nicholas Lawrence, Roland Siegwart<\/p>\r\n<strong>Microsoft PIs:<\/strong> <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jedelmer\/\">Jeffrey Delmerico<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/juannieto\/\">Juan Nieto<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\">Marc Pollefeys<\/a>\r\n\r\nOur objective is to exploit recent developments in MR to enhance human capabilities with robotic assistance. Robots offer mobility and power but are not capable of performing complex tasks in challenging environments such as construction, contact-based inspection, cleaning, and maintenance. On the other hand, humans have excellent higher-order reasoning, and skilled workers have the experience and training to adapt to new circumstances quickly and effectively. However, they lack mobility and power. We aim to reduce this limitation by empowering human operators with the assistance and the capabilities provided by a robot system. This requires a human-robot interface that fully leverages the capabilities of both the human operator and the robot system. In this project, we aim to explore the problem of shared autonomy for physical interaction tasks in shared physical workspaces.
We will explore how an operator can effectively command a robot system using a MR interface over a range of autonomy levels from low-level direct teleoperation to high-level task specification. We will develop methods for estimating the intent and comfort level of an operator to provide an intuitive and effective interface. Finally, we will explore how to pass information from the robot system back to the human operator for effective understanding of the robot\u2019s plans. We will prove the value of mixed reality interfaces by enhancing human capabilities with robot systems through effective, bilateral communication for a wide variety of complex tasks.\r\n\r\n[\/panel][panel header=\"The Last Bug: Empowering Electronic Design Automation to Improve Microarchitectural Security\"]\r\n<p style=\"margin-bottom: 0px\"><strong>ETH Zurich PI:<\/strong> Kaveh Razavi<\/p>\r\n<strong>Microsoft PI:<\/strong> <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/bokoepf\/\">Boris K\u00f6pf<\/a>\r\n\r\nThere is currently a large gap between the capabilities of Electronic Design Automation (EDA) tools and what is required to detect various classes of microarchitectural vulnerabilities pre-silicon. This project aims to bridge this gap by leveraging recent advances in software testing to produce the necessary knowledge and tools for effective hardware testing. Our driving hypothesis is that if we could provide crucial information about the privilege and domain of instructions and\/or data in the microarchitecture during simulation or emulation, then we can easily detect many classes of microarchitectural vulnerabilities. 
As an example, with the right test cases, we could detect Meltdown-type vulnerabilities since seemingly different variants all require an instruction that can access data from a different privilege domain.\r\n\r\n[\/panel][panel header=\"Understanding the Security of Probabilistic Data Structures Under Adversarial Conditions\"]\r\n<p style=\"margin-bottom: 0px\"><strong>ETH Zurich PI:<\/strong> Kenneth G. Paterson<\/p>\r\n<strong>Microsoft PI:<\/strong> <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/bal\/\">Brian A. LaMacchia<\/a>\r\n\r\nProbabilistic data structures (PDS) are becoming extremely widely used in practice in the era of \u201cbig data\u201d. They are used to process large data sets, often in a streaming setting, and to provide approximate answers to basic data exploration questions such as \u201cHas a particular bit-string in this data stream been encountered before?\u201d or \u201cHow many distinct bit-strings are there in this data set?\u201d. They are increasingly supported in systems like Microsoft Azure Data Explorer, Google Big Query, Apache Spark, Presto and Redis, and there is an active research community working on PDS within computer science. Generally, PDS are designed to perform well \u201cin the average case\u201d, where the inputs are selected independently at random from some distribution. This we refer to as the non-adversarial setting. However, they are increasingly being used in adversarial settings, where the inputs can be chosen by an adversary interested in causing the PDS to perform badly in some way, e.g. creating many false positives for a Bloom filter, or underestimating the set cardinality for a cardinality estimator. In recent work, we performed an in-depth analysis of the HyperLogLog (HLL) PDS and its security under adversarial input. 
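As a toy illustration of the adversarial setting described above, the following minimal Bloom filter sketch (our own illustration, not an artifact of this project; the filter parameters are deliberately tiny so the attack succeeds quickly) shows how an adversary who knows the hash functions can construct a false positive:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions per item in an m-bit array.
    Illustrative toy only; real PDS implementations differ in many details."""
    def __init__(self, m=64, k=2):
        self.m, self.k, self.bits = m, k, bytearray(m)

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def contains(self, item):  # may answer True for items never added
        return all(self.bits[p] for p in self._positions(item))

bf = BloomFilter()
bf.add("alice")
print(bf.contains("alice"))   # True

# An adversary who knows the hash functions can search for a never-inserted
# string whose positions are already all set, forcing a false positive:
target = set(bf._positions("alice"))
decoy = next(s for s in (f"x{i}" for i in range(10**6))
             if set(bf._positions(s)) <= target)
print(bf.contains(decoy))     # True, although `decoy` was never added
```

In the non-adversarial setting such collisions occur only at the designed false-positive rate; the point of the adversarial analysis is that chosen inputs like `decoy` can be manufactured at will once the hash functions are known.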
The proposed research will extend our prior work in three directions:\r\n<ol>\r\n \t<li>address the mergeability problem for HLL;<\/li>\r\n \t<li>extend our simulation-based framework for studying the correctness and security of HLL to other PDS in adversarial settings;<\/li>\r\n \t<li>study the specific case of cascaded Bloom filters, which have been proposed for use in CRLite, a privacy-preserving system for managing certificate revocation for the web PKI.<\/li>\r\n<\/ol>\r\n[\/panel][\/accordion]\r\n<div style=\"height: 40px\"><\/div>\r\n<h3>EPFL projects (2019-2021)<\/h3>\r\n[accordion]\r\n[panel header=\"Hands in Contact for Augmented Reality\"]\r\n<p style=\"margin-bottom: 0px\"><strong>EPFL PIs:<\/strong> <a href=\"https:\/\/people.epfl.ch\/pascal.fua\" target=\"_blank\" rel=\"noopener\">Pascal Fua<\/a>, <a href=\"https:\/\/people.epfl.ch\/Mathieu.Salzmann\" target=\"_blank\" rel=\"noopener\">Mathieu Salzmann<\/a><\/p>\r\n<p style=\"margin-bottom: 0px\"><strong>Microsoft PIs:<\/strong> <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/butekin\/\">Bugra Tekin<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/sudipsin\/\">Sudipta Sinha<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/febogo\/\">Federica Bogo<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\">Marc Pollefeys<\/a><\/p>\r\n<strong>PhD Student: <\/strong><a href=\"https:\/\/www.linkedin.com\/in\/mengshi-qi-684abb97\/\">Mengshi Qi<\/a>\r\n\r\nIn recent years, there has been tremendous progress in camera-based 6D object pose, hand pose and human 3D pose estimation. All of these can now be done in real time, but not yet to the level of accuracy required to properly capture how people interact with each other and with objects, which is a crucial component of modeling the world in which we live.
For example, when someone grasps an object, types on a keyboard, or shakes someone else\u2019s hand, the position of their fingers with respect to what they are interacting with must be precisely recovered for the resulting models to be used by AR devices, such as the HoloLens device or consumer-level video see-through AR ones. This remains a challenge, especially given the fact that hands are often severely occluded in the egocentric views that are the norm in AR. We will, therefore, work on accurately capturing the interaction between hands and objects they touch and manipulate. At the heart of it, will be the precise modeling of contact points and the resulting physical forces between interacting hands and objects. This is essential for two reasons. First, objects in contact exert forces on each other; their pose and motion can only be accurately captured and understood if reaction forces at contact points and areas are modeled jointly. Second, touch and touch-force devices, such as keyboards and touch-screens are the most common human-computer interfaces, and by sensing contact and contact forces purely visually, every-day objects could be turned into tangible interfaces, that react as if they were equipped with touch-sensitive electronics. For instance, a soft cushion could become a non-intrusive input device that, unlike virtual mid-air menus, provides natural force feedback. 
In this talk, I will present some of our preliminary results and discuss our research agenda for the year to come.\r\n[\/panel]\r\n[panel header=\"Monitoring, Modelling, and Modifying Dietary Habits and Nutrition Based on Large-Scale Digital Traces\"]\r\n<p style=\"margin-bottom: 0px\"><strong>EPFL PIs:<\/strong> <a href=\"https:\/\/people.epfl.ch\/robert.west\" target=\"_blank\" rel=\"noopener\">Robert West<\/a>, <a href=\"https:\/\/www3.unifr.ch\/med\/de\/section\/staff\/prof\/people\/299312\/06a29\" target=\"_blank\" rel=\"noopener\">Arnaud Chiolero<\/a><\/p>\r\n<p style=\"margin-bottom: 0px\"><strong>Microsoft PIs:<\/strong> <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/ryenw\/\">Ryen White<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/horvitz\/\">Eric Horvitz<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/emrek\/\">Emre Kiciman<\/a><\/p>\r\n<strong>PhD Student: <\/strong><a href=\"https:\/\/www.linkedin.com\/in\/kristinagligoric\/\">Kristina Gligoric<\/a>\r\n\r\nThe overall goal of this project is to develop methods for monitoring, modeling, and modifying dietary habits and nutrition based on large-scale digital traces. We will leverage data from both EPFL and Microsoft, to shed light on dietary habits from different angles and at different scales: Our team has access to logs of food purchases made on the EPFL campus with the badges carried by all EPFL members. Via the Microsoft collaborators involved, we have access to Web usage logs from IE\/Edge and Bing, and via MSR\u2019s subscription to the Twitter firehose, we gain full access to a major social media platform. Our agenda broadly decomposes into three sets of research questions: (1) Monitoring and modeling: How to mine digital traces for spatiotemporal variation of dietary habits? What nutritional patterns emerge? And how do they relate to, and expand, the current state of research in nutrition? 
(2) Quantifying and correcting biases: The log data does not directly capture food consumption, but provides indirect proxies; these are likely to be affected by data biases, and correcting for those biases will be an integral part of this project. (3) Modifying dietary habits: Our lab is co-organizing an annual EPFL-wide event called the Act4Change challenge, whose goal is to foster healthy and sustainable habits on the EPFL campus. Our close involvement with Act4Change will allow us to validate our methods and findings on the ground via surveys and A\/B tests. Applications of our work will include new methods for conducting population nutrition monitoring, recommending better-personalized eating practices, optimizing food offerings, and minimizing food waste.\r\n[\/panel]\r\n[panel header=\"Photonic Integrated Multi-Wavelength Sources for Data Centers\"]\r\n<p style=\"margin-bottom: 0px\"><strong>EPFL PI:<\/strong> <a href=\"https:\/\/people.epfl.ch\/tobias.kippenberg\" target=\"_blank\" rel=\"noopener\">Tobias J. Kippenberg<\/a><\/p>\r\n<p style=\"margin-bottom: 0px\"><strong>Microsoft PI:<\/strong> <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/hiballan\/\">Hitesh Ballani<\/a><\/p>\r\n<strong>PhD Student: <\/strong><a href=\"https:\/\/www.linkedin.com\/in\/arslansajid1\/\">Arslan Raja<\/a>\r\n\r\nThe substantial increase in optical data transmission, and cloud computing, has fueled research into new technologies that can increase communication capacity. Optical communication through fiber, which traditionally has been used for long haul fiber optical communication, is now also employed for short haul communication, even with data-centers. In a similar vein, the increasing capacity crunch in optical fibers, driven in particular by video streaming, can only be met by two degrees of freedom: spatial and wavelength division multiplexing. 
Spatial multiplexing refers to the use of optical fibers that have multiple cores, allowing the same carrier wavelength to be transmitted over multiple cores. Wavelength division multiplexing (WDM, or dense WDM, DWDM) refers to the use of multiple optical carriers on the same fiber. A key advantage of WDM is the ability to increase line rates on existing legacy networks, without requirements to change existing SMF28 single-mode fibers. WDM is also expected to be employed in data centers. Yet to date, WDM implementation within datacenters faces a key challenge: a CMOS-compatible, power-efficient source of multiple wavelengths. Existing solutions, such as multi-laser chips based on InP (as developed by Infinera), cannot be readily scaled to a larger number of carriers. As a result, the prevalent solution today is to use a bank of multiple, individual laser modules. This approach is not viable for datacenters due to space and power constraints. Over the past years, a new technology developed at EPFL has rapidly matured \u2013 microresonator frequency combs, or microcombs \u2013 that satisfies these requirements. The potential of this new technology in telecommunications has recently been demonstrated with the use of microcombs for massively parallel coherent communication on the receiver and transmitter side. Yet to date the use of such microcombs in data centers has not been addressed.\r\n<ol>\r\n \t<li>Kippenberg, T. J., Gaeta, A. L., Lipson, M. &amp; Gorodetsky, M. L. Dissipative Kerr solitons in optical microresonators. Science 361, eaan8083 (2018).<\/li>\r\n \t<li>Brasch, V. et al. Photonic chip\u2013based optical frequency comb using soliton Cherenkov radiation. Science aad4811 (2015). doi:10.1126\/science.aad4811<\/li>\r\n \t<li>Marin-Palomo, P. et al. Microresonator-based solitons for massively parallel coherent optical communications. Nature 546, 274\u2013279 (2017).<\/li>\r\n \t<li>Trocha, P. et al. 
Ultrafast optical ranging using microresonator soliton frequency combs. Science 359, 887\u2013891 (2018).<\/li>\r\n<\/ol>\r\n[\/panel]\r\n[panel header=\"TTL-MSR: Taming Tail Latency for Microsecond-scale RPCs\"]\r\n<p style=\"margin-bottom: 0px\"><strong>EPFL PIs:<\/strong> <a href=\"https:\/\/people.epfl.ch\/edouard.bugnion\" target=\"_blank\" rel=\"noopener\">Edouard Bugnion<\/a><\/p>\r\n<p style=\"margin-bottom: 0px\"><strong>Microsoft PIs:<\/strong> <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/irzha\/\">Irene Zhang<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/dports\/\">Dan Ports<\/a>, <a href=\"https:\/\/www.linkedin.com\/in\/marios-kogias-41815333\/?originalSubdomain=uk\">Marios Kogias<\/a><\/p>\r\n<p style=\"margin-bottom: 0px\"><strong>PhD Student: <\/strong><a href=\"https:\/\/people.epfl.ch\/konstantinos.prasopoulos\/?lang=en\">Konstantinos Prasopoulos<\/a><\/p>\r\nThe deployment of a web-scale application within a datacenter can comprise hundreds of software components, deployed on thousands of servers organized in multiple tiers and interconnected by commodity Ethernet switches. These versatile components communicate with each other via Remote Procedure Calls (RPCs), with the cost of an individual RPC service typically measured in microseconds. The end-user performance, availability and overall efficiency of the entire system are largely dependent on the efficient delivery and scheduling of these RPCs. Yet, these RPCs are ubiquitously deployed today on top of general-purpose transport protocols such as TCP. We propose to make RPCs first-class citizens of datacenter deployment. This requires revisiting the overall architecture, application API, and network protocols. Our research direction is based on a novel RPC-oriented protocol, R2P2, which separates control flow from data flow and provides in-network scheduling opportunities to tame tail latency.
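To see why request-level scheduling matters at this scale, the following toy simulation (our own sketch, not R2P2 itself; all parameters are hypothetical) compares the 99th-percentile latency of a single shared queue, the kind of policy that in-network scheduling can approximate, against blind random dispatch to per-worker queues:

```python
import heapq
import random

def simulate(num_workers=8, num_reqs=20_000, load=0.7, policy="single", seed=1):
    """Toy tail-latency model for microsecond-scale RPCs.
    'single' - one shared queue; the earliest-free worker takes the next request
    'random' - each request is assigned to a random worker's private queue
    Poisson arrivals, exponential service times with mean 1 (microsecond)."""
    rng = random.Random(seed)
    arrival_rate = load * num_workers          # requests per microsecond
    t, arrivals = 0.0, []
    for _ in range(num_reqs):
        t += rng.expovariate(arrival_rate)
        arrivals.append(t)
    service = [rng.expovariate(1.0) for _ in range(num_reqs)]

    latencies = []
    free = [0.0] * num_workers                 # time at which each worker is free
    if policy == "single":
        heapq.heapify(free)
        for a, s in zip(arrivals, service):
            w = heapq.heappop(free)            # earliest-available worker
            start = max(a, w)
            heapq.heappush(free, start + s)
            latencies.append(start + s - a)
    else:
        for a, s in zip(arrivals, service):
            w = rng.randrange(num_workers)     # blind per-worker dispatch
            start = max(a, free[w])
            free[w] = start + s
            latencies.append(start + s - a)
    latencies.sort()
    return latencies[int(0.99 * num_reqs)]     # 99th-percentile latency

print(simulate(policy="single"), simulate(policy="random"))
```

Under the same arrival and service sequences, the shared-queue tail is markedly lower than the random-dispatch tail, which is the classic argument for pushing scheduling decisions below the transport layer rather than letting requests queue blindly behind slow peers.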
We are also building the tools that are necessary to scientifically evaluate microsecond-scale services.\r\n[\/panel]\r\n[\/accordion]\r\n<div style=\"height: 40px\"><\/div>\r\n<h3>ETH Zurich projects (2019-2021)<\/h3>\r\n[accordion]\r\n[panel header=\"A Modular Approach for Lifelong Mapping from End-User Data\"]\r\n<p style=\"margin-bottom: 0px\"><strong>ETH Zurich PIs:<\/strong> <a href=\"https:\/\/asl.ethz.ch\/the-lab\/people\/person-detail.html?persid=29981\" target=\"_blank\" rel=\"noopener\">Roland Siegwart<\/a>, <a href=\"https:\/\/n.ethz.ch\/~cesarc\/\" target=\"_blank\" rel=\"noopener\">Cesar Cadena<\/a><\/p>\r\n<p style=\"margin-bottom: 0px\"><strong>Microsoft PIs:<\/strong> <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/joschonb\/\">Johannes Sch\u00f6nberger<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\">Marc Pollefeys<\/a><\/p>\r\n<strong>PhD Student: <\/strong><a href=\"https:\/\/www.linkedin.com\/in\/lukas-schmid-9b9711179\/\">Lukas Schmid<\/a>\r\n\r\nAR\/VR allow new and innovative ways of visualizing information and provide a very intuitive interface for interaction. At their core, they rely only on a camera and inertial measurement unit (IMU) setup or a stereo-vision setup to provide the necessary data, either of which are readily available on most commercial mobile devices. Early adoptions of this technology have already been deployed in the real estate business, sports, gaming, retail, tourism, transportation and many other fields. The current technologies in visual-aided motion estimation and mapping on mobile devices have three main requirements to produce highly accurate 3D metric reconstructions: first, an accurate spatial and temporal calibration of the sensor suite, a procedure typically carried out with the help of external infrastructure, like calibration markers, and by following a set of predefined movements; and second, well-lit, textured environments and feature-rich, smooth trajectories.
Third, they require the continuous and reliable operation of all sensors involved. This project aims to relax these requirements to enable continuous and robust lifelong mapping on end-user mobile devices. Thus, the specific objectives of this work are: 1. Formalize a modular and adaptable multi-modal sensor fusion framework for online map generation; 2. Improve the robustness of mapping and motion estimation by exploiting high-level semantic features; 3. Develop techniques for automatic detection and execution of sensor calibration in the wild. A modular SLAM (simultaneous localization and mapping) pipeline that can exploit all available sensing modalities can overcome the individual limitations of each sensor and increase the overall robustness of the estimation. Such an information-rich map representation allows us to leverage recent advances in semantic scene understanding, providing an abstraction from low-level geometric features - which are fragile to noise, sensing conditions and small changes in the environment - to higher-level semantic features that are robust against these effects.
Using this complete map representation, we will explore new ways to detect miscalibrations and sensor failures, so that the SLAM process can be adapted online without the need for explicit user intervention.\r\n[\/panel]\r\n[panel header=\"Automatic Recipe Generation for ML.NET Pipelines\"]\r\n<p style=\"margin-bottom: 0px\"><strong>ETH Zurich PI:<\/strong> <a href=\"https:\/\/inf.ethz.ch\/people\/people-atoz\/person-detail.zhang.html\" target=\"_blank\" rel=\"noopener\">Ce Zhang<\/a><\/p>\r\n<p style=\"margin-bottom: 0px\"><strong>Microsoft PI:<\/strong> <a href=\"https:\/\/interesaaat.github.io\/\" target=\"_blank\" rel=\"noopener\">Matteo Interlandi<\/a><\/p>\r\n<strong>PhD Student: <\/strong><a href=\"https:\/\/www.linkedin.com\/in\/bojankarlas\/\">Bojan Karla\u0161<\/a>\r\n\r\nThe goal of this project is to mine ML.NET historical data, such as user telemetry and logs, to understand how ML.NET transformations and learners are used, and eventually to use this knowledge to automatically provide suggestions to data scientists using ML.NET.\r\nSuggestions can take the form of: better or additional recipes for unexplored tasks (e.g., neural networks); auto-completion suggestions for pipelines authored directly in, for example, .NET or Python; and automatic generation of parameters and sweep strategies optimal for the task at hand. We will try to develop a solution that is extensible such that, if new tasks, algorithms, etc. are added to the library, suggestions will eventually be upgraded as well.
Additionally, the tool will have to interface with ML.NET and make it easy to add new recipes coming either from users or from the log-mining tool.\r\n\r\n[\/panel]\r\n[panel header=\"Interaction Capture for Mixed Reality\"]\r\n<p style=\"margin-bottom: 0px\"><strong>ETH Zurich PI:<\/strong> <a href=\"https:\/\/inf.ethz.ch\/people\/person-detail.MjYyNzgw.TGlzdC8zMDQsLTg3NDc3NjI0MQ==.html\" target=\"_blank\" rel=\"noopener\">Siyu Tang<\/a><\/p>\r\n<p style=\"margin-bottom: 0px\"><strong>Microsoft PIs:<\/strong> <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\">Marc Pollefeys<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/febogo\/\">Federica Bogo<\/a><\/p>\r\n<strong>PhD Student:\u00a0<\/strong><a href=\"https:\/\/inf.ethz.ch\/people\/people-atoz\/person-detail.MjQyOTY2.TGlzdC8zMDQsLTIxNDE4MTU0NjA=.html\">Siwei Zhang<\/a>\r\n\r\nHumans are social beings and frequently interact with one another, e.g. spending a large amount of their time socially engaged, working in teams, or simply being part of a crowd. Understanding human interaction from visual input is an important aspect of visual cognition and key to many applications including assistive robotics, human-computer interaction and AR\/VR. Despite rapid progress in estimating 3D pose and shape of a single person from RGB images, capturing and modelling human interactions is rather poorly studied in the literature. Particularly for first-person-view settings, the problem has drawn little attention from the computer vision community. We argue that it is essential for augmented reality glasses, e.g. Microsoft HoloLens, to capture and model the interactions between the camera wearer and others, as the interaction between humans characterises how they move, behave and perform tasks in a collaborative setting.\r\n\r\nIn this project, we aim to understand how to recognise and predict the interactions between humans under the first-person view setting.
To that end, we will create a 3D human-human interaction dataset where the goal is to capture rich and complex interaction signals, including body and hand poses, facial expressions and gaze directions, using Microsoft Kinect and HoloLens. We will develop models that can recognise the dynamics of human interactions and even predict the motion and activities of the interacting humans. We believe such models will facilitate various downstream applications for augmented reality glasses, e.g. Microsoft HoloLens.\r\n[\/panel]\r\n[panel header=\"Project Altair: Infrared Vision and AI Decision-Making for Longer Drone Flights\"]\r\n<p style=\"margin-bottom: 0px\"><strong>ETH Zurich PIs:<\/strong> <a href=\"https:\/\/asl.ethz.ch\/the-lab\/people\/person-detail.html?persid=29981\" target=\"_blank\" rel=\"noopener\">Roland Siegwart<\/a>, <a href=\"https:\/\/asl.ethz.ch\/the-lab\/people\/person-detail.html?persid=244173\" target=\"_blank\" rel=\"noopener\">Nicholas Lawrance<\/a>, <a href=\"https:\/\/asl.ethz.ch\/the-lab\/people\/person-detail.MjQ0MTg0.TGlzdC8xNTg0LDEyMDExMzk5Mjg=.html\" target=\"_blank\" rel=\"noopener\">Jen Jen Chung<\/a><\/p>\r\n<p style=\"margin-bottom: 0px\"><strong>Microsoft PIs:<\/strong> <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/akolobov\/\">Andrey Kolobov<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/dedey\/\">Debadeepta Dey<\/a><\/p>\r\n<strong>PhD Student: <\/strong><a href=\"https:\/\/asl.ethz.ch\/the-lab\/people\/person-detail.MTgwNDE5.TGlzdC8yMDMwLDEyMDExMzk5Mjg=.html\" target=\"_blank\" rel=\"noopener\">Florian Achermann<\/a>\r\n\r\nA major factor restricting the utility of UAVs is the amount of energy aboard, which limits the duration of their flights. Birds face largely the same problem, but they are adept at using their vision to aid in spotting -- and exploiting -- opportunities for extracting extra energy from the air around them.
Project Altair aims to develop infrared (IR) sensing techniques for detecting, mapping and exploiting naturally occurring atmospheric phenomena called thermals for extending the flight endurance of fixed-wing UAVs. In this presentation, we will introduce our vision and goals for this project.\r\n[\/panel]\r\n[panel header=\"QIRO - A Quantum Intermediate Representation for Program Optimisation\"]\r\n<p style=\"margin-bottom: 0px\"><strong>ETH Zurich PIs:<\/strong> <a href=\"https:\/\/inf.ethz.ch\/people\/people-atoz\/person-detail.hoefler.html\" target=\"_blank\" rel=\"noopener\">Torsten Hoefler<\/a>, <a href=\"https:\/\/itp.phys.ethz.ch\/people\/person-detail.html?persid=59275\" target=\"_blank\" rel=\"noopener\">Renato Renner<\/a><\/p>\r\n<p style=\"margin-bottom: 0px\"><strong>Microsoft PIs:<\/strong> <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mtroyer\/\">Matthias Troyer<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/martinro\/\">Martin Roetteler<\/a><\/p>\r\n<strong>PhD Student: <\/strong><a href=\"https:\/\/inf.ethz.ch\/people\/person-detail.MjEyMDA3.TGlzdC8zMDQsLTg3NDc3NjI0MQ==.html\" target=\"_blank\" rel=\"noopener\">Niels Gleinig<\/a>\r\n\r\nQIRO will establish a new intermediate representation for compilation systems on quantum computers. Since quantum computation is still emerging, I will provide an introduction to the general concepts of quantum computation and a brief discussion of its strengths and weaknesses from a high-performance computing perspective. This talk is tailored for a computer science audience with basic (popular-science) or no background in quantum mechanics and will focus on the computational aspects. I will also discuss systems aspects of quantum computers and how to map quantum algorithms to their high-level architecture.
I will close with the principles of practical implementation of quantum computers and outline the project.\r\n[\/panel]\r\n[panel header=\"Scalable Active Reward Learning for Reinforcement Learning\"]\r\n<p style=\"margin-bottom: 0px\"><strong>ETH Zurich PI:<\/strong> <a href=\"https:\/\/inf.ethz.ch\/people\/person-detail.krause.html\" target=\"_blank\" rel=\"noopener\">Andreas Krause<\/a><\/p>\r\n<p style=\"margin-bottom: 0px\"><strong>Microsoft PI:<\/strong> <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/kahofman\/\">Katja Hofmann<\/a><\/p>\r\n<strong>PhD Student: <\/strong><a href=\"https:\/\/www.linkedin.com\/in\/davnlin\/\">David Lindner<\/a>\r\n\r\nReinforcement learning (RL) is a promising paradigm in machine learning and has gained considerable attention in recent years, partly because of its successful application in previously unsolved challenging games like Go and Atari. While these are impressive results, applying reinforcement learning in most other domains, e.g. virtual personal assistants, self-driving cars or robotics, remains challenging. One key reason for this is the difficulty of specifying the reward function a reinforcement learning agent is intended to optimize. For instance, in a virtual personal assistant, the reward function might correspond to the user\u2019s satisfaction with the assistant\u2019s behavior and is difficult to specify as a function of observations (e.g. sensory information) available to the system. In such applications, an alternative to specifying the reward function is to actually query the user for the reward. This, however, is only feasible if the number of queries to the user is limited and the user\u2019s response can be provided in a natural way such that the system\u2019s queries are non-irritating.
Similar problems arise in other application domains such as robotics in which, for instance, the true reward can only be obtained by actually deploying the robot but an approximation to the reward can be computed by a simulator. In this case, it is important to optimize the agent\u2019s behavior while simultaneously minimizing the number of costly deployments. This project\u2019s aim is to develop algorithms for these types of problems via scalable active reward learning for reinforcement learning. The project\u2019s focus is on scalability in terms of computational complexity (to scale to large real-world problems) and sample complexity (to minimize the number of costly queries).\r\n[\/panel]\r\n[panel header=\"Skilled Assistive-Care Robots through Immersive Mixed-Reality Telemanipulation\"]\r\n<p style=\"margin-bottom: 0px\"><strong>ETH Zurich PIs:<\/strong> <a href=\"http:\/\/crl.ethz.ch\/people\/coros\/index.html\" target=\"_blank\" rel=\"noopener\">Stelian Coros<\/a>, <a href=\"https:\/\/people.inf.ethz.ch\/poranner\/\" target=\"_blank\" rel=\"noopener\">Roi Poranne<\/a><\/p>\r\n<p style=\"margin-bottom: 0px\"><strong>Microsoft PIs:<\/strong> <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/febogo\/\">Federica Bogo<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/butekin\/\">Bugra Tekin<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\">Marc Pollefeys<\/a><\/p>\r\n<strong>PhD Students:<\/strong> <a href=\"https:\/\/www.linkedin.com\/in\/simon-zimmermann-63621b99\/\">Simon Zimmermann<\/a>\r\n\r\nWith this project, we aim to accelerate the development of intelligent robots that can assist those in need with a variety of everyday tasks. People suffering from physical impairments, for example, often need help dressing or brushing their own hair. Skilled robotic assistants would allow these persons to live an independent lifestyle. 
Even such seemingly simple tasks, however, require complex manipulation of physical objects, advanced motion planning capabilities, as well as close interactions with human subjects. We believe the key to robots being able to undertake such societally important functions is learning from demonstration. The fundamental research question is, therefore, how can we enable human operators to seamlessly teach a robot how to perform complex tasks? The answer, we argue, lies in immersive telemanipulation. More specifically, we are inspired by the vision of James Cameron\u2019s Avatar, where humans are endowed with alternative embodiments. In such a setting, the human\u2019s intent must be seamlessly mapped to the motions of a robot as the human operator becomes completely immersed in the environment the robot operates in. To achieve this ambitious vision, many technologies must come together: mixed reality as the medium for robot-human communication, perception and action recognition to detect the intent of both the human operator and the human patient, motion retargeting techniques to map the actions of the human to the robot\u2019s motions, and physics-based models to enable the robot to predict and understand the implications of its actions.\r\n[\/panel]\r\n[panel header=\"The \u2018Seen but Unnoticed\u2019 Vocabulary of Natural Touch: Revolutionizing Device Interaction via Embodied Sensing\"]\r\n<p style=\"margin-bottom: 0px\"><strong>ETH Zurich PI:<\/strong> <a href=\"https:\/\/inf.ethz.ch\/people\/people-atoz\/person-detail.holz.html\" target=\"_blank\" rel=\"noopener\">Christian Holz<\/a><\/p>\r\n<p style=\"margin-bottom: 0px\"><strong>Microsoft PI:<\/strong>\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/kenh\/\">Ken Hinckley<\/a><\/p>\r\n<strong>PhD Student: <\/strong><a href=\"https:\/\/www.linkedin.com\/in\/hugo-romat-3a64b188\/\" target=\"_blank\" rel=\"noopener\">Hugo Romat<\/a>\r\n\r\nOver the past dozen years, touch input - seemingly 
well-understood - has become the predominant means of interacting with devices such as smartphones, tablets, and large displays. Yet we argue that much remains unknown - in the form of a seen but unnoticed vocabulary of natural touch - that suggests tremendous untapped potential. For example, touchscreens remain largely ignorant of the human activity, manual behavior, and context-of-use beyond the moment of finger-contact with the screen itself. In a sense, status quo interactions are trapped in a flatland of touch, while systems remain oblivious to the vibrant world of human behavior, activity, and movement that surrounds them. We posit that an entire vocabulary of naturally-occurring gestures - both in terms of the activity of the hands, as well as the subtle corresponding motion and compensatory movements of the devices themselves - exists in plain sight. Our intended outcome is to create a conceptual understanding as well as a deployable interactive system, both of which blend the naturally-occurring gestures - interactions users embody through their actions - with the explicit input of traditional touch operation.\r\n\r\n[\/panel]\r\n[panel header=\"Tiered NVM Designs, Software-NVM Interfaces, and Isolation Support\"]\r\n<p style=\"margin-bottom: 0px\"><strong>ETH Zurich PI:<\/strong> <a href=\"https:\/\/ee.ethz.ch\/the-department\/people-a-z\/person-detail.MjIyOTQ5.TGlzdC8zMjc5LC0xNjUwNTg5ODIw.html\" target=\"_blank\" rel=\"noopener\">Onur Mutlu<\/a><\/p>\r\n<p style=\"margin-bottom: 0px\"><strong>Microsoft PIs:<\/strong> <a href=\"https:\/\/www.linkedin.com\/in\/kvaid\/\">Kushagra Vaid<\/a>, <a href=\"https:\/\/www.linkedin.com\/in\/terry-grunzke-b235a95\/\" target=\"_blank\" rel=\"noopener\">Terry Grunzke<\/a>, <a href=\"https:\/\/www.linkedin.com\/in\/derek-chiou-58381\/\">Derek Chiou<\/a><\/p>\r\n<strong>PhD Student: <\/strong><a href=\"https:\/\/www.linkedin.com\/in\/loisorosa\/\" target=\"_blank\" rel=\"noopener\">Lois Orosa<\/a>\r\n\r\nThis project
examines the architecture and management of next-generation data center storage devices in the context of realistic data-intensive workloads. The aim is to investigate novel techniques that can greatly improve performance, cost, and efficiency in real-world systems running real-world applications. By breaking the barriers between applications and devices, the software can much more effectively and efficiently manage the underlying storage devices, which consist of (potentially different types of) flash memory, emerging SCM (storage class memory) technologies, and (potentially different types of) DRAM. We observe a disconnect in the communication between applications\/software and the NVM devices: the interfaces and designs we currently have enable little communication of useful information from the application\/software level (including the kernel) to the NVM devices, and vice versa. This causes significant performance and efficiency loss, and likely fuels higher \"managed\" storage device costs, because applications cannot even communicate their requirements to the devices.
We aim to fundamentally examine the software-NVM interfaces as well as designs for the underlying storage devices to minimize the disconnect in communication and empower applications and system software to more effectively manage the underlying devices, optimizing important system-level metrics that are of interest to the system designer or the application (at different points in time of execution).\r\n\r\n[\/panel][\/accordion]\r\n<div style=\"height: 40px\"><\/div>\r\n<h3>EPFL projects (2017-2018)<\/h3>\r\n[accordion]\r\n[panel header=\"Coltrain: Co-located Deep Learning Training and Inference\"]\r\n<p style=\"margin-bottom: 0px\"><strong>EPFL PIs:<\/strong> Babak Falsafi, Martin Jaggi<\/p>\r\n<p style=\"margin-bottom: 0px\"><strong>Microsoft Co-PI:<\/strong> Eric Chung<\/p>\r\n<strong>PhD Students:\u00a0<\/strong>\r\n\r\nDeep Neural Networks (DNNs) have emerged as algorithms of choice for many prominent machine learning tasks, including image analysis and speech recognition. In datacenters, DNNs are trained on massive datasets to improve prediction accuracy. While the computational demands for performing online inference in an already trained DNN can be furnished by commodity servers, training DNNs often requires computational density that is orders of magnitude higher than that provided by modern servers. As such, operators often use dedicated clusters of GPUs for training DNNs. Unfortunately, dedicated GPU clusters introduce significant additional acquisition costs, break the continuity and homogeneity of datacenters, and are inherently not scalable. FPGAs are appearing in server nodes either as daughter cards (e.g., Catapult) or coherent sockets (e.g., Intel HARP) providing a great opportunity to co-locate inference and training on the same platform. While these designs enable natural continuity for platforms, co-locating inference and training on a single node faces a number of key challenges. First, FPGAs inherently suffer from low computational density. 
Second, conventional training algorithms do not scale due to inherent high communication requirements. Finally, co-location may lead to contention requiring mechanisms to prioritize inference over training. In this project, we will address these fundamental challenges in DNN inference\/training co-location on servers with integrated FPGAs. Our goals are:\r\n<ul>\r\n \t<li>Redesign training and inference algorithms to take advantage of DNNs inherent tolerance for low precision operations.<\/li>\r\n \t<li>Identify good candidates for hard-logic blocks for the next generations of FPGAs.<\/li>\r\n \t<li>Redesign DNN training algorithms to aggressively approximate and compress intermediate results, to target communication bottlenecks and scale the training of single networks to an arbitrary number of nodes.<\/li>\r\n \t<li>Implement FPGA-based load balancing techniques in order to provide latency guarantees for inference tasks under heavy loads and enable the use of idle accelerator cycles to train networks when operating under lower loads.<\/li>\r\n<\/ul>\r\n[\/panel]\r\n[panel header=\"Fast and Accurate Algorithms for Clustering\"]\r\n<p style=\"margin-bottom: 0px\"><strong>EPFL PIs:<\/strong> Michael Kapralov, Ola Svensson<\/p>\r\n<p style=\"margin-bottom: 0px\"><strong>Microsoft Co-PIs:<\/strong> Yuval Peres, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/nikdev\/\">Nikhil Devanur<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/sebubeck\/\">Sebastien Bubeck<\/a><\/p>\r\n<strong>PhD Students:\u00a0<\/strong>\r\n\r\nThe task of grouping data according to similarity is a basic computational task with numerous applications. The right notion of similarity often depends on the application and different measures yield different algorithmic problems. 
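As a point of reference for the clustering problems just described, here is a minimal one-dimensional sketch of Lloyd's algorithm, the classical heuristic for k-means. It is a textbook baseline for illustration only, not one of the faster or more accurate algorithms this project develops.

```python
# Minimal 1-D Lloyd's algorithm for k-means: a classical baseline,
# shown only to make the clustering objective concrete.

def kmeans_1d(points, centers, iters=10):
    for _ in range(iters):
        # Assignment step: attach each point to its nearest center.
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)), key=lambda j: abs(p - centers[j]))
            clusters[i].append(p)
        # Update step: move each center to the mean of its cluster
        # (an empty cluster keeps its previous center).
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

print(kmeans_1d([1.0, 2.0, 10.0, 11.0], centers=[0.0, 5.0]))  # [1.5, 10.5]
```

Each iteration touches every point, which is exactly the kind of cost that streaming and MapReduce-style algorithms must avoid or approximate at scale.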
The goal of this project is to design faster and more accurate algorithms for fundamental clustering problems such as the k-means problem, correlation clustering and hierarchical clustering. We propose to perform a fine grained study of these problems and design algorithms that achieve optimal trade-offs between approximation quality, runtime and space\/communication complexity, making our algorithms well-suited for modern data models such as streaming and MapReduce.\r\n[\/panel]\r\n[panel header=\"From Companion Drones to Personal Trainers\"]\r\n<p style=\"margin-bottom: 0px\"><strong>EPFL PIs:<\/strong> <a href=\"https:\/\/people.epfl.ch\/pascal.fua\" target=\"_blank\" rel=\"noopener\">Pascal Fua<\/a>, <a href=\"https:\/\/people.epfl.ch\/Mathieu.Salzmann\" target=\"_blank\" rel=\"noopener\">Mathieu Salzmann<\/a><\/p>\r\n<p style=\"margin-bottom: 0px\"><strong>Microsoft Co-PIs:<\/strong><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/dedey\/\"> Debadeepta Dey<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/akapoor\/\">Ashish Kapoor<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/sudipsin\/\">Sudipta Sinha<\/a><\/p>\r\n<strong>PhD Students:\u00a0<\/strong>\r\n\r\nSeveral companies are now launching drones that autonomously follow and film their owners, often by tracking a GPS device they are carrying. This holds the promise to fundamentally change the way in which drones are used by allowing them to bring back videos of their owners performing activities, such as playing sports, unimpeded by the need to control the drone. In this project, we propose to go one step further and turn the drone into a personal trainer that will not only film but also analyse the video sequences and provide advice on how to improve performance. For example, a golfer could be followed by such a drone that will detect when he swings and offer advice on how to improve the motion. 
Similarly, a skier coming down a slope could be given advice on how to better turn and carve. In short, the drone would replace the GoPro-style action cameras that many people now carry when exercising. Instead of recording what they see, it would film them and augment the resulting sequences with useful advice. To make this solution as lightweight as possible, we will strive to achieve this goal using the on-board camera as the sole sensor and free the user from the need to carry a special device that the drone locks onto. This will require:\r\n<ul>\r\n \t<li>Detecting the subject in the video sequences acquired by the drone so as to keep him in the middle of its field of view. This must be done in real-time and integrated into the drone\u2019s control system.<\/li>\r\n \t<li>Recovering the subject\u2019s 3D pose as he moves from the drone\u2019s videos. This can be done with a slight delay since the critique only has to be provided once the motion has been performed.<\/li>\r\n \t<li>Providing feedback. In both the golf and ski cases, this would mean quantifying leg, hips, shoulders, and head position during a swing or a turn, offering practical suggestions on how to change them, and showing how an expert would have performed the same action.<\/li>\r\n<\/ul>\r\n[\/panel]\r\n[panel header=\"Near-Memory System Services\"]\r\n<p style=\"margin-bottom: 0px\"><strong>EPFL PI:<\/strong> Babak Falsafi<\/p>\r\n<p style=\"margin-bottom: 0px\"><strong>Microsoft Co-PI:<\/strong> <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/svolos\/\">Stavros Volos<\/a><\/p>\r\n<strong>PhD Students:\u00a0<\/strong>\r\n\r\nNear-memory processing (NMP) is a promising approach to satisfy the performance requirements of modern datacenter services at a fraction of modern infrastructure\u2019s power. 
NMP leverages emerging die-stacked DRAM technology, which (a) delivers high-bandwidth memory access, and (b) features a logic die, which provides the opportunity for dramatic data movement reduction \u2013 and consequently energy savings \u2013 by pushing computation closer to the data. In the precursor to this project (the MSR European PhD Scholarship), we evaluated algorithms suitable for database join operators near memory. We showed that, while sort join has conventionally been thought of as inferior to hash join in performance on CPUs, near-memory processing favors sequential over random memory access, making sort join superior in performance and efficiency as a near-memory service. In this project, we propose to answer the following questions:\r\n<ul>\r\n \t<li>What data-specific functionality should be implemented near memory (e.g., data filtering, data reorganization, data fetch)?<\/li>\r\n \t<li>What ubiquitous, yet simple system-level functionality should be implemented near memory (e.g., security, compression, remote memory access)?<\/li>\r\n \t<li>How should the services be integrated with the system (e.g., how does the software use them)?<\/li>\r\n \t<li>How do we employ near-threshold logic in near-memory processing?<\/li>\r\n<\/ul>\r\n[\/panel]\r\n[panel header=\"Revisiting Transactional Computing on Modern Hardware\"]\r\n<p style=\"margin-bottom: 0px\"><strong>EPFL PIs:<\/strong> Rachid Guerraoui, Georgios Chatzopoulos<\/p>\r\n<p style=\"margin-bottom: 0px\"><strong>Microsoft Co-PI:<\/strong> <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/alekd\/\">Aleksandar Dragojevic<\/a><\/p>\r\n<strong>PhD Students:\u00a0<\/strong>\r\n\r\nModern hardware trends have changed the way we build systems and applications. Increasing memory (DRAM) capacities at reduced prices make keeping all data in-memory cost-effective, presenting opportunities for high performance applications such as in-memory graphs with billions of edges (e.g. Facebook\u2019s TAO).
Non-Volatile RAM (NVRAM) promises durability in the presence of failures, without the high price of disk accesses. Yet, even with this increase in inexpensive memory, storing the data in the memory of one machine is still not possible for applications that operate on terabytes of data, and systems need to distribute the data and synchronize accesses among machines. This project proposes to design and build support for high-level transactions on top of modern hardware platforms, using the Structured Query Language (SQL). The important question to be answered is whether transactions can get the maximum benefit from these modern networking and hardware capabilities, while offering a significantly easier interface for developers to work with. This project will require research both in the transactional support to be offered, including the operations that can be efficiently supported, and in the execution plans for transactions in this distributed setting.\r\n[\/panel]\r\n[panel header=\"Towards Resource-Efficient Data Centers\"]\r\n<p style=\"margin-bottom: 0px\"><strong>EPFL PI:<\/strong> Florin Dinu<\/p>\r\n<p style=\"margin-bottom: 0px\"><strong>Microsoft PIs:<\/strong> <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/chrisgk\/\">Christos Gkantsidis<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/serleg\/\">Sergey Legtchenko<\/a><\/p>\r\n<strong>PhD Students:\u00a0<\/strong>\r\n\r\nThe goal of our project is to improve the utilization of server resources in data centers. Our proposed approach was to attain a better understanding of the resource requirements of data-parallel applications and then incorporate this understanding into the design of more informed and efficient data center (cluster) schedulers.
While pursuing these directions we have identified two related challenges that we believe hold the key towards significant additional improvements in application performance as well as cluster-wide resource utilization. We will explore these two challenges as a continuation of our project. These two challenges are: Resource inter-dependency and time-varying resource requirements. Resource inter-dependency refers to the impact that a change in the allocation of one server resource (memory, CPU, network bandwidth, disk bandwidth) to an application has on that application\u2019s need for the other resources. Time-varying resource requirements refers to the fact that over the lifetime of an application its resource requirements may vary. Studying these two challenges together holds the potential for improving resource utilization by aggressively but safely collocating applications on servers.\r\n[\/panel][\/accordion]\r\n<div style=\"height: 40px\"><\/div>\r\n<h3>ETH Zurich projects (2017-2018)<\/h3>\r\n[accordion]\r\n[panel header=\"Data Science with FPGAs in the Data Center\"]\r\n<p style=\"margin-bottom: 0px\"><strong>ETH Zurich PI:<\/strong> Gustavo Alonso<\/p>\r\n<p style=\"margin-bottom: 0px\"><strong>Microsoft Co-PI:<\/strong> <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/eguro\/\">Ken Eguro<\/a><\/p>\r\n<strong>PhD Students:\u00a0<\/strong>\r\n\r\nWhile in the first phase of the project we explored the efficient implementation of data processing operators in FPGAs as well as the architectural issues involved in the integration of FPGAs as co-processors in commodity servers, in this new proposal we intend to focus on architectural aspects of in-network data processing. The choice is motivated by the growing gap between the bandwidth and very low latencies that modern networks support and the overhead of ingress and egress from VMs and applications running on conventional CPUs. 
A first goal is to explore the type of problems and algorithms that can be best run as the data flows through the network so as to be able to exploit the bare wire speed and allow off-loading of expensive computations to the FPGA. A second, but not less important goal, is to explore how to best operate FPGA based accelerators when directly connected to the network and operating independently from the software part of the application. In terms of applications, the focus will remain on data processing (relational, No-SQL, data warehouses, etc.) with the intention of starting to move towards machine learning algorithms at the end of the two-year project. On the network side, the project will work on developing networking protocols suitable to this new configuration and how to combine the network stack with the data processing stack.\r\n[\/panel]\r\n[panel header=\"Enabling Practical, Efficient and Large-Scale Computation Near Data to Improve the Performance and Efficiency of Data Center and Consumer Systems\"]\r\n<p style=\"margin-bottom: 0px\"><strong>ETH Zurich PIs:<\/strong> Onur Mutlu, Luca Benini<\/p>\r\n<p style=\"margin-bottom: 0px\"><strong>Microsoft Co-PI:<\/strong> Derek Chiou<\/p>\r\n<strong>PhD Students:\u00a0<\/strong>\r\n\r\nToday\u2019s systems are overwhelmingly designed to move data to computation. 
This design choice goes directly against key trends in systems and technology that cause performance, scalability and energy bottlenecks:\r\n<ul>\r\n \t<li>data access from memory is a key bottleneck as applications become more data-intensive and memory bandwidth and energy do not scale well,<\/li>\r\n \t<li>energy consumption is a key constraint in especially mobile and server systems,<\/li>\r\n \t<li>data movement is very costly in terms of bandwidth, energy and latency, much more so than computation.<\/li>\r\n<\/ul>\r\nOur goal is to comprehensively examine the premise of adaptively performing computation near where the data resides, when it makes sense to do so, in an implementable manner and considering multiple new memory technologies, including 3D-stacked memory and non-volatile memory (NVM). We will examine practical hardware substrates and software interfaces to accelerate key computational primitives of modern data-intensive applications in memory, runtime and software techniques that can take advantage of such substrates and interfaces. Our special focus will be on key data-intensive applications, including deep learning, neural networks, graph processing, bioinformatics (DNA sequence analysis and assembly), and in-memory data stores. 
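The data-movement cost that motivates this approach can be made concrete with toy byte accounting: when a filter is pushed to the memory side, only matching rows cross the memory bus instead of the whole table. The sketch below is purely illustrative; the names, row size, and selectivity are hypothetical, not the project's hardware interface.

```python
# Toy accounting of data movement: a near-memory filter returns only
# matching rows, versus moving the whole table to the processor.
# Purely illustrative; all numbers and names are hypothetical.

ROW_BYTES = 64

def bytes_moved_to_cpu(n_rows):
    """Conventional path: every row crosses the memory bus."""
    return n_rows * ROW_BYTES

def bytes_moved_near_data(n_rows, selectivity):
    """Near-memory filter: only matching rows cross the bus."""
    return int(n_rows * selectivity) * ROW_BYTES

n = 1_000_000
print(bytes_moved_to_cpu(n))           # 64000000
print(bytes_moved_near_data(n, 0.01))  # 640000
```

With a 1% selective filter, two orders of magnitude less data crosses the bus, which is where the bandwidth and energy savings of near-data computation come from.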
Our approach is software\/hardware cooperative, breaking the barriers between the two and melding applications, systems and hardware substrates for extremely efficient execution, while still providing efficient interfaces to the software programmer.\r\n[\/panel]\r\n[panel header=\"Human-Centric-Flight II: End-user Design of High-level Robotic Behavior\"]\r\n<p style=\"margin-bottom: 0px\"><strong>ETH Zurich PI:<\/strong> Otmar Hilliges<\/p>\r\n<p style=\"margin-bottom: 0px\"><strong>Microsoft Co-PI:<\/strong> <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\">Marc Pollefeys<\/a><\/p>\r\n<strong>PhD Students:\u00a0<\/strong>\r\n\r\nMicro-aerial vehicles (MAVs) have been made accessible to end-users via the emergence of simple to use hardware and programmable software platforms and have seen a surge in consumer and research interest as a consequence. Clearly there is a desire to use such platforms in a variety of application scenarios but manually flying quadcopters remains a surprisingly hard task even for expert users. More importantly, state-of-the-art technologies offer only very limited support for users who want to employ MAVs to reach a certain high-level goal. This is maybe best illustrated by the currently most successful application area \u2013 that of aerial videography. While manual flight is hard, piloting and controlling a camera simultaneously is practically impossible. An alternative to manual control is offered via waypoint based control of MAVs, shielding novices from the underlying complexities. However, this simplicity comes at the cost of flexibility and existing flight planning tools are not designed with high-level user goals in mind. Building on our own (MSR JRC funded) prior work, we propose an alternative approach to robotic motion planning. 
The key idea is to let the user work in solution-space \u2013 instead of defining trajectories the user would define what the resulting output should be (e.g., shot composition, transitions, area to reconstruct). We propose an optimization-based approach that takes such high-level goals as input and generates the trajectories and control inputs for a gimbal mounted camera automatically. We call this solution-space driven, inverse kinematic motion planning. Defining the problem directly in the solution space removes several layers of indirection and allows users to operate in a more natural way, focusing only on the application specific goals and the quality of the final result, whereas the control aspects are entirely hidden.\r\n[\/panel]\r\n[panel header=\"Tractable by Design\"]\r\n<p style=\"margin-bottom: 0px\"><strong>ETH Zurich PIs:<\/strong> Thomas Hofmann, Aur\u00e9lien Lucchi<\/p>\r\n<p style=\"margin-bottom: 0px\"><strong>Microsoft Co-PI:<\/strong> Sebastian Nowozin<\/p>\r\n<strong>PhD Students:\u00a0<\/strong>\r\n\r\nThe past decade has seen a growth in application of big data and machine learning systems. Probabilistic models of data are theoretically well understood and in principle provide an optimal approach to inference and learning from data. However, for richly structured data domains such as natural language and images, probabilistic models are often computationally intractable and\/or have to make strong conditional independence assumptions to retain computational as well as statistical efficiency. As a consequence, they are often inferior in predictive performance, when compared to current state-of-the-art deep learning approaches. It is a natural question to ask, whether one can combine the benefits of deep learning with those of probabilistic models. The major conceptual challenge is to define deep models that are generative, i.e. that can be thought of as models of the underlying data generating mechanism. 
We thus propose to leverage and extend recent advances in generative neural networks to build rich probabilistic models for structured domains such as text and images. The extension of efficient probabilistic neural models will allow us to represent complex and multimodal uncertainty efficiently. To demonstrate the usefulness of the developed probabilistic neural models, we plan to apply them to challenging multimodal applications such as creating textual descriptions for images or database records.\r\n[\/panel][\/accordion]\r\n<div style=\"height: 40px\"><\/div>\r\n<h3>EPFL projects (2014-2016)<\/h3>\r\n[accordion]\r\n[panel header=\"Authenticated Encryption: Security Notions, Constructions, and Applications\"]\r\n<p style=\"margin-bottom: 0px\"><strong>EPFL PI:<\/strong> Serge Vaudenay<\/p>\r\n<p style=\"margin-bottom: 0px\"><strong>Microsoft PI:<\/strong> Markulf Kohlweiss<\/p>\r\n<strong>PhD Students:\u00a0<\/strong>\r\n\r\nFor an encryption scheme to be practically useful, it must deliver on two complementary goals: the confidentiality and integrity of encrypted data. Historically, these goals were achieved by combining separate primitives, one to ensure confidentiality and another to guarantee integrity. This approach is neither the most efficient (for instance, it requires processing the input stream at least twice), nor does it protect against implementation errors. To address these concerns, the notion of Authenticated Encryption (AE), which simultaneously achieves confidentiality and integrity, was put forward as a desirable first-class primitive to be exposed by libraries and APIs to the end developer. Providing direct access to AE rather than requiring developers to orchestrate calls to several lower-level functions is seen as a step towards improving the quality of security-critical code. An indication of both the importance of usable AE and the difficulty of getting it right is the number of standards that have been developed over the years.
These specified different methods for AE: the CCM method is specified in IEEE 802.11i, IPsec ESP, and IKEv2; the GCM method is specified in NIST SP 800-38D; the EAX method is specified in ANSI C12.22; and ISO\/IEC 19772:2009 defines six methods, including five dedicated AE designs and one generic composition method, namely Encrypt-then-MAC. Several security issues have recently arisen and been reported in the (mis)use of symmetric key encryption with authentication in practice. As a result, the cryptographic community has initiated the Competition for Authenticated Encryption: Security, Applicability, and Robustness (CAESAR), to boost public discussions towards a better understanding of these issues, and to identify a portfolio of efficient and secure AE schemes. Our project aims to contribute to the design, analysis, evaluation, and classification of the emerging AE schemes during the CAESAR competition. It has affected many practical security protocols that use AE schemes as indispensable underlying primitives. Our work has broader implications for the theory of AE as an important research area in symmetric-key cryptography.\r\n[\/panel]\r\n[panel header=\"Scale-Out NUMA\"]\r\n<p style=\"margin-bottom: 0px\"><strong>EPFL PIs:<\/strong> <a href=\"https:\/\/people.epfl.ch\/edouard.bugnion\" target=\"_blank\" rel=\"noopener\">Edouard Bugnion<\/a>, Babak Falsafi<\/p>\r\n<p style=\"margin-bottom: 0px\"><strong>Microsoft PI:<\/strong> Dushyanth Narayanan<\/p>\r\n<strong>PhD Students:\u00a0<\/strong>\r\n\r\nThe goal of the Scale-Out NUMA project is to deliver energy-efficient, low-latency access to remote memory in datacentre applications, with a focus on rack-scale deployments. Such infrastructure will become critical for web-scale applications as well as scale-out analytics where the dataset can reside in the collective (but distributed) memory of a cluster of servers.
Our approach to the problem layers an RDMA-inspired programming model directly on top of a NUMA fabric via stateless messaging protocol. To facilitate interactions between the application, the OS and the fabric, soNUMA relies on the remote memory controller \u2013 a new architecturally-exposed hardware block integrated into the node\u2019s local coherence hierarchy.\r\n[\/panel]\r\n[panel header=\"Towards Resource-Efficient Data Centers\"]\r\n<p style=\"margin-bottom: 0px\"><strong>EPFL PI:<\/strong> Florin Dinu<\/p>\r\n<p style=\"margin-bottom: 0px\"><strong>Microsoft PI:<\/strong> <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/serleg\/\">Sergey Legtchenko<\/a><\/p>\r\n<strong>PhD Students:\u00a0<\/strong>\r\n\r\nOur vision is of resource-efficient datacenters where the compute nodes are fully utilized. We see two challenges to manifesting this vision. The first is the increasing use of hardware heterogeneity in datacenters. Heterogeneity, while both unavoidable and desirable, does not lend itself to today\u2019s systems and algorithms, which inefficiently handle heterogeneity. The second challenge is the aggressive scale-out of datacenters. Scale-out has made it conveniently easy to disregard inefficiencies at the level of individual compute nodes because it has been historically easy to expand to new resources. However, apart from being unnecessarily costly, such scale-out techniques are now becoming impractical due to the size of the datasets. Moreover, scale-out often adds new inefficiencies. We argue that to meet these challenges, we must start from a thorough understanding of the resource requirements of today\u2019s datacenter jobs. With this understanding, we aim to design new scheduling techniques that efficiently use resources, even in heterogeneous environments. Further, we aim to fundamentally change the way data-parallel processing systems are built and to make efficient compute node resource utilization a cornerstone of their design. 
Our first goal is to automatically characterize the pattern of memory requirements of data-parallel jobs. Specifically, we want to go beyond current practice, which considers only peak memory usage. To better identify opportunities for efficient memory management, more granular information is necessary. Our second goal is to use knowledge of the pattern of memory requirements to design informed scheduling algorithms that manage memory efficiently. The third goal of the project is to design data-parallel processing systems that are efficient in their memory management, not only by understanding task memory requirements but also by shaping those requirements.\r\n[\/panel][\/accordion]\r\n<div style=\"height: 40px\"><\/div>\r\n<h3>ETH Zurich projects (2014-2016)<\/h3>\r\n[accordion]\r\n[panel header=\"ARRID: Availability and Reliability as a Resource for Large-Scale In-Memory Databases on Datacenter Computers\"]\r\n<p style=\"margin-bottom: 0px\"><strong>ETH Zurich PI:<\/strong> <a href=\"https:\/\/inf.ethz.ch\/people\/people-atoz\/person-detail.hoefler.html\" target=\"_blank\" rel=\"noopener\">Torsten Hoefler<\/a><\/p>\r\n<p style=\"margin-bottom: 0px\"><strong>Microsoft PI:<\/strong> <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mcastro\/\">Miguel Castro<\/a><\/p>\r\n<strong>PhD Students:\u00a0<\/strong>\r\n\r\nDisk-backed in-memory key\/value stores are gaining significance as many industries move toward big data analytics. Storage space and query time requirements are challenging, since the analysis has to be performed at the lowest cost to be useful from a business perspective. Despite those cost constraints, today\u2019s systems are heavily overprovisioned when it comes to resiliency. The undifferentiated three-copy approach leads to a potential waste of bandwidth and storage resources, which makes the overall system less efficient or more expensive. 
We propose to revisit currently used resiliency schemes with the help of analytical hardware failure models. We will use those models to capture the exact tradeoff between the overhead due to replication and the resiliency requirements defined in a contract. Our key idea is to model reliability as an explicit resource that the user allocates consciously. In previous work, we have been able to speed up scientific computing applications, as well as a distributed hashtable, on several hundred thousand cores by more than 20 percent through the use of advanced RDMA programming techniques. We have also demonstrated low-cost resiliency schemes based on erasure coding for RDMA environments. In addition, we propose to apply our experience with large-scale RDMA programming to the design of in-memory databases, a problem very similar to distributed hashtables. To make reliability explicit, we plan to extend the key\/value store with explicit reliability attributes that allow the user to specify reliability and availability requirements for each key (or group of keys). Our work may change the perspective on datacenter resiliency. Defining fine-grained, per-object resiliency levels and tuning them to the exact environment may provide large cost benefits and impact industry. For example, changing the standard three-replica scheme to erasure coding can easily save 30 percent of storage expenses.\r\n[\/panel]\r\n[panel header=\"Efficient Data Processing Through Massive Parallelism and FPGA-Based Acceleration\"]\r\n<p style=\"margin-bottom: 0px\"><strong>ETH Zurich PI:<\/strong> Gustavo Alonso<\/p>\r\n<p style=\"margin-bottom: 0px\"><strong>Microsoft PI:<\/strong> <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/eguro\/\">Ken Eguro<\/a><\/p>\r\n<strong>PhD Students:\u00a0<\/strong>\r\n\r\nOne of the biggest challenges for software today is adapting to rapid changes in hardware and processor architecture. 
On the one hand, extracting performance from modern hardware requires dealing with increasing levels of parallelism. On the other hand, the wide variety of architectural possibilities and multiplicity of processor types raise many questions about the optimal platform for deploying applications. In this project we will explore the efficient implementation of data processing operators in FPGAs, as well as the architectural issues involved in integrating FPGAs as co-processors in commodity servers. The target application is big data and data processing engines (relational, NoSQL, data warehouses, etc.). Through this line of work, the project aims to explore architectures that will result in computing nodes with lower energy consumption and a smaller physical size, yet capable of providing a performance boost to big data applications. FPGAs should be seen here not as a goal in themselves, but as an enabling platform for exploring different architectures and levels of parallelism that will allow us to bypass the inherent restrictions of conventional processors. On the practical side, the project will focus both on the use of FPGAs as co-processors inside existing engines and on developing proof-of-concept implementations of data processing engines implemented entirely in FPGAs. In this area, the project complements ongoing efforts at Microsoft Research around Cipherbase, a trusted computing system based on SQL Server deployments in the cloud. On the conceptual side, the project will explore the development of data structures and algorithms capable of exploiting the massive parallelism available in FPGAs, with a view to gaining much-needed insights on how to adapt existing data processing systems to multi- and many-core architectures. 
Here, we expect to gain insights on how to redesign both standard relational data operators and data mining and machine learning operators to better take advantage of the increasing amounts of parallelism available in future processors. [\/panel]\r\n[panel header=\"Human-Centric Flight: Micro-Aerial Vehicles (MAVs) for Interaction, Videography, and 3D Reconstruction\"]\r\n<p style=\"margin-bottom: 0px\"><strong>ETH Zurich PIs:<\/strong> Otmar Hilliges,\u00a0Marc Pollefeys<\/p>\r\n<p style=\"margin-bottom: 0px\"><strong>Microsoft PI:<\/strong> Shahram Izadi<\/p>\r\n<strong>PhD Students:\u00a0<\/strong>\r\n\r\nIn recent years, robotics research has made tremendous progress, and it is becoming conceivable that robots will be as ubiquitous and irreplaceable in our daily lives as they are within industrial settings. Improvements in mechatronics and control, coupled with continued advances in consumer electronics, have made robots ever smaller, more autonomous, and more agile. One area of recent advances in robotics is the micro-aerial vehicle (MAV) [14, 16]. These are small, agile flying robots that can operate in 3D space, indoors and outdoors, carry small payloads \u2014 including input and output devices \u2014 and navigate difficult environments, such as stairs, more easily than terrestrial robots; hence they can reach locations that no other robot, or indeed human, can reach. Surprisingly, to date there is little research on such flying robots in an interactive context or on MAVs operating in close proximity to humans. In our project, we explore the opportunities that arise from aerial robots that operate in close proximity to and in collaboration with a human user. 
In particular, we are interested in developing a robotic platform in which a) the robot is aware of the human user and can navigate relative to the user; b) the robot can recognize various gestures from afar, as well as receive direct, physical manipulations; c) the robot can carry small payloads \u2014 in particular input and output devices such as additional cameras or projectors. Finally, we are developing novel algorithms that use the onboard cameras to track and recognize user input in real time and with very low latency, building on the now substantial body of research on gestural and natural interfaces. Gesture recognition can be used for MAV control (for example, controlling the camera) or to interact with virtual content.\r\n[\/panel]\r\n[panel header=\"Software-Defined Networks: Algorithms and Mechanisms\"]\r\n<p style=\"margin-bottom: 0px\"><strong>ETH Zurich PI:<\/strong> Roger Wattenhofer<\/p>\r\n<p style=\"margin-bottom: 0px\"><strong>Microsoft PI:<\/strong> Ratul Mahajan<\/p>\r\n<strong>PhD Students:\u00a0<\/strong>\r\n\r\nThe Internet is designed as a robust service that we can use even with selfish participants present. As such, some loss in total performance must be accepted. However, if a whole wide-area network (WAN) were controlled by a single entity, why should one use the very techniques designed for the Internet? Large providers such as Microsoft, Amazon, or Google operate their own WANs, which cost them hundreds of millions of dollars per year; yet even their busier links average only 40\u201360 percent utilization. This gives rise to Software-Defined Networks (SDNs), which allow the separation of the data and control planes in a network. A centralized controller can install and update rules all over the WAN to optimize for its goals. Despite SDNs receiving a lot of attention in both theory and practice, many questions remain unanswered. 
Even though control of the network is centralized, distributing updates does not happen instantaneously. Numerous problems can occur, such as dropped packets, forwarding loops, exceeded memory and bandwidth limits of switches and links, and loss of packet coherence. These problems must be solved before SDNs can be broadly deployed. This research project sheds light on these fundamental issues of SDNs and how they can be tackled. In parallel, we look at SDNs from a game-theoretic perspective.\r\n[\/panel]\r\n[\/accordion]"},{"id":1,"name":"Blogs & podcasts","content":"[row] [card title=\"Democratizing data, thinking backwards and setting North Star goals with Dr. Donald Kossmann\" url=\"https:\/\/www.microsoft.com\/en-us\/research\/blog\/democratizing-data-thinking-backwards-and-setting-north-star-goals-with-dr-donald-kossmann\/\" ]Dr. Donald Kossmann is a Distinguished Scientist who thinks big, and as the Director of Microsoft Research\u2019s flagship lab in Redmond, it\u2019s his job to inspire others to think big, too. But don\u2019t be fooled. For him, thinking big involves what he calls thinking backwards, a framework of imagining the future, defining progress in reverse order...&lt;br\/&gt;\r\n<div style=\"height: 10px\"><\/div>\r\n<span style=\"font-size: medium\">Microsoft Research Podcast | February 19, 2020<\/span>[\/card] [card title=\"Holograms, spatial anchors and the future of computer vision with Dr. Marc Pollefeys\" url=\"https:\/\/www.microsoft.com\/en-us\/research\/blog\/holograms-spatial-anchors-and-the-future-of-computer-vision-with-dr-marc-pollefeys\/\" ]On today\u2019s podcast, Dr. 
Pollefeys brings us up to speed on the latest in computer vision research, including his innovative work with Azure Spatial Anchors, tells us how devices like Kinect and HoloLens may have cut their teeth in gaming, but turned out to be game changers for both research and industrial applications....&lt;br\/&gt;\r\n<div style=\"height: 10px\"><\/div>\r\n<span style=\"font-size: medium\">Microsoft Research Podcast | April 10, 2019<\/span>[\/card] [card title=\"Launching a new round of projects in the Swiss JRC \u2013 Research with impact\" url=\"https:\/\/www.microsoft.com\/en-us\/research\/blog\/launching-a-new-round-of-projects-in-the-swiss-jrc-research-with-impact\/\" ]January 31, 2019 marked the start of the sixth workshop for the Swiss Joint Research Center (Swiss JRC), an innovative and successful collaborative research engagement between Microsoft Research and the two universities that make up the Swiss Federal Institutes of Technology\u2014ETH Zurich (ETH) and EPFL....&lt;br\/&gt;\r\n<div style=\"height: 10px\"><\/div>\r\n<span style=\"font-size: medium\">Microsoft Research Blog | February 5, 2019<\/span>[\/card] [card title=\"Microsoft continues to invest in research cooperation with ETH and EPFL, thereby strengthening Switzerland\u2019s innovation power\" url=\"https:\/\/news.microsoft.com\/de-ch\/2018\/11\/01\/microsoft-continues-to-invest-in-research-cooperation-with-eth-and-epfl-thereby-strengthening-switzerlands-innovation-power\/\" ]The long-standing collaboration with ETH Zurich and EPF Lausanne is of great importance to Microsoft. 
At an event in Zurich, Satya Nadella, CEO of Microsoft, said: \"For us at Microsoft, it\u2019s not just about the products we create...&lt;br\/&gt;\r\n<div style=\"height: 10px\"><\/div>\r\n<span style=\"font-size: medium\">Microsoft Switzerland | November 1, 2018<\/span>[\/card] [card title=\"ETH, EPFL and Microsoft are extending their research collaboration\" url=\"https:\/\/news.microsoft.com\/de-ch\/2018\/10\/29\/eth-epfl-and-microsoft-are-extending-their-research-collaboration\/\" ]ETH, EPFL and Microsoft are extending their highly successful research collaboration, which has been known as the Swiss Joint Research Center since its inception in 2013. The projects selected for the upcoming third research phase will be announced at the beginning of 2019. The research groups work at the ETH in Zurich and at the EPF in Lausanne.&lt;br\/&gt;\r\n<div style=\"height: 10px\"><\/div>\r\n<span style=\"font-size: medium\">Microsoft Switzerland | October 29, 2018<\/span>[\/card] [card title=\"Will machines one day be as creative as humans?\" url=\"https:\/\/www.microsoft.com\/en-us\/research\/blog\/will-machines-one-day-creative-humans\/\" ]Recent methods in artificial intelligence enable AI software to produce rich and creative digital artifacts such as text and images painted from scratch. One technique used in creating these artifacts are generative adversarial networks (GANs). Today at NIPS 2017, researchers from Microsoft Research and ETH Zurich present their work on making GAN models more robust and practically useful.&lt;br\/&gt;\r\n<div style=\"height: 10px\"><\/div>\r\n<span style=\"font-size: medium\">Microsoft Research Blog | December 4, 2017<\/span>[\/card] [card title=\"From improving a golf swing to reducing energy in datacenters\" url=\"https:\/\/www.microsoft.com\/en-us\/research\/blog\/2017-swiss-joint-research-center\/\" ]Recently, we celebrated an important milestone for our Swiss Joint Research Center (Swiss JRC). 
We welcomed top researchers from all partners to a workshop at the Microsoft Research Cambridge Lab, to kick off a new phase in our collaboration. This workshop represented the end of a busy 10-month period for the Swiss JRC during which we ramped down projects from...&lt;br\/&gt;\r\n<div style=\"height: 10px\"><\/div>\r\n<span style=\"font-size: medium\">Microsoft Research Blog | February 21, 2017<\/span>[\/card] [card title=\"From flying robots to energy-efficient memory systems\" url=\"https:\/\/www.microsoft.com\/en-us\/research\/blog\/from-flying-robots-to-energy-efficient-memory-systems\/\" ]Today, February 5, 2014, marked the kickoff workshop for the Swiss Joint Research Center (Swiss JRC), a collaborative research engagement between Microsoft Research and the two universities that make up the Swiss Federal Institutes of Technology: ETH Z\u00fcrich (Eidgen\u00f6ssische Technische Hochschule Z\u00fcrich, which serves German-speaking students) and EPFL...&lt;br\/&gt;\r\n<div style=\"height: 10px\"><\/div>\r\n<span style=\"font-size: medium\">Microsoft Research Blog | February 5, 2014<\/span>[\/card] [\/row]"},{"id":2,"name":"In the news","content":"Don't read German? 
<a href=\"https:\/\/www.microsoft.com\/en-us\/p\/translator\/9wzdncrfj3pg?activetab=pivot:overviewtab\" target=\"_blank\" rel=\"noopener\"><em>Download Microsoft Translator<\/em><\/a>\r\n\r\n<a href=\"https:\/\/actu.epfl.ch\/news\/ultrafast-optical-switching-can-save-overwhelmed-d\/\" target=\"_blank\" rel=\"noopener\">Ultrafast optical switching can save overwhelmed datacenters<\/a>\r\nEPFL News | October 2021\r\n\r\n<a href=\"https:\/\/www.inside-it.ch\/de\/post\/mixed-reality-brillen-werden-die-smartphones-verdraengen-20210601\" target=\"_blank\" rel=\"noopener\">Mixed-Reality-Brillen werden die Smartphones verdr\u00e4ngen<\/a> (German)\r\nInside It | June 1, 2021\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/QuantumComputing.pdf\" target=\"_blank\" rel=\"noopener\">Smarte Brille fuer jedermann<\/a> (German, PDF)\r\nBilanz | Das Schweizer Wirtschaftsmagazin | February 2020\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/MarcPollefeys.pdf\" target=\"_blank\" rel=\"noopener\">Ein Quantum Hype<\/a> (German, PDF)\r\nBilanz | Das Schweizer Wirtschaftsmagazin | February 2020\r\n\r\n<a href=\"https:\/\/ethz.ch\/en\/the-eth-zurich\/global\/eth-global-news-events\/2019\/12\/demystifying-quantum.html\" target=\"_blank\" rel=\"noopener\">Demystifying Quantum \u2013 Panel discussion at WEF Annual Meeting 2020<\/a>\r\nETH Zurich | January 21, 2020\r\n\r\n<a href=\"https:\/\/news.microsoft.com\/de-ch\/2019\/10\/03\/lab-opening\/\" target=\"_blank\" rel=\"noopener\">Mixed Reality &amp; AI Zurich Lab \u2013 a Lab is born<\/a>\r\nMicrosoft News Center - Switzerland | October 3, 2019\r\n\r\n<a href=\"https:\/\/blogs.microsoft.com\/on-the-issues\/2019\/09\/26\/cyberpeace-institute-fills-a-critical-need-for-cyberattack-victims\" target=\"_blank\" rel=\"noopener\">CyberPeace Institute fills a critical need for cyberattack victims<\/a>\r\nMicrosoft Blog | September 26, 2019\r\n\r\n<a 
href=\"https:\/\/www.nzz.ch\/digital\/dna-microsoft-automatisiert-die-datenspeicherung-in-biomolekuelen-ld.1470792\" target=\"_blank\" rel=\"noopener\">DNA as storage medium of the future<\/a> (German)\r\nNeue Z\u00fcrcher Zeitung | March 30, 2019\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/05\/Tages-Anzeiger_article_03_2019.pdf\" target=\"_blank\" rel=\"noopener\">Smartglasses will replace the mobile phone<\/a> (German, PDF)\r\nTages-Anzeiger | March 13, 2019\r\n\r\n<a href=\"https:\/\/www.youtube.com\/watch?v=Qd9wpIkSMsQ&amp;feature=youtu.be\" target=\"_blank\" rel=\"noopener\">Man and Machine - Panel discussion at WEF Annual Meeting 2019<\/a>\r\nETH Zurich | February 12, 2019\r\n\r\n<a href=\"https:\/\/www.startupticker.ch\/en\/video\/microsoft-ceo-satya-nadella-at-eth-zurich\" target=\"_blank\" rel=\"noopener\">Microsoft CEO Satya Nadella at ETH Zurich<\/a>\r\nStartup Ticker | November 1, 2018\r\n\r\n<a href=\"https:\/\/www.ethz.ch\/de\/news-und-veranstaltungen\/eth-news\/news\/2018\/11\/microsoft-chef-nadella-an-der-eth-zu-gast.html\" target=\"_blank\" rel=\"noopener\">ETH and Microsoft: Hunting for talent<\/a> (German)\r\nETH News | November 1, 2018\r\n\r\n<a href=\"https:\/\/www.inside-channels.ch\/articles\/52720\" target=\"_blank\" rel=\"noopener\">Microsoft expands partnership with ETH Zurich<\/a> (German)\r\nInside IT &amp; Inside Channels | November 1, 2018\r\n\r\n<a href=\"https:\/\/www.netzwoche.ch\/news\/2018-11-02\/microsoft-und-eth-machen-jagd-auf-talente\" target=\"_blank\" rel=\"noopener\">Microsoft and ETH are chasing talent<\/a> (German)\r\nNetzwoche | November 2, 2018\r\n\r\n<a href=\"https:\/\/www.computerworld.ch\/business\/forschung\/microsoft-chef-satya-nadella-zu-gast-an-eth-zuerich-1597308.html\" target=\"_blank\" rel=\"noopener\">Microsoft CEO Satya Nadella visits ETH Zurich<\/a> (German)\r\nComputerworld | November 2, 2018\r\n\r\n<a 
href=\"https:\/\/www.telecompaper.com\/news\/microsoft-extends-cooperation-with-eth-epfl-partners-with-mixed-reality-and-ai-zurich-lab--1267582\" target=\"_blank\" rel=\"noopener\">Microsoft extends cooperation with ETH, EPFL, partners with Mixed Reality &amp; AI Zurich Lab<\/a>\r\nTelecompaper | November 2, 2018\r\n\r\n&nbsp;\r\n\r\nSee more at the <a href=\"https:\/\/news.microsoft.com\/de-ch\/\">Microsoft Swiss Newsroom<\/a>\u00a0and <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/lab\/mixed-reality-ai-zurich\/news-and-awards\/\">Mixed Reality &amp; AI Zurich &gt;<\/a>"},{"id":3,"name":"Request for Proposals","content":"[row][column class=\"m-col-24-24\"]\r\n<p style=\"margin-bottom: 17px\">We invite submissions for collaborative research projects between ETH Zurich\/EPFL and Microsoft, including <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/\">Microsoft Research<\/a> and <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/lab\/mixed-reality-ai-zurich\/\">Mixed Reality &amp; AI Labs in Zurich<\/a> and <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/lab\/mixed-reality-ai-lab-cambridge\/\">Cambridge<\/a>, until September 17, 2021. 
\u00a0For more details, see\u00a0<a class=\"\" href=\"https:\/\/www.microsoft.com\/en-us\/research\/academic-program\/swiss-joint-research-center-rfp\/\" target=\"_blank\" rel=\"noopener\">Swiss JRC RFP.<\/a><\/p>"}],"msr_impact_theme":[],"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-group\/611553","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-group"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-group"}],"version-history":[{"count":109,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-group\/611553\/revisions"}],"predecessor-version":[{"id":1122771,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-group\/611553\/revisions\/1122771"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media\/665967"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=611553"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=611553"},{"taxonomy":"msr-group-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-group-type?post=611553"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=611553"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=611553"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}