Latin American Faculty Summit 2016

Microsoft hosted the tenth annual Microsoft Research Latin American Faculty Summit in Rio de Janeiro, Brazil, from May 18 to 20, 2016. Each year, we explore with invited researchers how innovations in computing, fueled by academic and industry research, can rise to meet the regional-scale challenges of today. With every summit, both the magnitude of those challenges and the capabilities of computing advance dramatically, introducing new problems and new possibilities. This year, the summit focused on Artificial Intelligence.

The need to solve real-world problems—whether economic, scientific, or social—has motivated technological advancement throughout history. These advances have the power to transform our society, bringing innovations to healthcare, education, commerce, and the environment.

Over the last five years, the world of Artificial Intelligence has grown exponentially—investments have increased and research progress has accelerated. At the summit, we examined how Artificial Intelligence can complement, match, or even surpass human intelligence. Additionally, we explored how the future of Artificial Intelligence could improve both individual lives and society as a whole.

Jaime Puente
Director, Academic Outreach

General Chair

Evelyne Viegas
Director of AI Outreach

Program Chair

Roy Zimmermann
Director, Microsoft Research Outreach

Research Showcase Chair

Agenda

Wednesday, May 18

Time Session Speaker Location
12:00
Opening Ceremony – WATCH
  • Paula Bellizia, General Manager, Microsoft Brazil
  • Rico Malvar, Chief Scientist, Microsoft Research
  • Gustavo Tutuca, State Secretary of Science, Technology & Innovation
Bromélia II & III
12:30
Recent advances in Information Technology – WATCH
Chair: Evelyne Viegas, Microsoft Research
Speaker: Rico Malvar, Chief Scientist, Microsoft Research
Bromélia II & III
13:30
Group photo and lunch
Hilton Restaurant
15:00
Panel on Technological Innovation in Brazil
Moderator: Rico Malvar, Microsoft Research
Panelists:
  • Celso Massaki Hirata, Instituto Tecnologico de Aeronautica (ITA)
  • Roberto Boisson de Marca, Federal University of Rio de Janeiro (UFRJ)
  • Gustavo Tutuca, State Secretary of Science, Technology & Innovation
Bromélia II & III
16:00
Computational Modeling in Medicine: Some Recent Results and Future Perspective
Chair: Jaime Puente, Microsoft Research
Speaker: Carlos Eduardo Pedreira, Federal University of Rio de Janeiro (UFRJ)
Bromélia II & III
17:00
Refreshment break
Bromélia Foyer
17:30
Physically Situated Dialog: Opportunities and Challenges for Integrative Artificial Intelligence
Chair: Cha Zhang, Microsoft Research
Speaker: Dan Bohus, Microsoft Research
Bromélia II & III
20:00
Welcome reception and dinner
Plaza and Hilton Restaurant

 

Thursday, May 19

Time Session Speaker Location
8:30
Open source software and industry: exploring the reality – WATCH
Chair: Michael Zyskowski, Microsoft Research
Speaker: Judith Bishop, Microsoft Research
Bromélia II & III
9:30
Intelligent Vision Technologies
Chair: Leonardo Nunes, Microsoft Research
Speakers:

  • On the automatic detection of abandoned objects | Sergio Lima Netto, Federal University of Rio de Janeiro (UFRJ)
  • Hand Tracking | Jonathan Taylor, Microsoft Research
  • Emotion Recognition from Images in the Wild | Cha Zhang, Microsoft Research
Nogueira I
Intelligent Language and Speech Technologies
Chair: Sunayana Sitaram, Microsoft Research
Speakers:

  • Auto-Captioning and Translation in the Classroom: Breaking Down the Language and Hearing Barriers | Will Lewis, Microsoft Research
  • Natural Language Queries and Auto-Suggest over Knowledge Graphs | Alex Wade, Microsoft Research
  • High Performance Image Captioning | Geoffrey Zweig, Microsoft Research
Bromélia II & III
Data and Code at your Fingertips
Chair: Roy Zimmermann, Microsoft Research
Speakers:
  • The BBC micro:bit: a programming device for the new generation | Jonathan Protzenko, Microsoft Research
  • Insight from Interaction with Data | Dave Brown, Microsoft Research
  • Methods and Measures: Real world implications of eye-gaze communication systems | Jon Campbell, Microsoft Research
Nogueira II
11:00
Refreshment break
Bromélia Foyer
11:30
CNTK: Microsoft’s Open-Source Deep-Learning Toolkit
Chair: Katja Hofmann, Microsoft Research
Speaker: Frank Seide, Microsoft Research
Bromélia II & III
12:30
Demo madness: a sneak peek at what to see at the Research Showcase
Chair: Roy Zimmermann, Microsoft Research
Bromélia II & III
13:00
Lunch
Hilton Restaurant
14:30
Research Showcase
Chair: Roy Zimmermann, Microsoft Research
Bromélia I
Spatial Audio for Augmented & Virtual Reality
Ivan Tashev and David Johnston
Booth 1
Microsoft Academic
Alex Wade
Booth 2
Microsoft Cognitive Services
Geoff Zweig
Booth 3
Emotion Recognition
Cha Zhang
Booth 4
Microsoft Translator
Will Lewis
Booth 5
NUI Graph
Dave Brown
Booth 6
Project Malmo
Katja Hofmann
Booth 7
Project Premonition
Michael Zyskowski
Booth 8
Audio and Video Processing at the SMT Lab, UFRJ
Sergio Lima Netto, Lucas Maia, and Jose F. L. de Oliveria
Booth 9
Micro:bit
Jonathan Protzenko
Booth 10
Machine Learning for non-experts: Platform for Interactive Concept Learning (PICL)
Carlos Garcia Jurado Suarez
Booth 11
Real-time Event Detection in Video
Leonardo Nunes
Booth 12
EchoSense Project
Ricardo Sabedra and Witallo Oliveira
Booth 13
Ability Eye Gaze
Jon Campbell
Booth 14
Open Source Software
Judith Bishop
Booth 15
Project Melange: Translating Code-mixed Tweets
Sunayana Sitaram
Booth 16
Interaction through Hand Tracking
Jonathan Taylor
Booth 17
 16:30
Refreshment break
Bromélia Foyer
 17:00
Perspectives on Health Intelligence
Chair: Leila Pontes, FAPERJ
Speakers:

  • Innovation model, challenges and opportunities in a leading healthcare organization | Claudio Terra, Einstein Hospital
  • Impact of Biomedical Imaging on Healthcare | Marcel Jackowski, University of Sao Paulo (USP)
  • Project Premonition: Preventative Monitoring of Infectious Agents | Michael Zyskowski, Microsoft Research
Bromélia II & III
 19:45
Social event
Meet in Hilton Lobby
 20:00
Dinner show
Churrascaria Fogo de Chão

Friday, May 20

Time Session Speaker Location
8:30
Microsoft Academic: New applications and research opportunities – WATCH
Chair: Geoffrey Zweig, Microsoft Research
Speaker: Alex Wade, Microsoft Research
Bromélia II & III
9:30
Machine Learning Advancing Artificial Intelligence
Chair: Will Lewis, Microsoft Research
Speakers:

  • Learning Reusable Skills and Behavioral Hierarchies | Bruno Castro da Silva, Institute of Informatics of the Federal University of Rio Grande do Sul (UFRGS)
  • Machine Learning made easy: Platform for Interactive Concept Learning (PICL) | Carlos Garcia Jurado Suarez, Microsoft Research
  • Tackling the next big AI challenges | Katja Hofmann, Microsoft Research
Bromélia II & III
Intelligent Devices
Chair: Jonathan Taylor, Microsoft Research
Speakers:

  • Project Torino: A physical programming language inclusive of blind children | Nicolas Villar, Microsoft Research
  • Audio for Intelligent Devices | Ivan Tashev, Microsoft Research
  • Personal Near-field Interaction: Across Devices and Across the Body | Christian Holz, Microsoft Research
Nogueira I
Quantum Computing and AI Tutorial
Chair: Judith Bishop, Microsoft Research
Speaker: Nathan Wiebe, Microsoft Research (Quantum Machine Learning)
Nogueira II
11:00
Refreshment break
Bromélia Foyer
11:30
Artificial Intelligence – Microsoft Perspectives
Chair: Jaime Puente, Microsoft Research
Speaker: Evelyne Viegas, Microsoft Research
Bromélia II & III
12:00
Closing Remarks
  • Jaime Puente, Microsoft Research
  • Evelyne Viegas, Microsoft Research
  • Roy Zimmermann, Microsoft Research
Bromélia II & III
12:15
Lunch
Hilton Restaurant
14:00
Event ends

Speakers & Abstracts

Speakers & Presenters

Paula Bellizia, General Manager, Microsoft Brazil

Presentation title: Opening ceremony – WATCH

Bio: Paula Bellizia has led the Microsoft Brazil subsidiary, the largest in Latin America, since July 2015. She previously worked at Microsoft from 2002-2012 in different roles.

Her vast industry and market knowledge gives her an excellent background to lead the business in Brazil, during a time of transformation for the company and the way users interact with technology in their everyday lives.

Paula has over 22 years of experience in the market. She started her career in marketing at Whirlpool in 1992 and, after 7 years, joined Telefonica as Product Group Manager. She left Telefonica in 2002 to join the technology industry at Microsoft as Small and Medium Business Sales Manager. During her 10 years at Microsoft, Paula held different roles, most recently as Brazil Marketing & Operations Lead. In 2013 she spent time at Facebook as Small and Medium Business Sales Director for Latin America, and most recently she was the Country Manager for Apple in Brazil, leading operations for two years.

Paula graduated in Computer and Information Sciences and holds a postgraduate degree in Marketing and an MBA from FIA/USP. She lives in São Paulo with her family.

Judith Bishop, Microsoft Research

Demo presentation: Open Source Software

Presentation title: Open source software and industry: exploring the reality – WATCH

Abstract: Open Source Software (OSS) is a movement that the IT industry has subscribed to with great success over many years. Adopting code that is already a standard is the easy part. Contributing to and initiating new software requires sustained commitment and upfront scrutiny of the return on investment. On the technical side, major software companies experience an added level of complexity in OSS involvement in that the software might not match the platforms they build. Virtual machines and browsers can come to the rescue, with varying degrees of efficiency loss. In this talk we shall survey this landscape, present statistics and examples of some of Microsoft Research’s OSS tools, explore the challenges, and make some predictions as to where the most exciting industry OSS developments will launch in the future.

Bio: Judith Bishop is Director of Computer Science in Microsoft Research, USA. Her role is to create strong links between Microsoft’s research groups and universities globally, through encouraging projects, supporting conferences and engaging directly in research. Recent projects have included TryF#, Touch Develop, Code Hunt and the BBC micro:bit. She now drives the Open Source Initiative. Judith’s research expertise is in programming languages and distributed systems, with a strong practical bias. After completing her degrees at Rhodes and Natal in South Africa, Judith received her PhD from the University of Southampton, UK. She then served as a professor, most recently at the University of Pretoria, South Africa. Judith is an ACM Distinguished Educator, and has received the IFIP Silver Core Award, among others. She is a Fellow of the British Computer Society and the Royal Society of South Africa.

Dan Bohus, Microsoft Research

Presentation title: Physically Situated Dialog: Opportunities and Challenges for Integrative Artificial Intelligence

Abstract: Most research to date on spoken language interaction has focused on supporting dialog with single users in limited domains and contexts. Significant progress in this space has enabled wide-scale deployments of voice-enabled personal assistants. At the same time, important challenges remain largely unaddressed in the realm of physically situated spoken language interaction (e.g., in-car systems, robots in public spaces, ambient assistance). In this talk, I will outline a core set of communicative competencies required for supporting dialog in physically situated settings – such as models of multiparty engagement, turn-taking, and interaction planning – and I will present samples of work as part of a broader research agenda in this area. The proposed models and systems harness a diverse set of AI technologies, and throughout the talk I will discuss a number of important opportunities and challenges for developing such integrative AI systems.

Bio: Dan Bohus is a Senior Researcher in the Adaptive Systems and Interaction Group at Microsoft Research. His research agenda is focused on physically situated, open-world spoken language interaction. Before joining Microsoft Research, Dan received his Ph.D. degree (2007) in Computer Science from Carnegie Mellon University.

Dr. José Roberto Boisson de Marca, Federal University of Rio de Janeiro (UFRJ)

Bio: J. Roberto Boisson de Marca graduated as an Electrical Engineer from PUC-Rio, Brazil, and earned a Ph.D. in Electrical Engineering from the University of Southern California, USA. He was the 2014 IEEE President and CEO. He was also the 2000-2001 President of the IEEE Communications Society and the founding President of the Brazilian Telecommunications Society. He is an IEEE Fellow and a full member of both the Brazilian Academy of Sciences and the Brazilian National Academy of Engineering. Prof. de Marca was Scientific Director of the Brazilian National Research Council (CNPq) and was a member of FINEP's Presidential Advisory Board. He has held visiting appointments at several organizations, including AT&T Bell Laboratories, NEC Research Labs Europe, and the Hong Kong University of Science and Technology. Dr. de Marca was selected in 2013 by Época magazine as one of the 100 most influential persons in Brazil. In 2014 he received the Personality of the Year in Telecommunications recognition from the IT Industry Association of Brazil.

Dave Brown, Microsoft Research

Demo presentation: NUI Graph

Presentation title: Insight from Interaction with Data

Abstract: Data continues to grow in terms of both size and complexity, and extracting meaningful insights from data can be challenging. Our work focuses on building prototypes for interactive data visualization, and combines natural user interaction with 3D visualization and storytelling to facilitate finding and sharing insight in data.

Bio: David Brown is a Senior Research Development Engineer in the NextViz team at Microsoft Research. His work focuses on prototyping data visualization software with an emphasis on natural user interaction.

Jon Campbell, Microsoft Research

Demo presentation: Ability Eye Gaze

Presentation title: Methods and Measures: Real world implications of eye-gaze communication systems

Abstract: Telemetry is the core of data-driven development. As more of our insights come from data, it is important to understand how the type of data we collect shapes what we develop. In this talk we will discuss various methods and measures for collecting data, using the example of an eye-gaze communication system designed for people with severe motor impairments.
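
To make the talk's "methods and measures" framing concrete, here is a minimal, hypothetical sketch in Python of the kind of telemetry a gaze-typing system might emit and one measure derived from it (text entry rate). The event names and fields are invented for illustration only; they are not the instrumentation of the actual system.

```python
import time

class TelemetryLog:
    """Tiny in-memory event log; a stand-in for a real telemetry pipeline."""
    def __init__(self):
        self.events = []

    def log(self, name, **properties):
        self.events.append({"name": name, "time": time.time(), **properties})

def entry_rate_wpm(events):
    """Words per minute from hypothetical 'key_selected' events (5 characters = 1 word)."""
    keys = [e for e in events if e["name"] == "key_selected"]
    if len(keys) < 2:
        return 0.0
    elapsed_min = (keys[-1]["time"] - keys[0]["time"]) / 60.0
    return (len(keys) / 5.0) / elapsed_min if elapsed_min > 0 else 0.0

# Simulate a short gaze-typing session; 'dwell_ms' is a made-up property.
log = TelemetryLog()
for ch in "hello":
    log.log("key_selected", key=ch, dwell_ms=650)
    time.sleep(0.01)
log.log("utterance_spoken", length_chars=5)

print(f"entry rate: {entry_rate_wpm(log.events):.1f} wpm")
```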

Bio: Jon Campbell is a Research Software Development Engineer at Microsoft Research in Redmond, WA, USA. He received BS degrees from Washington State University in Computer Science and Computer Engineering, with emphasis in Electrical Engineering and Mathematics. He then received a MS in Computer Science from Washington State University with a focus on networking and pervasive/ubiquitous computing. After spending nearly 10 years in product groups across Microsoft, he joined MSR in 2015 to focus on using technology to enable those with disabilities.

Bruno Castro da Silva, Institute of Informatics of the Federal University of Rio Grande do Sul (UFRGS)

Presentation title: Learning Reusable Skills and Behavioral Hierarchies

Abstract: One of the defining characteristics of human intelligence is the ability to acquire and refine skills. Skills are behaviors for solving problems that an agent encounters often—sometimes in different contexts and situations—throughout its lifetime. Identifying problems that recur and retaining their solutions as skills allows an agent to more rapidly solve novel problems by adjusting and combining its existing skills. We introduce a reinforcement learning framework for learning reusable skills. Reusable skills are parameterized procedures that produce appropriate behaviors given only a description of the task to be performed. We discuss two important challenges involved in the construction of such skills. First, an agent should be capable of solving a small number of problems and generalizing these experiences to construct a single reusable skill. We achieve this by introducing a method capable of estimating properties of the lower-dimensional manifold on which problem solutions lie. Secondly, the agent should be able to actively select on which problems it wishes to practice in order to more rapidly become competent in a skill. Thoughtful and deliberate practice is one of the defining characteristics of human expert performance. We show how non-parametric models can be used by an agent that wishes to actively decide what to learn.
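
As a rough illustration of the two ideas in this abstract (generalizing from a few solved tasks to a parameterized skill, and actively choosing which task to practice next), the toy sketch below uses a scikit-learn Gaussian-process regressor to map task parameters to policy parameters, then practices where its predictive uncertainty is highest. It is an illustrative analogue under those assumptions, not the method presented in the talk.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Toy "skill": for a task described by a scalar parameter t (e.g., target distance),
# the optimal policy parameter is some unknown function of t. We only solve a few
# tasks exactly and must generalize to the rest.
def solve_task_exactly(t):
    return np.sin(3 * t) + 0.5 * t      # stand-in for an expensive RL run

candidate_tasks = np.linspace(0.0, 2.0, 50).reshape(-1, 1)
practiced_tasks = [0.1, 1.9]            # start with two solved tasks
policy_params = [solve_task_exactly(t) for t in practiced_tasks]

for _ in range(5):
    gp = GaussianProcessRegressor(normalize_y=True)
    gp.fit(np.array(practiced_tasks).reshape(-1, 1), np.array(policy_params))

    # Reusable skill: predict policy parameters for any task description.
    mean, std = gp.predict(candidate_tasks, return_std=True)

    # Active practice: next, practice the task we are most uncertain about.
    next_task = float(candidate_tasks[np.argmax(std), 0])
    practiced_tasks.append(next_task)
    policy_params.append(solve_task_exactly(next_task))

print("practiced task parameters:", [round(t, 2) for t in practiced_tasks])
```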

Bio: Bruno Castro da Silva is a professor at the Institute of Informatics of the Federal University of Rio Grande do Sul. Prior to that he was a postdoctoral associate at the Aerospace Controls Laboratory at MIT. He received his Ph.D. in Computer Science from the University of Massachusetts, under the supervision of Prof. Andrew Barto. Both his MSc and BS cum laude degrees are in Computer Science from the Federal University of Rio Grande do Sul. Bruno has worked on several occasions as a visiting researcher at the Laboratory of Computational Neuroscience in Rome, Italy, developing novel control algorithms for humanoid robots. He has also worked at Adobe Research in California, developing large-scale machine learning techniques for digital marketing optimization. Bruno's research interests lie at the intersection of machine learning, reinforcement learning, optimal control theory, and robotics, and include the construction of hierarchical motor skills, active learning, neural networks, and Bayesian optimization applied to control.

Celso Massaki Hirata, Instituto Tecnologico de Aeronautica (ITA)

Bio: Celso Massaki Hirata is a Professor in the Computer Science Department of Instituto Tecnológico de Aeronáutica (ITA). He obtained a BEng in Mechanical Aeronautical Engineering and an MSc in Operations Research from ITA, and earned a PhD in Computer Science from Imperial College of Science, Technology, and Medicine. His areas of interest include distributed systems, security, software engineering, and CSCW. He has taken part in large-scale projects with the federal government and private companies in security, safety, and communications based on computational intelligence.

Katja Hofmann, Microsoft Research

Demo presentation: Project Malmo

Presentation title: Tackling the next big AI challenges

Abstract: AI has seen dramatic progress in the past years. For example, advances in machine learning are rapidly opening up innovative new applications using speech or object recognition. Despite these advances, a great number of fundamental open questions remain. Can we develop AI that can learn to make sense of complex environments? That continuously adapts and solves novel problems? That can learn to collaborate with human users to help them achieve their goals? This talk outlines open challenges in AI and what it will take to address them – starting from project Malmo, a new platform for AI experimentation.

Bio: Katja Hofmann is a researcher at Microsoft Research Cambridge. As part of the Machine Intelligence and Perception group, she is research lead of Project Malmo. Before joining Microsoft Research, Katja received her PhD in Computer Science from the University of Amsterdam, her MSc in Computer Science from California State University, and her BSc in Computer Science from the University of Applied Sciences in Dresden, Germany. Katja’s main research goal is to develop interactive learning systems. Her dream is to develop AIs that learn to collaborate with human players in Minecraft.

Christian Holz, Microsoft Research

Presentation title: Personal Near-field Interaction: Across Devices and Across the Body

Abstract: Current mobile devices pack a variety of commodity sensors that reveal the presence of surrounding devices. This commoditization of sensors paved the way for users to effortlessly interact across multiple devices, transferring application states from laptops to phones or collaborating with other users in a common application. In this talk, I will present a seamless tracking layer for mobile devices that takes tracking to a spatial level, enabling devices to identify surrounding devices' locations in 3D space—solely by using the sensors on today's devices, without the need for user input. This tracking layer brings cross-device interaction from current stationary setups to mobile scenarios, readily establishing it as a commodity interaction modality. In the second part of my talk, I will discuss how the notion of spatial tracking changes for interaction across wearable devices and switches to the user's body as a reference system. In the context of cross-device authentication, I will demonstrate how seamless tracking increases both the convenience and the security of use for current devices, addressing a long-standing challenge in human-computer interaction. I will conclude with an outlook on seamless spatial tracking for Internet of Things applications.

Bio: Christian Holz is a researcher in the Natural Interaction Group at Microsoft Research in Redmond. His research focuses on augmenting the capabilities of existing mobile devices and creating new devices with enriched sensing capabilities. Before joining Microsoft Research, Christian was a research scientist at Yahoo Labs in California. Christian holds a Ph.D. in Human-Computer Interaction from Hasso Plattner Institute, University of Potsdam, Germany.

Marcel Jackowski, University of São Paulo (USP)

Presentation title: Impact of Biomedical Imaging on Healthcare

Abstract: The development of public healthcare policies that are both effective and affordable requires governments to readily quantify and understand the health statistics of their populations. The analysis of medical data has the potential to portray overall population health as well as improve healthcare policies. However, the vast spectrum of imaging modalities, the sheer size and nature of the signals, and the noise characteristics inherent in biomedical imaging make it difficult to devise generalized computational tools. At the same time, there is an increased need to extract quantitative information in a reliable, automated, and efficient manner. In this presentation, I will share some new directions in large-scale biomedical image analysis which, with the aid of predictive analytics, will allow us to detect and outpace the progression of current and new pathologies.

Bio: Dr. Jackowski is an Assistant Professor in the department of computer science at the University of São Paulo, where he manages the medical imaging group. Prior to that, he was a postdoctoral fellow and research scientist at Yale University in the department of diagnostic imaging. His research is oriented towards developing scalable biomedical image analysis methods. He has been the principal investigator on several FAPESP and CNPq grants, and collaborates actively with the Athinoula A. Martinos Center for Biomedical Imaging.

David Johnston, Microsoft Research

Demo presentation: Spatial Audio for Augmented & Virtual Reality

Bio: David Johnston received his B.S. degree in Computer Science from the University of Washington in 1992. He is a Principal Software Design Engineer with the Audio and Acoustics Research Group in Microsoft Research Labs, which he joined in 2011. In the early 1990s, during an earlier stint at Microsoft, David created Cool Edit, a stereo audio editor for Windows. In 1995 Mr. Johnston co-founded Syntrillium Software and developed the multitrack studio audio editor Cool Edit Pro. He sold the company to Adobe Systems in 2003 and continued working on what became Adobe Audition until 2010. David's current work includes spatial audio for HoloLens and Windows.

Will Lewis, Microsoft Research

Demo presentation: Microsoft Translator

Presentation title: Auto-Captioning and Translation in the Classroom: Breaking Down the Language and Hearing Barriers

Abstract: The Science Fiction meme of the Universal Translator, first popularized in Star Trek 50 years ago, may become reality a lot sooner than we expected, fostered primarily by significant advancements in automated speech recognition (ASR) and machine translation (MT). MSR has been at the forefront of adapting speech translation technology to the consumer scenario, namely its integration into the Skype Translator product, enabling millions to make phone calls with other Skype users who do not speak their languages. Going a step further, MSR has exposed the same technology that powers Skype Translator in Microsoft Translator’s API. Speech Translation through a publicly accessible API opens the door to tool developers, academics, and others to adapt speech translation to their scenarios. One of the scenarios we have been working on is to build out the infrastructure to support speech transcription and translation in the classroom. This technology can benefit students in multiple ways: Students who are deaf or hard of hearing benefit from this technology since they can participate in the “hearing” classroom. Students who are non-native speakers of the predominant language where they live benefit from the technology since they can have live transcripts of lectures, video, and other audio used in class. I will review the technologies behind MSR’s Speech-to-Speech API, with a quick overview of the API, and how we are testing the technology in the classroom.

Bio: Dr. William Lewis is Principal Technical Program Manager with the Microsoft Translator team at Microsoft Research. He has led the team's efforts to build Machine Translation engines for a variety of the world's languages and has been working with the team to build Skype Translator. This work has been extended to the classroom in Seattle Public Schools, where "mainstreamed" deaf and hard of hearing children are using MSR's speech recognition technology to participate fully in the "hearing" classroom. Before joining Microsoft, Will was Assistant Professor and founding faculty for the Computational Linguistics Master's Program at the University of Washington. Will is on the editorial board of the Journal of Machine Translation, serves on the board of the Association for Machine Translation in the Americas (AMTA), served as a program chair for the North American Chapter of the Association for Computational Linguistics (NAACL) conference, and served as a program chair for the Machine Translation Summit.

Lucas Maia, Serra dos Órgãos Educational Foundation (FESO)

Demo presentation: Audio and Video Processing at the SMT Lab, UFRJ

Bio: Lucas Maia is a professor at the Serra dos Órgãos Educational Foundation (FESO) in Teresópolis, Brazil. He received a degree in Electronic and Computing Engineering as well as a Master’s degree in Electrical Engineering from the Federal University of Rio de Janeiro (UFRJ). His main research interests are algorithmic composition and music information retrieval.

Rico Malvar, Chief Scientist, Microsoft Research

Presentation title: Opening ceremony – WATCH

Presentation title: Recent Advances in Information Technology – WATCH

Abstract: In this talk we present an overview of recent developments in information technology, especially in the areas of computer vision, speech and natural language processing, and new computer interfaces, in particular those developed at Microsoft Research. Many of these technologies are the result of the new developments in computer architecture, machine learning and deep neural networks, and big data.

Bio: Henrique (Rico) Malvar is a Microsoft Distinguished Engineer and the Chief Scientist for Microsoft Research. He currently leads a new team at MSR developing technologies to help people with disabilities. He joined Microsoft Research in 1997, founding the signal processing group, which developed new technologies such as new media compression formats used in Windows, Xbox, and Office, and microphone array processing technologies used in Windows, Xbox Kinect, and HoloLens. Rico was a key architect for the media compression formats WMA and JPEG XR, and made key contributions to the H.264 video format (used by Skype, Netflix, YouTube, etc.). Rico received a Ph.D. from MIT (1986) and is a Member of the US National Academy of Engineering. He has over 115 US patents and over 160 publications. He is an IEEE Fellow and has received many awards, including the Technical Achievement Award from the IEEE Signal Processing Society in 2002.

Sergio Lima Netto, Federal University of Rio de Janeiro (UFRJ)

Demo presentation: Audio and Video Processing at the SMT Lab, UFRJ

Presentation title: On the automatic detection of abandoned objects

Abstract: We describe two signal-processing strategies for attacking the problem of detecting abandoned objects in videos acquired with a moving camera. In the first solution, after time and geometric alignment procedures, a multiscale similarity analysis is performed between reference and target videos. In the second strategy, the reference video is used to generate a (bi)sparse description of the target video, and the abandoned objects are identified as high-energy regions on the final error image. We illustrate the application of both solutions in the real-time inspection of an industrial plant using a robotic system.
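
For intuition only, the sketch below implements the simplest possible version of the final step of the second strategy: compare an already-aligned reference frame against a target frame and flag high-energy blocks of the error image. The alignment and (bi)sparse modeling from the talk are far more involved; here the frames are plain NumPy arrays.

```python
import numpy as np

def high_energy_regions(reference, target, block=16, threshold=20.0):
    """Return (row, col) blocks whose mean absolute error exceeds a threshold.

    reference, target: 2-D grayscale frames of equal shape, already aligned.
    """
    error = np.abs(target.astype(np.float32) - reference.astype(np.float32))
    flagged = []
    rows, cols = error.shape
    for r in range(0, rows - block + 1, block):
        for c in range(0, cols - block + 1, block):
            if error[r:r + block, c:c + block].mean() > threshold:
                flagged.append((r, c))
    return flagged

# Synthetic example: an "abandoned object" appears as a bright patch in the target.
ref = np.zeros((128, 128), dtype=np.uint8)
tgt = ref.copy()
tgt[40:70, 60:90] = 200
print(high_energy_regions(ref, tgt))
```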

Bio: Sergio L. Netto received his BSc and MSc degrees from the Federal University of Rio de Janeiro and his PhD from the University of Victoria, Canada, all in Electrical Engineering. He is the co-author of Digital Signal Processing: System Analysis and Design (Cambridge University Press, 2nd ed., 2010). His research and teaching interests include adaptive signal processing, applied digital signal processing, information theory, applied machine learning, and computer vision.

Leonardo Nunes, Microsoft Research

Demo presentation: Real-time Event Detection in Video

Bio: Leonardo Nunes is a researcher with Microsoft's Advanced Technology Labs in Brazil, where he develops solutions for real-time understanding of video and audio signals. He has a D.Sc. in Electrical Engineering from the Federal University of Rio de Janeiro, and his main research interest is the intersection between machine learning and signal processing. His previous research areas include audio analysis, music information retrieval, sound source localization, and speech quality assessment. Dr. Nunes is a member of the IEEE.

José F. L. de Oliveira, Federal University of Rio de Janeiro (UFRJ)

Demo presentation: Audio and Video Processing at the SMT Lab, UFRJ

Bio: José F. L. de Oliveira graduated in Electrical Engineering (1994) from the Federal University of Rio de Janeiro and received M.Sc. (1997) and D.Sc. (2003) degrees in Electrical Engineering from the same university. His research interests include signal processing, image compression, and pattern recognition and tracking.

Witallo Oliveira, Pontifical Catholic University of Rio Grande do Sul (PUCRS)

Demo presentation: EchoSense Project

Bio: Witallo Oliveira is an undergraduate student in computer engineering at the Pontifical Catholic University of Rio Grande do Sul.

Carlos Eduardo Pedreira, Federal University of Rio de Janeiro (UFRJ)

Presentation title: Computational Modeling in Medicine: Some Recent Results and Future Perspective

Abstract: In the last decades, there have been major technological advances in medical diagnosis and monitoring devices such as flow cytometers and magnetic resonance apparatus. These devices, now routinely used, have exponentially increased the ability to generate data. The resulting complexity of the datasets is challenging pre-existing data analysis methods and promoting the development of new algorithms and tools. A key challenge is how to intelligently process all this information. In this talk, we will present some recent results, especially on flow cytometry-generated data, and point out some of the current perspectives in medical data processing.

Bio: Prof. Carlos Eduardo Pedreira is with COPPE – Systems and Computing Engineering at the Federal University of Rio de Janeiro. He holds Bachelor's (1975) and MSc (1981) degrees in electrical engineering from the Catholic University of Rio de Janeiro and received a Ph.D. degree (1987) from Imperial College of Science, Technology and Medicine, University of London. He has been a visiting researcher at the University of Salamanca, Spain, since 2002. His articles have over 1,000 citations (ISI), with an h-index of 13. He was the Founding President of the Brazilian Society of Neural Networks (presently the Brazilian Society of Computational Intelligence) and is a member of the EuroFlow consortium board. He received the Santander Bank Award of Science and Innovation in 2006 and the Nicola Albano Prize (Brazilian Society of Pediatrics) in 2010.

Jonathan Protzenko, Microsoft Research

Demo presentation: Micro:bit

Presentation title: The BBC micro:bit: a programming device for the new generation

Abstract: The BBC micro:bit is a small programmable device half the size of a credit card; it features 25 LEDs, buttons, an accelerometer, a compass, and Bluetooth capabilities. The device has been handed out for free to a million kids between 11 and 12 years old in the UK; Microsoft provided the programming environment, based on TouchDevelop. I will talk about the device, demo the programming environment, and discuss the global “CS literacy” trend, wherein more and more countries emphasize CS education.
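
For a flavor of what children can write for the device, here is a short program in micro:bit MicroPython, one of the languages available for the board alongside the TouchDevelop-based editor demoed in the talk. It uses the 5×5 LED display, button A, and the accelerometer.

```python
# Runs on a BBC micro:bit under MicroPython; not runnable on a desktop Python.
from microbit import *

display.scroll("Ola!")                   # greet on the 5x5 LED matrix

while True:
    if button_a.was_pressed():
        display.show(Image.HAPPY)
    elif accelerometer.was_gesture("shake"):
        display.show(Image.SURPRISED)    # react to shaking the board
    else:
        # Tilt left/right to move a single lit pixel along the bottom row.
        x = accelerometer.get_x()        # roughly -1024..1024 when tilting
        col = max(0, min(4, (x + 1024) * 5 // 2048))
        display.clear()
        display.set_pixel(col, 4, 9)
    sleep(100)
```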

Bio: Jonathan Protzenko is a researcher in the RiSE group at Microsoft Research in Redmond. His research interests revolve around type systems and programming languages design and implementation. In 2015, he worked with the BBC to deliver the micro:bit, a free programming device for a new generation of computer scientists.

Jaime Puente, Microsoft Research

Bio: Jaime Puente is a director of academic outreach at Microsoft Research, responsible for strategic research engagements in Latin America and the United States. Prior to joining Microsoft Research, Jaime spent 13 years as a professor in the School of Electrical and Computer Engineering at Escuela Superior Politécnica del Litoral (ESPOL) in Ecuador. Jaime Puente was a Fulbright Scholar during his early engagement with academia. His academic background includes an M.S. in Computer Engineering from Iowa State University, an MBA and an Electronics Engineering degree, both from ESPOL in Ecuador, as well as an Educational Specialist post-master's degree from NOVA Southeastern University in Florida, United States. Jaime Puente is currently a Ph.D. candidate in the College of Engineering and Computing at NOVA Southeastern University. His main research interests concern human-computer interaction and the pervasive integration of digital technologies in education.

Ricardo Sabedra, Federal University of Rio Grande do Sul (UFRGS)

Demo presentation: EchoSense Project

Bio: Ricardo Stadtlober Sabedra is an undergraduate computer engineering student at the Federal University of Rio Grande do Sul. He worked as an intern at the High Performance Computing Lab at PUCRS and is currently an intern at the Microsoft Innovation Center in Porto Alegre, Brazil. He has worked on the analysis of oceanic images from Petrobras, led research on the OpenStack scheduler, built a low-cost 3D printer, and took part in the A. Richard Newton Young Student Fellow Program at the Design Automation Conference (DAC) 2015 in San Francisco. Currently, Ricardo is developing an augmented reality application for the Science and Technology Museum at PUCRS, as well as the EchoSense project, a device to assist people with visual impairments and help develop their other senses. In 2016, Ricardo was a national finalist of the Imagine Cup in the Innovation category.

Frank Seide, Microsoft Research

Presentation title: CNTK: Microsoft’s Open-Source Deep-Learning Toolkit

Abstract: This talk will introduce CNTK, Microsoft's cutting-edge open-source deep-learning toolkit for Windows and Linux. CNTK is a computation-graph based deep-learning toolkit for training and evaluating deep neural networks. Microsoft product groups use CNTK, for example to create the Cortana speech models and for web ranking. CNTK supports feed-forward, convolutional, and recurrent networks for speech, image, and text workloads, also in combination. Popular network types are supported either natively (convolution) or can be described as a CNTK configuration (LSTM, sequence-to-sequence). CNTK scales to multiple GPU servers and is designed around efficiency. We will give an overview of CNTK's general architecture and describe the specific methods and algorithms used for automatic differentiation, recurrent-loop inference and execution, memory sharing, on-the-fly randomization of large corpora, and multi-server parallelization. We will then discuss what typical uses look like for relevant tasks such as image recognition, sequence-to-sequence modeling, and speech recognition.
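
To ground the phrase "computation-graph based," the self-contained Python sketch below builds a tiny graph of operations, runs a forward pass, and computes gradients by traversing the graph backwards. It illustrates the general idea only; it is not CNTK's API or configuration language.

```python
# Minimal computation graph with reverse-mode automatic differentiation.
class Node:
    def __init__(self, value, parents=(), grad_fns=()):
        self.value = value          # scalar result of this operation
        self.parents = parents      # upstream nodes
        self.grad_fns = grad_fns    # local derivative w.r.t. each parent
        self.grad = 0.0

    def backward(self, seed=1.0):
        self.grad += seed
        for parent, grad_fn in zip(self.parents, self.grad_fns):
            parent.backward(seed * grad_fn())

def mul(a, b):
    return Node(a.value * b.value, (a, b), (lambda: b.value, lambda: a.value))

def add(a, b):
    return Node(a.value + b.value, (a, b), (lambda: 1.0, lambda: 1.0))

# Loss = (w * x + b - y)^2 for one training example; differentiate w.r.t. w and b.
w, b = Node(0.5), Node(0.0)
x, y = Node(2.0), Node(3.0)
residual = add(add(mul(w, x), b), Node(-y.value))
loss = mul(residual, residual)
loss.backward()

print("loss:", loss.value, "dloss/dw:", w.grad, "dloss/db:", b.grad)
```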

Bio: Frank Seide, a native of Hamburg, Germany, is a Senior Researcher at Microsoft Research. His current research focus is on deep neural networks for conversational speech recognition; together with co-author Dong Yu, he was first to show the effectiveness of deep neural networks for recognition of conversational speech. Throughout his career, he has been interested in and worked on a broad range of topics and components of automatic speech recognition, including spoken-dialogue systems, recognition of Mandarin Chinese, and, particularly, large-vocabulary recognition of conversational speech with application to audio indexing, transcription, and speech-to-speech translation. His current focus is Microsoft’s CNTK deep-learning toolkit.

Sunayana Sitaram, Microsoft Research

Demo presentation: Project Melange: Translating Code-mixed Tweets

Bio: Sunayana is a Post Doc Researcher at Microsoft Research India, where she works on speech technologies for code-mixed languages under Project Melange. She holds PhD and MS degrees from the Language Technologies Institute, Carnegie Mellon University. Her PhD thesis was on building speech synthesizers for low-resource languages, and she was advised by Alan W Black. In addition, she worked on Intelligent Tutoring Systems, Spoken Dialog Systems and Speech Translation systems while at CMU.

Carlos Garcia Jurado Suarez, Microsoft Research

Demo presentation: Machine Learning for non-experts: Platform for Interactive Concept Learning (PICL)

Presentation title: Machine Learning made easy: Platform for Interactive Concept Learning (PICL)

Abstract: Machine Learning models give us the ability to capture human knowledge and replicate it at scale. Yet, building these models remains the domain of a few experts. What if we could enable everyone, regardless of their expertise, to create machine learning models? The focus of the Machine Teaching Group at MSR is to make the process of training a machine easy, fast and universally accessible. In this talk, you’ll learn about the Platform for Interactive Concept Learning (PICL), an interactive environment to build classifiers and entity extractors in a very short time and with minimal expertise.
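
The interactive teaching loop described above can be roughly approximated by classic uncertainty sampling: train a classifier on whatever the teacher has labeled so far, then ask for the label of the example the model is least sure about. The scikit-learn sketch below is only an analogue of that loop under those assumptions, not PICL itself.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Unlabeled pool of 2-D points; the hidden concept is "x0 + x1 > 1".
pool = rng.uniform(0, 1, size=(200, 2))
true_label = (pool.sum(axis=1) > 1.0).astype(int)

# Teacher starts by labeling one clear negative and one clear positive example.
labeled_idx = [int(np.argmin(pool.sum(axis=1))), int(np.argmax(pool.sum(axis=1)))]

for round_ in range(10):
    clf = LogisticRegression()
    clf.fit(pool[labeled_idx], true_label[labeled_idx])

    # Ask about the pool item closest to the decision boundary (most uncertain).
    proba = clf.predict_proba(pool)[:, 1]
    uncertainty = np.abs(proba - 0.5)
    uncertainty[labeled_idx] = np.inf       # don't re-ask labeled items
    query = int(np.argmin(uncertainty))
    labeled_idx.append(query)               # the "teacher" supplies true_label[query]

accuracy = (clf.predict(pool) == true_label).mean()
print(f"labels used: {len(labeled_idx)}, pool accuracy: {accuracy:.2f}")
```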

Bio: Carlos Garcia Jurado Suarez is a Principal Engineering Manager at Microsoft Research Redmond, where he leads the development team in the Machine Learning Group. He received his B.S. degree in Physics from ITESM in Monterrey, Mexico and his M.S. degrees in Computer Science and Applied Math from the University of Washington. Prior to MSR, he was a software engineer for the Microsoft Visual Studio modeling tools. His research focus is on building systems for interactive machine learning.

Ivan Tashev, Microsoft Research

Demo presentation: Spatial Audio for Augmented & Virtual Reality

Presentation title: Audio for Intelligent Devices

Abstract: Today's intelligent devices are typically small: mobile or wearable. They usually do not have a screen, keyboard, or mouse, and rely on voice and audio as primary input/output modalities, combined with gestures and a limited number of buttons. Adding a microphone to even the smallest device is inexpensive and helps it better understand its environment. In addition, modern devices are expected to work on the go, in noisier environments. In this talk we will cover recent advances and applications in audio signal processing algorithms for capturing, rendering, and understanding audio signals, illustrated with examples from our work on Kinect, HoloLens, Windows, and Cortana.

Bio: Dr. Ivan Tashev received his Master's degree in Electronic Engineering (1984) and his PhD in Computer Science (1990) from the Technical University of Sofia, Bulgaria. He was an Assistant Professor at the same university until he joined Microsoft in 1998. Currently Dr. Tashev is a Partner Architect and leads the Audio and Acoustics Research Group in Microsoft Research Labs in Redmond, USA. He has published four books and more than 70 papers, and holds 30 US patents. Dr. Tashev created audio processing technologies incorporated in Windows, the Microsoft Auto Platform, and the RoundTable device. He served as the audio architect for Kinect for Xbox and Microsoft HoloLens. Ivan Tashev is also an affiliate professor in the Department of Electrical Engineering at the University of Washington in Seattle, USA.

Jonathan Taylor, Microsoft Research

Demo presentation: Interaction Through Hand Tracking

Presentation title: Hand Tracking

Abstract: In this talk, I will discuss a set of methods we have used recently for inferring the shape and pose of human hands from depth images. All of these methods use a generative model of human hand shape and pose to explain the data present in a set of depth images. The differences come down to the specific parameterization of this model and how the corresponding model-fitting energy is optimized. A principled approach is to simply render our hand model, given a particular setting of parameters, and measure the discrepancy with the input depth image. This “golden energy” is not, however, easily differentiable, making optimization challenging. Another option is to approximate this energy by instead measuring the distance from the data to the model surface in 3D. Through the use of a subdivision surface model, this energy can be made differentiable and amenable to gradient-based optimization. Through various pairings of these energies with optimization strategies we are able to 1) build a low-dimensional model of hand shape variation offline, 2) quickly “personalize” this model to a new user’s hand shape and 3) perform real-time hand tracking using this “personalized” model.
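
As a toy analogue of the "distance from the data to the model surface" energy, the sketch below fits the center and radius of a sphere to noisy 3D points by gradient descent on a differentiable point-to-surface error. A hand model with full shape and pose parameters is vastly more complex, but the optimization pattern is similar; this is not the talk's actual energy or optimizer.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "depth data": noisy points on a sphere of radius 1.2 centered at (0.5, -0.3, 2.0).
true_center, true_radius = np.array([0.5, -0.3, 2.0]), 1.2
directions = rng.normal(size=(500, 3))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)
points = true_center + true_radius * directions + 0.01 * rng.normal(size=(500, 3))

# Model parameters to recover.
center = np.zeros(3)
radius = 1.0

for step in range(200):
    offsets = points - center                     # (N, 3) vectors from center to points
    dists = np.linalg.norm(offsets, axis=1)       # distance of each point to the center
    residual = dists - radius                     # signed point-to-surface error
    # Gradients of E = mean(residual^2) w.r.t. center and radius.
    grad_center = (-2.0 * residual[:, None] * offsets / dists[:, None]).mean(axis=0)
    grad_radius = (-2.0 * residual).mean()
    center -= 0.1 * grad_center
    radius -= 0.1 * grad_radius

print("estimated center:", np.round(center, 3), "estimated radius:", round(radius, 3))
```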

Bio: Jonathan Taylor received his BSc degree from the University of Toronto and his MSc degree from McGill University. He completed his PhD at the University of Toronto, where his thesis presented a novel solution for recovering non-rigid structure from motion, a fundamental problem in computer vision. First as a postdoc and now as a Researcher at Microsoft Research Cambridge, he has been leveraging machine learning to attack problems in deformable shape and pose inference. This work, which includes human body and hand tracking, is helping to open completely new paradigms of human-computer interaction.

José Cláudio Terra, Hospital Israelita Albert Einstein

Presentation title: Innovation model, challenges and opportunities in a leading healthcare organization

Abstract: Einstein has developed a very open and collaborative innovation model. The presentation will focus on the key milestones, lessons learned, initial results, and key principles related to the transformation of Einstein's innovation strategy over the last two years.

Bio: Claudio Terra is director of innovation and knowledge management at Einstein. Prior to that, he held executive positions in leading organizations in Brazil, the USA, and Canada. He was also a successful entrepreneur for 10 years, until he sold his company to Globant, which held its IPO on Nasdaq in 2014. He completed his PhD in production engineering at the University of São Paulo and pursued advanced studies in the US and Spain. Claudio has written 10 books, published in Brazil and in the USA.

Gustavo Tutuca, State Secretary of Science, Technology & Innovation

Presentation title: Opening ceremony – WATCH

Bio: Gustavo Reis Ferreira is one of the youngest and most active state representatives in Rio de Janeiro, re-elected with 64,248 votes. Gustavo is the son of the former mayor of Piraí, Arthur Henrique Gonçalves Ferreira, known as Tutuca; from his father he inherited the nickname and a taste for public affairs. He graduated in Systems Analysis from Universidade Estácio de Sá and practiced this profession at IBMEC and Cervejaria Cintra.

He entered politics as the Municipal Secretary of Sports and Leisure for Piraí, where he quickly achieved significant results. He was General Coordinator of the Digital Piraí Project, a national award-winning and internationally recognized effort that pioneered digital inclusion and the democratization of access to information. The initiative received the backing of UNESCO and won the “Top Seven Intelligent Communities” prize. Another revolutionary project coordinated by Gustavo, considered an unprecedented achievement, was Piraí Digital Education, which ensured the distribution of a notebook to each student and teacher in public schools.

Evelyne Viegas, Microsoft Research

Presentation title: Artificial Intelligence perspectives at Microsoft

Abstract: Given the investment and evidence of progress in Artificial Intelligence (AI) in the last five years, some suggest that it is merely a matter of time until AI matches, complements, or surpasses human intelligence. Artificial Intelligence at Microsoft is about augmenting human abilities and experiences and having humans and machines collaborate as teams in a complementary and trustworthy fashion. In this talk I will present the breadth of AI efforts at Microsoft and the need to build bridges across diverse communities to create new multimodal and interdisciplinary research efforts.

Bio: Evelyne Viegas is the Director of Artificial Intelligence Outreach at Microsoft Research, based in Redmond, U.S.A. In her current role, Evelyne is building initiatives which focus on information seen as an enabler of innovation, working in partnership with universities and government agencies worldwide. In particular she is creating programs around computational intelligence research to drive open innovation and agile experimentation via cloud-based services; and projects to advance the state-of-the-art in artificial intelligence and data-driven research including knowledge representation, machine learning and reasoning under uncertainty at scale.

Nicolas Villar, Microsoft Research

Presentation title: Project Torino: A physical programming language inclusive of blind children

Abstract: Torino is a physical programming language for teaching computational thinking skills and basic programming concepts to children age 7-11, regardless of level of vision. To this end, we followed an iterative design approach to develop and evaluate a novel hardware system that allows children to program through physical manipulation. Intended to promote the acquisition of important computational thinking skills, the technology is designed to be inclusive of children with mixed visual abilities, and to enable learning experiences that are imaginative, engaging and fun.

Bio: Nicolas Villar is a researcher at Microsoft Research, based in Cambridge, UK, where he co-leads the Connected Play group in the Human Experience and Design research area. His work is focused on the design and development of novel technologies, devices and systems that look to improve the experience of interacting and playing with technology, with a particular interest in the use of embedded systems – programmable microcontrollers, wireless communication devices, sensors and actuators – as building blocks in the design of physical interactive objects and devices.

Alex Wade, Microsoft Research

Demo presentation: Microsoft Academic

Presentation title: Natural Language Queries and Auto-Suggest over Knowledge Graphs 

Abstract: In web-scale search, prior user queries are typically used to provide query auto-completion suggestions. This works well for the most common ‘head’ queries, but less well for ‘tail’ queries, and not at all for never before seen queries. The Dialog Engine, developed at Microsoft Research and now deployed as a part of Bing and available as the Knowledge Exploration Service (KES) through Microsoft Cognitive Services, provides a complementary approach. Through the use of domain-defined grammars and efficient graph traversals, the KES system provides interpretations of natural language queries as well as the most likely query completion suggestions and refinements based on the data in the graph.
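
A toy version of graph-driven auto-suggest, shown purely for intuition: given a small in-memory "academic graph" and a partially typed query, enumerate entity completions and rank them by how many publications each completed query would match. The real Knowledge Exploration Service compiles domain grammars and indexes at web scale; none of the names below come from its API.

```python
# Tiny in-memory "academic graph": papers linked to authors and topics (all invented).
papers = [
    {"title": "Deep learning for speech", "author": "ana souza", "topic": "speech recognition"},
    {"title": "Speech translation at scale", "author": "ana souza", "topic": "machine translation"},
    {"title": "Graphs for scholarly search", "author": "bruno lima", "topic": "knowledge graphs"},
    {"title": "Knowledge graph completion", "author": "carla reis", "topic": "knowledge graphs"},
]

def suggest(partial_query, limit=5):
    """Suggest 'author: <name>' / 'topic: <t>' completions for a typed prefix,
    ranked by how many papers each completed query would return."""
    prefix = partial_query.lower().strip()
    counts = {}
    for paper in papers:
        for field in ("author", "topic"):
            completion = f"{field}: {paper[field]}"
            if completion.startswith(prefix) or paper[field].startswith(prefix):
                counts[completion] = counts.get(completion, 0) + 1
    ranked = sorted(counts.items(), key=lambda kv: -kv[1])
    return [f"{q}  ({n} papers)" for q, n in ranked[:limit]]

print(suggest("ana"))       # tail-ish query: completes to the author entity
print(suggest("topic: k"))  # structured prefix: completes to 'knowledge graphs'
```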

Presentation title: Microsoft Academic: New applications and research opportunities – WATCH

Abstract: The creation and use of knowledge graphs for information discovery, question answering, and task completion has exploded in recent years, but their application has often been limited to the most common user scenarios. The benefits of such models of human knowledge have not yet been fully realized within the domain of scholarship and research outputs, and Microsoft Research is determined to change the way that research information is discovered, analyzed, and exploited. The Microsoft Academic Graph is a new entity graph of research publications, authors, venues, organizations, and topics which is now driving new features in Bing, Cortana, and Microsoft Academic. In addition, Microsoft Research has opened up this dataset to the community through new APIs to support further research, experimentation, and development. This talk will highlight how Microsoft is surfacing this information in novel ways, and how the research community can take advantage of these data and APIs to fuel new research opportunities.

Bio: Alex Wade is Director of Scholarly Communications at Microsoft Research, currently focused on Microsoft Academic (involving aspects of knowledge acquisition, knowledge representation, intentionality, dialog systems, semantic search and intelligent agents) and Microsoft Cognitive Services (including the Academic Knowledge API and Knowledge Exploration Service). During his career at Microsoft, Alex has managed Microsoft’s corporate intranet search services, has worked on Windows Search, and has implemented an Open Access policy governing Microsoft Research’s scholarly output.

Nathan Wiebe, Microsoft Research

Presentation title: Quantum Machine Learning

Abstract: Since Richard Feynman first sparked our imagination by proposing a quantum computer, people have wondered whether quantum computers could change the ways we approach learning and inference. In recent years considerable excitement has coalesced around quantum machine learning as a major application for quantum computers, alongside quantum simulation and cryptography. In this tutorial I will address how quantum technologies promise to disrupt the ways in which we approach learning. In particular, I will discuss how they will impact training deep neural networks, regression, clustering, big data problems, and many other areas. This tutorial requires no previous exposure to quantum mechanics or advanced mathematics and aims not only to expose the audience to how these technologies work but also to show how quantum ideas can inspire the development of new classical machine learning algorithms.

Bio: Nathan Wiebe is a researcher at MSR in the Quantum Architectures and Computing (QuArC) group. He is a leading researcher in the field of quantum machine learning and has been responsible for a number of important discoveries, such as quantum algorithms for deep learning, Bayesian inference, and clustering, and has also pioneered the field of quantum Hamiltonian learning. Nathan Wiebe received his PhD in 2011 from the University of Calgary before moving to the University of Waterloo for his postdoctoral work, and has been at Microsoft since 2013. Since then his work has been featured at TechFest, the Microsoft Faculty Summit, and the NIPS workshop on quantum machine learning.

Cha Zhang, Microsoft Research

Demo presentation: Emotion Recognition

Presentation title: Emotion Recognition from Images in the Wild

Abstract: Recognizing people's emotions has many potential applications, including advertising, gaming, autism intervention, and personal assistants. In this talk, I'll present our effort in creating the Emotion API for images in the wild. I will discuss the challenges we faced, how we collected the data, and how we built an algorithm to estimate emotions from images. The Emotion API currently ships as part of Microsoft Cognitive Services.

Bio: Cha Zhang is a Principal Researcher in the Multimedia, Interaction and eXperience Group at Microsoft Research. He received the B.S. and M.S. degrees from Tsinghua University, Beijing, China in 1998 and 2000, respectively, both in Electronic Engineering, and the Ph.D. degree in Electrical and Computer Engineering from Carnegie Mellon University, in 2004. His current research focuses on applying various audio/image/video processing and machine learning techniques to multimedia applications, in particular, multimedia teleconferencing. Dr. Zhang has published more than 80 technical papers and holds 20+ U.S. patents. He won the best paper award at ICME 2007, the top 10% award at MMSP 2009, and the best student paper award at ICME 2010. He currently serves as an Associate Editor for IEEE Trans. on Circuits and Systems for Video Technology, and IEEE Trans. on Multimedia.

Roy Zimmermann, Microsoft Research

Bio: Roy is a director in Microsoft Research. He leads strategic initiatives aimed at strengthening Microsoft's institutional relationships with academia and other organizations. He has worked on education and outreach efforts anchored on state-of-the-art hardware and software programs. Roy has 25 years' experience working in the education, international development, and technology sectors and holds a PhD from UCLA.

Geoffrey Zweig, Microsoft Research

Demo presentation: Microsoft Cognitive Services

Presentation title: High Performance Image Captioning

Abstract: The problem of generating text conditioned on some sort of side information arises in many areas, including dialog systems, machine translation, speech recognition, and image captioning. In this talk, we present a highly effective method for generating text conditioned on a set of words that should be mentioned. We apply this to the problem of image captioning by linking the generation module to a convolutional neural network that predicts a set of words that are descriptive of an image. The system placed first in the 2015 MS COCO captioning competition on the Turing Test measure, and tied for first place overall.

Bio: Geoffrey Zweig is a Partner Research Manager at Microsoft Research, where he leads the Speech & Dialog Research Group. His work centers on developing improved algorithms for speech and language processing. Recent work has focused on applications of side-conditioned recurrent neural network language models, such as image captioning and grapheme to phoneme conversion. Prior to Microsoft, Dr. Zweig managed the Advanced Large Vocabulary Continuous Speech Recognition Group at IBM Research, with a focus on the DARPA EARS and GALE programs. In the course of his career, Dr. Zweig has written several speech recognition trainers and decoders, as well as toolkits for doing speech recognition with segmental conditional random fields, and for maximum entropy language modeling. Dr. Zweig received his PhD from the University of California at Berkeley. He is the author of over 80 papers, numerous patents, is an Associate Editor of Computers Speech & Language, and is a Fellow of the IEEE.

Michael Zyskowski, Microsoft Research

Demo presentation: Project Premonition

Presentation title: Project Premonition: Preventative Monitoring of Infectious Agents

Abstract: Project Premonition seeks to detect pathogens in animals before these pathogens make people sick. It does this by treating a mosquito as a device that can find animals and sample their blood. Project Premonition uses drones and new robotic mosquito traps to capture many more mosquitoes from the environment than previously possible, and then analyzes their body contents for pathogens. Pathogens are detected by gene sequencing collected mosquitoes and computationally searching for known and unknown pathogens in sequenced genetic material.
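
As a toy illustration of the computational search step (not the project’s actual pipeline), the Python sketch below matches k-mers from hypothetical mosquito-sample reads against short, made-up reference snippets of known pathogens.

    # Toy illustration only: flag reference pathogens that share k-mers with the
    # sequenced sample reads (all sequences below are made up).
    def kmers(sequence, k=8):
        return {sequence[i:i + k] for i in range(len(sequence) - k + 1)}

    references = {
        "pathogen_A": "ACGTACGTGGCCTTAAGGCC",
        "pathogen_B": "TTGACCATGCATGCAAGTCC",
    }
    sample_reads = ["GGCCTTAAGGCCACGT", "CCCCCCCCCCCCCCCC"]

    sample_kmers = set().union(*(kmers(read) for read in sample_reads))
    for name, reference in references.items():
        shared = sample_kmers & kmers(reference)
        if shared:
            print(f"{name}: {len(shared)} shared 8-mers (candidate hit)")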

Bio: Mike manages a cross-discipline team of engineers who develop research technologies into scalable, working solutions. He works with academic, industry, and government/NGO collaborators to build partnerships and community ecosystems. Recently he has focused on projects at the confluence of aerospace and computer science engineering, such as Windflow, Premonition, and the Red Bull Air Races. He also leads Research News, an online news aggregation service for the academic research community.

Research Showcase

Demo Madness

Booth 1: Spatial Audio for Augmented & Virtual Reality

Presenter: Ivan Tashev & David Johnston

The exhibit demonstrates the advantages that spatial audio can provide for augmented and virtual reality scenarios, such as gaming, entertainment, and virtual presence. While human vision has a limited field of view (which is further restricted by the device itself), humans can hear and locate sound sources coming from all directions (the full 4 pi steradians). We will demonstrate the ability of spatial audio to complement vision and enhance the overall experience for AR/VR users. During the demo, attendees can wear an AR/VR device and play a short interactive game with spatial sound, vision, gesture, and voice, or look and listen around selected places where we have recorded 3D video and audio.

Booth 2: Microsoft Academic

Presenter: Alex Wade

Microsoft Academic is a free search and discovery service for scholarly publications, built on the Microsoft Academic Graph, a large graph of papers, authors, institutions, journals, conferences, and fields of study. The booth demonstrates how researchers can explore the scholarly literature through semantic search over this graph.

Booth 3: Microsoft Cognitive Services

Presenter: Geoff Zweig

Microsoft Cognitive Services let you build apps with powerful algorithms using just a few lines of code. They work across devices and platforms such as iOS, Android, and Windows, keep improving, and are easy to set up. These new APIs span areas of Vision, Speech, Language, Knowledge, and Search. The APIs in the Knowledge area enable developers to build semantic search features into their applications based upon custom content and domain-specific grammars. Come learn how to leverage the Entity Linking Intelligent Service (ELIS) to recognize and identify each separate entity in your text based on the context. The Knowledge Exploration Service (KES) can be used to add semantic search capabilities to your applications using data, schema, and domain-specific grammars defined by you.

Booth 4: Emotion Recognition

Presenter: Cha Zhang

We demonstrate a real-time system that recognizes people’s emotions in a crowd. Such a system can be useful in advertising, education, medical applications, and more.

Booth 5: Microsoft Translator

Presenter: Will Lewis

Microsoft Translator builds on decades of natural language processing, machine learning, and deep learning research to help break down language barriers. Its phone apps allow you to translate text, images, and even full conversations. Microsoft Translator’s speech translation service enables Skype users, through Skype Translator, to converse in their native language with other Skype users speaking in theirs. Furthermore, the Microsoft Translator API exposes the text and speech translation features to anyone interested in building tools or apps that need text and speech translation. Wherever you use Microsoft Translator, thanks to the power of machine learning, it will continue to improve over time as more people use it across apps, services, and devices.
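
As an illustration of calling the Translator API from code, a minimal Python sketch follows; the endpoint and API version shown reflect a later public release of the Translator Text REST API, and the subscription key and region are placeholders.

    import requests

    # Placeholder key/region; the endpoint and version follow a later public
    # release of the Translator Text REST API and are shown only as an illustration.
    URL = "https://api.cognitive.microsofttranslator.com/translate"
    PARAMS = {"api-version": "3.0", "to": "en"}
    HEADERS = {
        "Ocp-Apim-Subscription-Key": "<your-subscription-key>",
        "Ocp-Apim-Subscription-Region": "<your-region>",
        "Content-Type": "application/json",
    }
    body = [{"Text": "Quebrar barreiras de idioma é o objetivo."}]

    response = requests.post(URL, params=PARAMS, headers=HEADERS, json=body)
    response.raise_for_status()
    print(response.json()[0]["translations"][0]["text"])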

Booth 6: NUI Graph

Presenter: Dave Brown

NUIgraph is a prototype Windows 10 app for visually exploring data in order to discover and share insights. The app is designed for touch interaction; however, a mouse can also be used. Data can be loaded from .csv files (for example, exported from Excel). Once loaded, each row in the data is represented by a block on the screen. Blocks can be flexibly mapped to position, color, and size using the columns in the data, or arranged into stacks. In this way, multi-dimensional data can be explored to find patterns, which may lead to new insights.
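
The row-to-block mapping described above can be sketched in a few lines of Python (illustrative only, not the NUIgraph app itself); the CSV file and column names below are hypothetical.

    # Illustrative only: map each CSV row to a "block" whose position, size, and
    # colour come from chosen columns (file and column names are hypothetical).
    import pandas as pd
    import matplotlib.pyplot as plt

    df = pd.read_csv("data.csv")
    plt.scatter(
        df["gdp_per_capita"],                          # x position
        df["life_expectancy"],                         # y position
        s=df["population"] / 1e6,                      # marker size
        c=df["region"].astype("category").cat.codes,   # colour by category
        alpha=0.7,
    )
    plt.xlabel("gdp_per_capita")
    plt.ylabel("life_expectancy")
    plt.show()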

Booth 7: Project Malmo

Presenter: Katja Hofmann

Project Malmo allows computer scientists to use the world of Minecraft as a testing ground for conducting research designed to improve artificial intelligence.
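
A minimal agent written against Malmo’s Python bindings might look like the sketch below; the class and method names follow the project’s Python examples as I recall them, so treat them as approximate rather than definitive.

    import time
    import MalmoPython  # Python bindings distributed with Project Malmo

    agent_host = MalmoPython.AgentHost()
    mission = MalmoPython.MissionSpec()           # default mission when no XML is given
    record = MalmoPython.MissionRecordSpec()      # no recording
    agent_host.startMission(mission, record)

    # Wait for the mission to begin, then keep walking forward while it runs.
    while not agent_host.getWorldState().has_mission_begun:
        time.sleep(0.1)
    while agent_host.getWorldState().is_mission_running:
        agent_host.sendCommand("move 1")
        time.sleep(0.5)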

Booth 8: Project Premonition

Presenter: Mike Zyskowski

Project Premonition seeks to detect pathogens in animals before these pathogens make people sick. It does this by treating a mosquito as a device that can find animals and sample their blood. Project Premonition uses drones and new robotic mosquito traps to capture many more mosquitoes from the environment than previously possible, and then analyzes their body contents for pathogens. Pathogens are detected by gene sequencing collected mosquitoes and computationally searching for known and unknown pathogens in sequenced genetic material.

Booth 9: Audio and Video Processing at the SMT Lab, UFRJ

Presenters: Sergio Lima Netto with Lucas Maia and José Fernando L. de Oliveira

Two groups from the Signal, Multimedia, and Telecommunication Lab will demonstrate their most recent research.

The Audio Processing Group will demonstrate applications related to its main research interests, including: audio signal modelling; automatic audio quality assessment; automatic music transcription; music information retrieval; sound source/sensor localization; sound source separation; audio coding; audio restoration; digital audio effects; singing voice processing; algorithmic music composition; and binaural generation of 3D sound.

The Image Processing Group will demonstrate a system for the detection of abandoned objects using a moving camera. The system operates in an industrial environment and compares a reference signal, previously validated by the system operator, with the newly acquired (target) video. Anomalies (detected objects) are associated with image discrepancies in consecutive video frames. Solutions for operating the system in real time are proposed and discussed.
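
A heavily simplified illustration of the reference-versus-target comparison, using two already-registered still frames and OpenCV (not the SMT Lab system, which handles a moving camera and full video), is sketched below; the image file names are hypothetical.

    # Simplified illustration only: compare a validated reference frame against a
    # newly acquired target frame and report large discrepancies (file names are
    # hypothetical; OpenCV 4.x return conventions).
    import cv2

    reference = cv2.imread("reference_frame.png", cv2.IMREAD_GRAYSCALE)
    target = cv2.imread("target_frame.png", cv2.IMREAD_GRAYSCALE)

    diff = cv2.absdiff(reference, target)
    _, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # suppress small noise

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        if cv2.contourArea(contour) > 500:                   # ignore tiny discrepancies
            x, y, w, h = cv2.boundingRect(contour)
            print(f"possible abandoned object at x={x}, y={y}, size={w}x{h}")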

Booth 10: Micro:bit

Presenter: Jonathan Protzenko

The BBC micro:bit is a small programmable device half the size of a credit card; it features 25 LEDs, buttons, an accelerometer, a compass, and Bluetooth capabilities. The device has been handed out for free to a million kids between 11 and 12 years old in the UK; Microsoft provided the programming environment, based on TouchDevelop. I will talk about the device, demo the programming environment, and discuss the global “CS literacy” trend, wherein more and more countries emphasize CS education.
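
For a flavour of how small micro:bit programs can be, here is a short MicroPython snippet; MicroPython is one of the languages the device supports, whereas the TouchDevelop-based environment mentioned above uses its own block/script syntax.

    # MicroPython on the BBC micro:bit: react to a button press or a shake gesture.
    from microbit import display, button_a, accelerometer, Image

    while True:
        if button_a.is_pressed():
            display.show(Image.HAPPY)
        elif accelerometer.was_gesture("shake"):
            display.scroll("Ola, Rio!")
        else:
            display.show(Image.HEART)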

Booth 11: Machine Learning for non-experts: Platform for Interactive Concept Learning (PICL)

Presenter: Carlos Garcia Jurado Suarez

Building classifiers and entity extractors is not new. The efficacy of current approaches, though, is limited by the scarcity of machine-learning experts and programmers and by the complexity of the tasks. The Platform for Interactive Concept Learning (PICL) enables interactive, iterative machine learning with big data for non-experts. PICL makes it easy to build classifiers and extractors in hours: users label a few examples, add features, and verify system predictions. The ability to produce thousands of high-quality classifiers and extractors can be valuable for applications such as search, advertising, email, and mobile.
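
The label-train-verify loop can be illustrated with a small scikit-learn sketch (not PICL itself); the documents, seed labels, and the simulated user response below are all made up.

    # Illustrative interactive concept-learning loop (not PICL): label a few
    # examples, train, and surface the most uncertain item for the user to verify.
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    documents = [
        "invoice attached please remit payment",
        "team lunch scheduled for friday",
        "your payment is overdue",
        "minutes from the project meeting",
        "receipt for your recent purchase",
        "happy birthday from all of us",
    ]
    labels = {0: 1, 1: 0}                     # seed labels: 1 = "billing", 0 = "other"

    X = TfidfVectorizer().fit_transform(documents)

    for _ in range(3):                        # a few interactive rounds
        labeled = sorted(labels)
        clf = LogisticRegression().fit(X[labeled], [labels[i] for i in labeled])
        unlabeled = [i for i in range(len(documents)) if i not in labels]
        if not unlabeled:
            break
        probabilities = clf.predict_proba(X[unlabeled])[:, 1]
        query = unlabeled[int(np.argmin(np.abs(probabilities - 0.5)))]
        print("Please verify/label:", documents[query])
        # Simulate the user's answer with a crude keyword rule.
        labels[query] = int("payment" in documents[query] or "receipt" in documents[query])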

Booth 12: Real-time Event Detection in Video

Presenter: Leonardo Nunes

Real-time event detection in video streams will be shown for different scenarios, including urban mobility and public safety. The demonstration will highlight how lightweight the proposed solutions are, as well as their ability to run as an Azure service processing several video streams.
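
A minimal motion-based event detector (purely illustrative; not the solutions being demonstrated) could look like the OpenCV sketch below, with a hypothetical input video file.

    # Illustrative only: flag frames where the fraction of moving pixels exceeds a
    # threshold, using background subtraction (input file name is hypothetical).
    import cv2

    capture = cv2.VideoCapture("street.mp4")
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

    frame_id = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        motion_ratio = cv2.countNonZero(mask) / mask.size
        if motion_ratio > 0.05:               # tune per scene
            print(f"event candidate at frame {frame_id}: {motion_ratio:.1%} pixels moving")
        frame_id += 1
    capture.release()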

Booth 13: EchoSense Project

Presenter: Ricardo Sabedra and Witallo Oliveira

The EchoSense project is a wearable device designed to assist people with visual impairments with mobility and the development of their spatial sense. The project was chosen as one of the Brazilian national finalists in the Innovation category of the Microsoft Imagine Cup. The device uses sensors and vibration motors to provide tactile information to users, enabling them to perceive precisely where obstacles are without needing to touch them.

Booth 14: Ability Eye Gaze

Presenter: Jon Campbell

The MSR Enable group focuses on creating technologies to help restore capabilities to people living with disabilities. With a specific focus on ALS (also known as MND, or Lou Gehrig’s Disease), our team is producing advancements in the areas of natural communication and independent mobility.

Booth 15: Open Source Software

Presenter: Judith Bishop

Open source is a powerful way of advancing software development. Microsoft has open-sourced more than fifty cutting-edge research projects, as well as key software such as .NET, and has catalogued this software especially for academics. Many of the projects are cross-platform and also run in browser versions. We’ll demonstrate how to find the software best suited to your research and teaching needs.

Booth 16: Project Melange: Translating Code-mixed Tweets

Presenter: Sunayana Sitaram

Code-mixing is the alternation between two or more languages at the sentence, phrase, word, or morpheme level, and is prevalent in multilingual societies all over the world. We demonstrate a system for the machine translation of code-mixed text in several languages. We first perform word-level language detection and matrix-language identification. We then use this information, together with an existing translator, to translate code-mixed tweets into a language of the user’s choice.
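
A toy sketch of the first two pipeline stages (word-level language tagging and matrix-language identification) is shown below; the tiny word lists and example tweet are made up, and the real system uses learned models rather than lexicon lookups.

    # Toy sketch only: tag each word with a language and pick the matrix language
    # by majority vote, before handing off to an existing translator.
    HINDI_WORDS = {"kya", "hai", "nahi", "bahut"}
    ENGLISH_WORDS = {"the", "is", "movie", "good", "very"}

    def tag_words(tweet):
        tags = []
        for word in tweet.lower().split():
            if word in HINDI_WORDS:
                tags.append((word, "hi"))
            elif word in ENGLISH_WORDS:
                tags.append((word, "en"))
            else:
                tags.append((word, "unk"))
        return tags

    def matrix_language(tags):
        counts = {}
        for _, language in tags:
            if language != "unk":
                counts[language] = counts.get(language, 0) + 1
        return max(counts, key=counts.get) if counts else "unk"

    tags = tag_words("movie bahut good hai")
    print(tags, "-> matrix language:", matrix_language(tags))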

Booth 17: Interaction Through Hand Tracking

Presenter: Jonathan Taylor

How would truly robust and accurate hand-tracking technology transform the way we interact with our devices? Take a glimpse of such a future through a number of exciting new user experiences. See your hands appear as avatars, allowing you to play a virtual piano or interact with virtual objects as if they were physical.

Co-located Workshop

Azure for Research Training

May 17, 2016 | Hilton Barra Rio de Janeiro, Brazil

This hands-on lab is offered to university students who are using large data sets to conduct research. It will help participants gain an understanding of cloud computing with Microsoft Azure at scale, including an overview of services that enable powerful predictive analytics in any research domain involving cloud-based data analytics. The one-day training will be delivered through a scenario-based program. Registration for the workshop is by invitation only.

Training details

  • Please bring your own laptop. You will be able to access Microsoft Azure on your own laptop during the training and, for evaluation purposes, for up to one month after the event. Your laptop does not need to have the Windows operating system installed—Microsoft Azure is accessed via your Internet browser, so any operating system will be compatible.
  • This course is suitable for research-oriented students using any language, framework, or platform, including Linux, Python, R, MATLAB, Java, Hadoop, Storm, Spark, and Microsoft technologies such as C#, F#, Microsoft .NET, Microsoft Azure SQL Database, and various Microsoft Azure services.
  • Some basic exposure to cloud computing is helpful but not required; you will leave this one-day class with a deeper understanding of cloud computing.
  • Learning outcomes include:
      • Gaining an understanding of cloud computing and why and when you would use it in scientific or other research
      • Acquiring hands-on experience in the major design patterns for successful cloud applications
      • Developing the skills to run your own application/services on Microsoft Azure

Participants will also receive information about applying for Azure for Research computing awards.

Information for attendees

Date: May 17, 2016
Time: 9:00 A.M. to 5:00 P.M. (registration opens at 8:30 A.M.; class begins at 9:00 A.M.)
Location: Hilton Barra Rio de Janeiro Hotel
Building/room: Nogueira I (Lower Lobby)
Address: 1430 Abelardo Bueno Avenue, Rio de Janeiro, RJ

Microsoft will provide lunch. You will need to make your own travel arrangements to arrive at the Hilton Barra by 8:30 A.M. for check-in.

On Demand

Watch the 2016 Latin American Faculty Summit sessions on demand