Faculty Summit 2010

Microsoft Research Faculty Summit 2010

The eleventh annual Microsoft Research Faculty Summit brought together more than 400 thought leaders from academia, government, and Microsoft to reflect on how current computing disciplines open new opportunities for research and development. Faculty Summit 2010 investigated compelling research topics such as Architectures of the Future, Natural User Interaction, Future Web / Web 4.0, and Accelerating Science.

News Highlights

Design Expo 2010: Where Service Meets Social
Student teams from six top graduate design institutions showcased their prototype interaction-design ideas in this year’s Microsoft Research Design Expo. The teams’ design solutions reflect the theme “Service meets Social,” which focuses on the marriage of exceptional process and ideas.

Faculty Fellows for 2010
Microsoft Research recognizes seven new outstanding faculty members as the 2010 Faculty Fellows. These individuals were nominated by their universities and represent a selection of the best and brightest in their fields. The selected professors are exploring breakthrough, high-impact research that has the potential to help solve some of today’s most challenging problems.

WorldWide Telescope Reveals Mars in 3-D
The latest version of the WorldWide Telescope has unveiled the largest seamless spherical map ever made of the night sky. Through an ongoing collaboration with NASA, Microsoft has also released the most complete pole-to-pole coverage of Mars images available, allowing WorldWide Telescope users to experience Mars in 3-D.

Embracing Complexity

About Microsoft Research Faculty Summit 2010

Worldwide, computing power is dramatically transforming our experiences in many domains. In the sciences, huge data sets help us to understand global climate change and devise mitigating strategies. However, abundant data is virtually useless without sophisticated analysis. We must therefore develop advanced algorithms and program complex next-generation multiprocessors to run them.

The accumulation of vast amounts of knowledge is fundamentally transforming problem solving. Lack of documentation is no longer the obstacle—more is now published than can ever be catalogued—even data curation is now a computational science. As we become more connected to our computers, online services quickly evolve to serve our needs while search is transforming into a much more meaningful, contextually-based form of information retrieval.

The Microsoft Research Faculty Summit 2010, through lively, creative, and open discourse, investigated these as well as a number of other compelling research topics, such as:

  • Architectures of the Future. As computing takes place increasingly in the cloud, we connect via the Internet to vast online data centers. How should we design these? How do we keep them secure? How do we actually program in the cloud? Does this require a new type of software engineering?
  • Natural User Interaction. Emerging technologies will help users manage complexity and interact with computers more intuitively. Advances in vision and perception, gesture and other interaction modalities, real-time natural language processing, and integrative intelligence are leading to systems that anticipate user intent rather than just react.
  • Future Web / Web 4.0. The web has fundamentally transformed the infrastructure of information exchange by democratizing the semantics of information and social connections. How do we innovate to synthesize knowledge from information? Can interconnected scale and ubiquity, coupled with immense computing power, achieve this? Is it time to embrace intelligent and trusted agents at scale?
  • The Challenge of Large Data. To embrace the sweeping changes affecting technical productivity, we will need innovative new platforms for Environmental Science, Astronomy, and nearly every other discipline. Advanced new platforms and visualization tools are now essential for effectively and intelligently processing and analyzing the enormous amounts of data available to all the sciences.

Background on the Microsoft Research Faculty Summit

Each year, Microsoft Research hosts a faculty research summit. Leading academic researchers and educators join with Microsoft researchers to explore the latest research results, collectively discuss the challenges faced by the community, search for the best approaches to meeting those challenges, and identify new research opportunities. The participants’ range of interests and the breadth of the technical areas covered in the program ensure a unique experience and provide a venue for meeting with colleagues and friends across the full range of the computing disciplines.

Agenda

Monday, July 12

Time Event/Topic Location
8:00–9:00
Breakfast Hood
9:00–10:30
Opening Plenary Sessions Kodiak
9:00–9:30
Welcome and Introduction | slides

Tony Hey, Corporate Vice President, Microsoft Research

9:30–10:30
Kinect for Xbox 360—The Innovation Journey | slides
Andrew Fitzgibbon, Principal Researcher, Microsoft Research, Cambridge; Kudo Tsunoda, Creative Director – Kinect, Microsoft

Kinect brings games and entertainment to life in extraordinary new ways—no controller required. Easy to use and instantly fun, Kinect (formerly known as “Project Natal”) gets everyone off the couch. Want to join a friend in the fun? Simply jump in. And the best part is Kinect works with every Xbox 360. When technology becomes invisible and intuitive, something special happens—you and your experience become one. No barriers, no boundaries, no gadgets, no gizmos, no learning curves. With Kinect you are the controller. It’s just the magic of you—your movement, your voice, your face, all effortlessly, naturally, and beautifully transforming how you play and experience entertainment. This session introduced Kinect and explained the technology behind it.

10:30–10:45
Break
10:45–12:15
Breakout Sessions
Natural User Interaction
Visualization and Interaction Today—Selected Perspectives

Session Chair: Mary Czerwinski, Microsoft Research

Rob DeLine, Microsoft Research
Steven Drucker, Microsoft Research
Danyel Fisher, Microsoft Research | slides
Jeffrey Heer, Stanford University
George Robertson, Microsoft Research | slides

Information visualization lets users make sense of data visually, and applies across fields and areas. We invited five researchers to discuss current work in information visualization: Rob DeLine (MSR) discussed applying visualization to source code, and Danyel Fisher (MSR) discussed visualization in a NUI context. George Robertson (MSR) gave an overview of how animation and data visualization can work together. Jeff Heer (Stanford) discussed the Protovis toolkit, and Steven Drucker (MSR) discussed the WebCharts toolkit. Together, these talks gave an overview of the newest work in visualization and the broad applicability of its techniques, and provided starting points for researchers and practitioners who might apply visualization to their own projects.

Cascade
Data-Driven Software Engineering
Software Ecosystems: A New Research Agenda
Session Chair: Judith Bishop, Microsoft Research

Anthony Finkelstein, University College London | slides
Fred Wurden, Microsoft | slides

The software development scene is transforming from unitary systems, through component marketplaces, to supply chains, and now to increasingly complex ecosystems of interoperating systems, services, and environments held together by networks of partnerships and commercial relationships. Anthony Finkelstein set out a research agenda for work in this new setting; in particular, it calls for empirical research and suggests some ways in which such research can be conducted, and some early data was discussed. Fred Wurden followed with an overview of the recent efforts of more than 500 engineers at Microsoft aimed directly at increasing the interoperability of Windows with open source and commercial software ecosystems.

Rainier
Future Web: Intelligence, Ubiquity, and Trust
Bing Dialog Model: Intent, Knowledge, and User Interaction | slides
Session Chair: Evelyne Viegas, Microsoft Research
Yu-Ting Kuo, Microsoft; Harry Shum, Microsoft; Kuansan Wang, Microsoft Research

With Internet users growing ever more sophisticated, the decade-old search outcomes, manifested in the “ten blue links,” are no longer sufficient. Many studies have shown that when users are ushered off the conventional search result pages, their needs are often only partially met. To tackle this challenge, we optimize Bing, Microsoft’s decision engine, to not just navigate users to a landing page through a blue link but to continue engaging with users to facilitate task completion. Underlying this new paradigm is the Bing Dialog Model, which consists of three building blocks: an indexing system that systematically and comprehensively harvests task knowledge from the web, an intent model that statistically infers and matches users’ needs to the task knowledge, and an interaction model that elicits user intents through mathematically optimized presentations. In this talk, I’ll describe the Bing Dialog Model in detail and demonstrate it in action through the innovative features recently introduced in Bing.

St. Helens
The Challenge of Large Data
Environmental Data Management
Deb Agarwal, University of California, Berkeley | slides

Environmental scientists have been building rich networks of measurement sites that span a wide range of ecosystems and environmental conditions. Each measurement site is put in place by a science team to pursue specific science goals. These science teams now also work together to contribute their data to national and international research networks. This data, once brought together, has the potential to enable studies at spatial and temporal scales that are not possible at a single site. It also has the potential to allow researchers to discern large-scale patterns and disturbances in the combined data. The challenge in bringing environmental data together into a common data set for researchers to use is one of heterogeneity, not scale. Informatics is critical to managing, curating, and archiving these data for the future, making them accessible in a form in which they can be used and interpreted accurately, and producing answers to questions from a community of researchers, policy makers, and educators. Some of the networks addressing this challenge include the Long Term Ecological Research Network, the FLUXNET network, and the National Soil Carbon Network. This talk explored the challenges involved in managing environmental data and in developing informatics infrastructure to enable researchers to easily access and use regional- and global-scale data to address large-scale questions such as climate change, and to publish the analysis results.

William Michener, University of New Mexico | slides

Large data do not currently present a central challenge for the environmental sciences. Instead, the big challenges lie in discovering data, dealing with extreme data heterogeneity, and converting data to information and knowledge. Addressing these challenges requires new approaches for managing, preserving, analyzing, and sharing data. In this talk, I first introduce DataONE (Data Observation Network for Earth), which represents a new virtual organization that will enable new science and knowledge creation through universal access to data about life on earth and the environment that sustains it. DataONE is poised to be the foundation of innovative environmental science through a distributed framework and sustainable cyberinfrastructure that meets the needs of science and society for open, persistent, robust, and secure access to well-described and easily discovered Earth observational data. Second, I briefly summarize several cyberinfrastructure (CI) challenges related to metadata creation, data provenance, and scientific and visualization workflows that impede science. Finally, I relate these and other CI challenges to three specific case studies: (1) understanding the world’s biodiversity, (2) conserving elephants in Africa, and (3) assessing environmental risk.

Baker
Special Topics
Cutting Edge Education Update
Jesse Schell, Carnegie Mellon University | slides
Andrew Phelps, Rochester Institute of Technology | slides

Jesse Schell talked about exciting trends in both technology and culture that define the 21st century: things are becoming more beautiful, more customized, more shared, and more authentic. But is the same true for education? Jesse showed examples of how it is beginning to happen (such as achievement systems, connecting with Andy’s talk), but noted that there is a long way to go and that, to fully realize this vision, a significant technology-driven educational revolution is necessary.

Andrew Phelps discussed curricular trends in the Department of Interactive Games & Media at RIT, and in particular a set of initiatives that the department is planning around achievement systems, social networks, and student culture. Game design offers us lessons in the success of such systems in a certain context, but also comes replete with dramatic failures, relevant warnings, and a few emerging best practices. Can these systems be utilized in an educational setting, and even if they can, should they? Could these tools be utilized towards goals of curricular customization and student engagement? This portion of the talk focused on the preparations, planning, and thoughts of the IGM faculty as they begin to establish a research agenda in this space—what can we hope to learn?

After these brief presentations, Jesse and Andy engaged the audience in further discussion about how technology, design, and cultural changes can best be applied to the future of education.

Lassen
12:15–1:15
Lunch
Hood
12:15–1:15 Lunchtime Sessions
Design Mind + Engineering Mind: Secrets to Designing Compelling Product Experiences | slides
Surya Vanka, Microsoft

Today, the nature of products is changing fast. Most products from phones to appliances to automobiles are a combination of software and services encased in hardware. These products are novel, dynamic, and content-laden. Their interaction often spans multiple platforms (hardware, application, web), multiple form factors (desktop, mobile, television) and multiple interfaces (keyboard, pointer, voice, touch, gesture). How do you make sure that it is real human needs and not the multitude of technologies that shape the experience of these products? In this talk, Surya Vanka described how breakthrough product experiences are created at Microsoft by employing a combination of the design mind and the engineering mind. The design mind’s ability to leapfrog established patterns and paradigms, and the engineering mind’s ability to optimize and actualize, are the foundations of great individual and team processes. Surya shared principles, practices, collaborations, and thoughts on organizational culture.

Rainier
Memento: Time Travel for the Web | slides
Michael L. Nelson, Old Dominion University

The web is ephemeral. Many resources have representations that change over time, and many of those representations are lost forever. A lucky few manage to reappear as archived resources that carry their own URIs. For example, some content management systems maintain version pages that reflect a frozen prior state of their changing resources. Archives recurrently crawl the web to obtain the actual representation of resources, and subsequently make those available via special-purpose archived resources. In both cases, the archival copies have URIs that are protocol-wise disconnected from the URI of the resource whose prior state they represent. Indeed, the lack of temporal capabilities in HTTP prevents getting to an archived resource on the basis of the URI of its original. This turns accessing archived resources into a significant discovery challenge for both human and software agents, which typically involves following a multitude of links from the original to the archival resource, or searching archives for the original URI. The Memento solution is based on existing HTTP capabilities applied in a novel way to add the temporal dimension. The result is an inter-archive framework in which archived resources can seamlessly be reached via their original URI: protocol-based time travel for the web.
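As a rough sketch of how this datetime negotiation looks on the wire, the Python snippet below asks a TimeGate for a snapshot of a page near a given date. The Internet Archive endpoint and the exact header names are assumptions drawn from the Memento proposal, not details given in the talk.

```python
# Minimal sketch of Memento-style datetime negotiation.
# Assumes the Internet Archive exposes a TimeGate at this prefix; any
# Memento-compliant TimeGate would behave similarly.
import requests

TIMEGATE = "https://web.archive.org/web/"      # assumed TimeGate prefix
original = "http://research.microsoft.com/"    # URI we want to time travel to

resp = requests.get(
    TIMEGATE + original,
    headers={"Accept-Datetime": "Mon, 12 Jul 2010 00:00:00 GMT"},
    allow_redirects=True,
)
# The TimeGate redirects to a "memento": an archived snapshot near the
# requested datetime, identified by its own URI.
print(resp.url)                                # URI of the archived snapshot
print(resp.headers.get("Memento-Datetime"))    # archival timestamp, if sent
```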

Lassen
1:15–2:45
Breakout Sessions
Natural User Interaction
Beneath the Surface—Introduction | slides
Daniel Wigdor, Microsoft

The notion of a “Natural” user interface is a well-defined design goal, which can be targeted, built toward, tested, and used to evaluate designs and enable iteration. In this presentation, I describe some of the misunderstandings of this goal, with a particular emphasis on the notion that it is a means rather than a goal for design. I also describe some of the work done on the Surface team to better define it and utilize it as a tool to achieve good design.

Beneath the Surface Projects | slides
Mark Bolas, University of Southern California; Steve Feiner, Columbia University

Professor Feiner presented recent work performed by the Computer Graphics and User Interfaces Lab at Columbia University for the Beneath the Surface project sponsored by Microsoft Research.

Professor Bolas presented recent work performed by the USC Interactive Media Division for the Beneath the Surface project, with focus toward Creative Production Environments, sponsored by Microsoft Research.

Cascade
Data-Driven Software Engineering
Code Contracts and Pex: Infrastructure for Dynamic and Static Analysis for .NET | slides
Session Chair: Tom Ball, Microsoft Research
Mike Barnett, Microsoft Research; Christoph Csallner, University of Texas at Arlington; Peli de Halleux, Microsoft Research

We present two complementary platforms for teaching and research involving the static and dynamic analysis of .NET programs. Code Contracts is a platform that provides a standardized format for expressing program specifications. Tools using the CCI infrastructure utilize the contracts for performing runtime verification, static analysis, and documentation generation. Pex is a platform for dynamic symbolic execution. On top of Pex, tools have been created that do advanced test case generation, reverse engineering, data structure repair, and database testing. Both platforms are extensible and can be leveraged by researchers to build their own tools.
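Code Contracts itself is a .NET library, so its API is not reproduced here; purely as a language-neutral sketch of the underlying idea (runtime-checked pre- and postconditions), here is a minimal Python analogue with hypothetical requires/ensures decorators.

```python
# Illustrative sketch of contract checking in the spirit of Code Contracts.
# The decorators below are hypothetical, not part of any shipped library.
import functools

def requires(pred, msg="precondition violated"):
    """Check a precondition over the arguments before the call."""
    def deco(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            assert pred(*args, **kwargs), msg
            return f(*args, **kwargs)
        return wrapper
    return deco

def ensures(pred, msg="postcondition violated"):
    """Check a postcondition over the result after the call."""
    def deco(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            result = f(*args, **kwargs)
            assert pred(result), msg
            return result
        return wrapper
    return deco

@requires(lambda xs: len(xs) > 0, "input must be non-empty")
@ensures(lambda r: r >= 0, "result must be non-negative")
def count_positives(xs):
    return sum(1 for x in xs if x > 0)

print(count_positives([3, -1, 4]))  # 2; count_positives([]) would fail fast
```

A static analyzer or test generator (the roles Code Contracts and Pex play) can consume exactly this kind of specification to verify code or to synthesize inputs that violate it.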

Rainier
Future Web: Intelligence, Ubiquity, and Trust
Privacy and Trust in Future Web | slides
Session Chair: Evelyne Viegas, Microsoft Research

This session describes work on providing privacy guarantees for dynamically changing data and addresses how to deliver end-to-end trust.

Privacy of Dynamic Data: Continual Observation and Pan Privacy | slides
Moni Naor, Weizmann Institute of Science

Research in the area of privacy of data analysis has been flourishing recently, with the rigorous notion of differential privacy defining the desired level of privacy as well as sanitizing algorithms matching the definition for many problems. Most of the work in the area assumes that the data to be sanitized is fixed. However, many applications of data analysis involve computations of changing data, either because the entire goal is one of monitoring, e.g., of traffic conditions, search trends, or incidence of influenza, or because the goal is some kind of adaptive optimization, e.g., placement of data to minimize access costs. Issues that arise when providing guarantees for dynamically changing data include:

  • How to provide privacy even when the algorithm has to constantly output the current value of some function of the data (Continual Observation).
  • How to assure privacy even when the internal state of the sanitizer may be leaked. This is called Pan Privacy. We aim to design algorithms that never store sensitive information about individuals, so in particular collectors of confidential data cannot be pressured to permit data to be used for purposes other than that for which they were collected.

(Based on joint papers with Cynthia Dwork, Toni Pitassi, Guy Rothblum, and Sergey Yekhanin.)
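As a minimal sketch of the continual-observation setting, the snippet below releases a running count while noising each increment. Because each individual’s bit touches only one increment, the entire output stream is differentially private, at the cost of error growing with the stream length; the tree-based mechanisms in this line of work achieve much smaller, polylogarithmic error. This is an illustrative simplification under assumed parameters, not the algorithm from the talk.

```python
# Differentially private counting under continual observation (naive variant):
# add fresh Laplace(1/eps) noise to every increment and publish running sums.
# Publishing all noisy increments is eps-DP (one person changes one increment
# by at most 1), and the running sums are mere post-processing.
import numpy as np

rng = np.random.default_rng(0)
eps = 1.0
stream = rng.integers(0, 2, size=1000)        # one bit arriving per time step

noisy = stream + rng.laplace(0.0, 1.0 / eps, size=stream.size)
released = np.cumsum(noisy)                   # count published at each step

print("true final count:", int(stream.sum()))
print("released final count:", round(float(released[-1]), 1))
```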

Delivering End to End Trust: Challenges, Approaches, Opportunities | slides
Jeffrey Friedberg, Microsoft

As people, businesses, and governments connect online, new valuable targets are created, spawning greater cybercrime. How can we reduce the risk and increase accountability while preserving other values we cherish, such as personal freedoms and anonymity? What building blocks are needed? What are the biggest gaps? The latest efforts to address this challenge were discussed. Research into improving the usability of privacy and security features was also presented.

St. Helens
The Challenge of Large Data
Dataset Citation, Curation, and Management

Merce Crosas, Harvard University
Liz Lyon, UKOLN/University of Bath | slides
John Wilbanks, Science Commons | slides

The Dataverse Network | slides
Merce Crosas, Harvard University

The Dataverse Network is an open-source web application which offers a free and flexible framework for dataset citation, curation and management. This talk presents a series of examples showing how an individual dataverse can be used by researchers, journals, archives and others who produce or organize data. In particular, a dataverse increases scholarly recognition, controls distribution of datasets, secures formal citations for data, provides legal protection and ensures long-term preservation.

UK Digital Curation Centre: Enabling Research Data Management at the Coalface—Liz Lyon, UKOLN/University of Bath

The UK Digital Curation Centre (DCC) is providing advocacy, guidance, and tools for research data management to the UK higher education community, as well as running a portfolio of R&D projects to understand data curation challenges at the coalface. This session looked at three DCC data exemplars: 1) data citation of complex predictive network models of disease, 2) crystallography data flows across institutional borders from laboratory to synchrotron, and 3) emerging data management planning tools for institutions and faculty.

John Wilbanks, Science Commons

Scientific research has so far shown significant resistance to adopting the kinds of “generative” effects we’ve seen in networks and culture. Most of the resistance is systemic – emerging from the institutions that host research, the cultures of scientific publication and reward, the lack of infrastructures to make data and tools easy to transfer and master, and the trend towards micro-specialization of disciplines. However, some interventions from the cultural and software world can be “localized” to create an increased tendency towards generativity, and there is evidence of early success. Now it’s important to begin questioning the interventions and analyzing the potential for the “stall” that can follow a generative system’s emergence, particularly in the interim phase between the sharing of data and the deployment of the infrastructure that makes sharing as powerful as web browsing.

Baker
Special Topics
Computational Science Research in Latin America | slides
Session Chair: Jaime Puente, Microsoft Research

Carlos Alfredo Joly, Universidade Estadual de Campinas; Ricardo Vencio, Universidade de São Paulo; Celso Von Randow, Instituto Nacional de Pesquisas Espaciais

SinBIOTA 2.0: New Challenges for a Biodiversity Information System—Carlos Alfredo Joly, Universidade Estadual de Campinas

In the last three decades, many initiatives have been developed aiming to fill gaps in global knowledge about biodiversity and to facilitate access to data. The Catalogue of Life, which is becoming a comprehensive catalogue of all known species of organisms on Earth, now has 1.1 million species registered, and GBIF, the Global Biodiversity Information Facility, has so far provided access to 190 million species occurrence records. However, these global initiatives lack specific tools and applications to assist decision makers, and the current rate of species extinction is far from the reduction aimed for by the Convention on Biological Diversity’s 2010 targets.

In March 1999, the State of Sao Paulo Research Foundation (FAPESP) launched the Research Program on Characterization, Conservation, Restoration, and Sustainable Use of the Biodiversity of the State of Sao Paulo, also known as BIOTA/FAPESP: The Virtual Institute of Biodiversity.

SinBiota, the Environmental Information System of the BIOTA/FAPESP Program, was developed to store information generated by researchers involved with the program. In addition, the system integrates this information with a digital cartographic base, thus providing a mechanism for disseminating relevant data on Sao Paulo State’s biodiversity to the scientific community, educators, governmental agencies, and other decision and policy makers.

Between 2006 and 2008, BIOTA-FAPESP researchers made a concerted effort to synthesize data for use in public-policy-making. Scientists worked with the state secretary of the environment and nongovernmental organizations (NGOs) such as Conservation International, The Nature Conservancy, and the World Wildlife Fund. The synthesis was based on more than 151,000 records of 9,405 species, as well as landscape structural parameters and biological indices from over 92,000 fragments of native vegetation. Two synthesis maps identifying priority areas for biodiversity conservation and restoration, together with other detailed data and guidelines, have been adopted by São Paulo state as the legal framework for improving public policies on biodiversity conservation and restoration, such as prioritizing areas for forest restoration (as one means of reconnecting fragments of native vegetation) and selecting areas for new Conservation Units. There are four governmental decrees and 11 resolutions that quote the BIOTA-FAPESP guidelines.

In June 2009, more than 300 scientists and students associated with the BIOTA/FAPESP Program, or with biodiversity research in general, discussed priorities and an agenda for the next ten years of the Program. As a result, a Science Plan & Strategies document was drafted, revising the goals of the original proposal. Furthermore, 10 critical points were selected as top priorities for the next ten years, one of which is the evolution of the SinBiota information system.

In December 2009, FAPESP approved the new Science Plan & Strategies document, renewing its support to the Program up to 2020.

In a project funded by Microsoft Research and FAPESP, we are developing the new Biodiversity Information System, SinBiota 2.0, which will incorporate new tools and interfaces, aiming to fulfill the expectations of the research community and decision makers over the next 10 years. It might also be used as a template for other regions and for the Brazilian SISBIOTA planned by the National Research Council/CNPq.

Information Technology Applied to Bioenergy Genomics—Ricardo Vencio, Universidade de São Paulo

There is no doubt that one of the greatest challenges to mankind in this century is energy production. For geopolitical, economic and, most pressing, environmental reasons, no ordinary form of energy production is the solution, making renewable and environmentally friendly options mandatory. All major global economies are organizing themselves to tackle this issue, and Brazil is no exception. Since it is recognized that biofuels may be part of the solution to such a pressing problem, São Paulo State, the main biofuel producer in Brazil, launched an aggressive research program called BIOEN, the FAPESP Program for Research on Bioenergy. The program has several scientific and technological goals related to environmental impacts, social impacts, next-generation fuels, production technology, and so on.

Network of Environmental Sensors in Tropical Rainforests—Celso Von Randow, Instituto Nacional de Pesquisas Espaciais

The interaction between the Earth’s atmosphere and the terrestrial biosphere plays a fundamental role in the climate system and in biogeochemical and hydrological cycles through the exchange of energy and mass (for example, water and carbon) between the vegetation and the atmospheric boundary layer. Quantifying this exchange over terrestrial biomes is the main focus of many environmental studies.

Over natural surfaces like the tropical forests, factors like spatial variations in topography or in the vegetation cover can significantly affect the air flow and pose big challenges for the monitoring of the regional carbon budget of terrestrial biomes. It is hardly possible to understand the air flow and reduce the uncertainties of flux measurements in complex terrains like tropical forests without an approach that recognizes the complexity of the spatial variability of the environmental variables.

With this motivation, a partnership involving Microsoft Research, Johns Hopkins University, the University of São Paulo, and Instituto Nacional de Pesquisas Espaciais (INPE, the Brazilian national institute for space research) has been developing research activities to test prototypes of environmental sensors (geosensors) in the Atlantic coastal forest and the Amazonian rain forest in Brazil, forming sensor networks with high spatial and temporal resolution, and to develop software tools for data quality control and integration. The main premise is that the geosensors should have relatively low cost, which enables the formation of monitoring networks with a large number of spatially distributed sensors.

Envisioning a possible wide deployment of geosensors in Amazonia in the future, the team is currently working on three main components: 1) assembly and calibration of prototype geosensors for air temperature and humidity, with reproducible and reliable ceramic sensor elements that will operate adequately under the environmental conditions observed in the tropics; 2) development of software tools for management, quality control, visualization, and integration of data collected in geosensor networks; and 3) planning of an experimental campaign, with the installation of the first tens to hundreds of sensors in an Amazonian forest site, aiming at a pilot test of the system to study the spatial variability of temperature and humidity within and above the rainforest canopy.

Lassen
1:15–4:15 Design Expo | slides Kodiak
2:45–4:15 Breakout Sessions
Natural User Interaction
A Whole NUI World: A New Fantastic Point of View
Session Chair: Desney Tan, Microsoft Research
Scott Hudson, Carnegie Mellon University | slides

Rich new sensors are noisier than the more mature and highly focused input devices we are used to working with. Further, the recognition needed to deal with rich new classes of user actions introduces additional uncertainty. (Really good recognizers are wrong 1 in 20 times!) Technology in these areas may progress. But even if we were to somehow make these technologies nearly perfect tomorrow, human behavior is full of ambiguity. And, while we sometimes think of that ambiguity as a kind of flaw, in fact it often serves important, sometimes vital purposes in human-to-human interaction. So if you want to think seriously about “natural” interaction then you inevitably must deal with ambiguity and uncertainty. In this presentation I talk briefly about the challenges and directions for new research that this perspective implies.

Johnny Lee, Microsoft | slides

Thanks to Moore’s Law, the form factors of computing devices today are dominated by the interface hardware. The computing industry has also identified the interface as a major product-differentiating feature. However, the notion that there will be a broad-reaching new interface technology that will alter the way we interact with all computing is a counterproductive myth. As specialized and diverse devices become increasingly economical to produce, the interfaces will also become increasingly specialized and diverse. What will be intuitive and “natural” is the result of a good pairing between applications and interface capabilities and, ultimately, good design.

Michael Medlock, Microsoft | slides

The folks in this room have made or will make great things. But many of these great things will never see the light of day. Why? There are lots of reasons. I’ll give you some of the reasons that I get to see up close and personal while working on many different kinds of products in a big company like Microsoft. I focus on user experience reasons…but I touch on business and technology reasons too.

Dan Morris, Microsoft Research | slides

As computing progresses toward being smaller and more readily available in more scenarios, we pay an increasingly high price for the physical devices on which we’ve become dependent for input: buttons, touch screens, etc. We propose that the use of on-body sensors for computer input will allow us to make a critical leap toward always-available computing, and in this talk I discuss some of our work in this space. Perhaps even more interesting than input modalities, though, are the implications that “always-available computing” has for the applications we can build. Consequently, I look forward to discussing—with the audience and the panel—the new application spaces we can create as we approach the long-awaited natural user interface.

Daniel Wigdor, Microsoft Research | slides

The notion of a “Natural” user interface is a well-defined design goal, which can be targeted, built toward, tested, and used to evaluate designs and enable iteration. In this presentation, I describe some of the misunderstandings of this goal, with a particular emphasis on the notion that it is a means rather than a goal for design. I also describe some of the work done on the Surface team to better define it and utilize it as a tool to achieve good design.

Cascade
Data-Driven Software Engineering
A New Approach to Concurrency and Parallelism
Session Chair: Judith Bishop, Microsoft Research

Tom Ball, Microsoft Research | slides
Ade Miller, Microsoft | slides

With Visual Studio 2010, Microsoft released new libraries and languages for high-level programming of multi-core systems. We looked at these extensions from the point of view of parallel patterns that can improve an application’s performance on multicore computers, as well as correctness concerns. The speakers have been incorporating their work into a book and a set of courseware, both of which will be available in the fall. See A Guide to Parallel Programming and Practical Parallel and Concurrent Programming. Sebastian Burckhardt and Madan Musuvathi assisted with this session, and there was an associated booth in the DemoFest.
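The session’s libraries are .NET-specific and are not reproduced here; as a language-neutral sketch of the simplest pattern the session covers, a parallel map over independent work items looks like this in Python.

```python
# Parallel-map pattern: apply an expensive, independent computation to each
# item, letting a pool of worker processes use all available cores.
from concurrent.futures import ProcessPoolExecutor

def expensive(n: int) -> int:
    return sum(i * i for i in range(n))   # stand-in for real per-item work

if __name__ == "__main__":
    inputs = [200_000] * 8
    with ProcessPoolExecutor() as pool:    # defaults to one worker per core
        results = list(pool.map(expensive, inputs))
    print(results[:2])
```

Correctness concerns enter exactly where this pattern breaks down: once items share mutable state, the simple map is no longer safe, and the richer patterns covered in the session apply.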

Rainier
Future Web: Intelligence, Ubiquity, and Trust
Latest Advances in Bing Maps User Experience and Ecosystem—Eyal Ofek, Microsoft; Greg Schechter, Microsoft

Online mapping represents a great opportunity to organize and present an incredible variety of information having spatial characteristics. Come see and hear about where we are with Bing Maps, some of the experiences available, and our approach to encouraging third-party application development, and learn more about various technologies in use in Bing Maps. And, of course, lots of demos along the way.

St. Helens
The Challenge of Large Data
Azure for Science Research: from Desktop to the Cloud | slides
Roger Barga, Microsoft Research; Catharine van Ingen, Microsoft Research

We live in an era in which scientific discovery is increasingly driven by data exploration and often occurs within a data explosion. Scientists today are envisioning diverse data analyses and computations that scale from the desktop to supercomputers, yet often have difficulty designing and constructing software architectures to accommodate the heterogeneous and often inconsistent data at scale. Moreover, scientific data and computational resource needs can vary widely over time. The needs can grow as the science collaboration broadens or as additional data are accumulated; the needs can have large transients in response to seasonal field campaigns or new instrumentation breakthroughs. Cloud computing offers a scalable, economic, on-demand model well matched to these evolving science needs. The integration of familiar science desktop tools such as Excel or MATLAB with cloud computing reduces the scientists’ conceptual barrier to scaling those applications in the cloud. This talk presents our experiences over the last year deploying scientific applications on Azure. We highlight AzureBlast, a large-scale genomics database search application, and MODISAzure, a science pipeline for image processing.

Baker
Special Topics
The Future of Reading, Writing + Scholarship
Tara McPherson, University of Southern California | slides

In her presentation “Animating the Archive: New Modes of Scholarly Publishing,” McPherson examined how scholarly publishing is changing through the development of online multi-media journals that transcend the limitations of print publishing and offer authors more control and wider dissemination of their scholarship. She also discussed a new publishing initiative that links archives, scholars, and presses in a more seamless workflow utilizing a new lightweight platform called Scalar.

Amit Ray, Rochester Institute of Technology | slides

Wikipedia is a collaborative endeavor. No single author can solely determine content. Individuals require the consent of their peers in order to generate, edit and moderate a Wikipedia entry. These dimensions of the text are recorded in both the talk and history pages, where all changes to an entry, as well as discussion about that entry, are archived. As a result, Wikipedia provides a dynamic and continuous written record of human interactions that is historically unprecedented. Until recently, this activity has largely been limited to individual languages.

Over the last two years, cross-linguistic activity has burgeoned on Wikipedia. In this talk, I provide a brief overview of how distributed authorship on Wikipedia operates in order to address this emerging translational activity. Currently more than 270 different languages are represented, 94 of which have 10,000 articles or more. Generally speaking, many of these Wikipedia communities are organized, transparent and self-reflexive. As a result, translational activity has increased steadily, enabled by a methodically ordered clearinghouse for human-mediated translation. I analyze these structural features in order to reflect on dimensions of power as they relate to languages and translation. Do cultural flows on Wikipedia resemble those found in other dimensions of transnational culture? In what ways do they differ? To what extent do open access models such as Wikipedia facilitate and/or resist neo-liberal forms of globalization and what are the implications for not only what we read, but how?

Lassen
4:15–4:30
Break
4:30–5:30
Closing Plenary Session
RARE: Rethinking Architectural Research and Education | slides
Chuck Thacker, Technical Fellow, Microsoft Research, Silicon Valley

By the late ’80s, the cost of chip fabrication had increased to the point that it was no longer feasible for university researchers to do architectural experimentation on real systems. Groups could no longer do the sort of experiments that led to the establishment of companies such as Sun and MIPS. Simulation replaced implementation as the experimental vehicle of choice, and papers in the field became much more incremental as researchers focused on improvements to existing techniques rather than the exploration of new ideas at scale.

The current limits on processor performance improvement provide a strong motivation to rethink the systems that we build and study. Fortunately, the development of better design tools and methodologies, coupled with the rapid progress of field-programmable hardware, may provide a way to change the way that architectural research and education are done.

In our laboratory, we have developed Beehive, a full-system implementation of a many-core processor, as well as its memory, peripherals and a supporting tool chain for software development. Beehive is simple enough that it can be rapidly understood and modified by individuals with little hardware experience. It enables full-system experimentation at the hardware-software boundary, using inexpensive development boards and tools provided by Xilinx.

I discuss our early experiences with Beehive, including experience with its use as the basis for a short course at MIT in January.

Kodiak
5:45–6:15
Board buses and travel to Kirkland
6:30–9:00
Dinner cruise on Lake Washington

Tuesday, July 13

Time Event/Topic Location
8:00–9:00
Breakfast Hood
9:00–10:15
Opening Plenary Session
Fundamental Transformations in Research | slides
Richard DeMillo, Distinguished Professor of Computing and Management, Georgia Institute of Technology; Wolfgang Gentzsch, Professor, the DEISA Project and OGF; Tony Hey, Corporate Vice President, Microsoft Research; Ed Lazowska, Bill & Melinda Gates Chair in Computer Science & Engineering, University of Washington; Rick Rashid, Senior Vice President, Microsoft Research

Today, research is being transformed more quickly than some may realize. While the focus of research is changing, it has also become obvious to those in the research community that the way of doing research itself is undergoing rapid and profound changes. The panel discussed this phenomenon by looking at changes in the types of research areas, the new varieties of methodology that are emerging, and the ways that the research community is beginning to transform itself to accommodate these new developments.

Kodiak
10:15–12:45
DemoFest
McKinley
11:45–12:45 Lunch Hood
11:45–12:45 Lunchtime Sessions
Project Hawaii: Resources for Teaching Mobile + Cloud Computing

Victor Bahl, Microsoft Research
Stewart Tansley, Microsoft Research | slides
Brian Zill, Microsoft Research | slides

Project Hawaii is a new effort investigating the ability of the cloud to enhance the end-user experience on mobile devices. The Networking Research Group at Microsoft Research is making available a set of cloud-enabled mobile services to better understand the systems and networking infrastructure needed to enable the next generation of applications. We are engaging universities to enable students to build projects using our platform.

St. Helens
Microsoft Biology Foundation | slides
Simon Mercer, Microsoft Research

The Microsoft Biology Foundation (MBF) is a library of common bioinformatics and genomics functionality built on top of the .NET Framework. Functions include parsers and writers for common bioinformatics file formats, connectors to common web services, and algorithms for assembling and aligning DNA sequences. The project is released under the OSI-compliant MS-PL open source license and is available for download. The MBF project is guided by the user community through a technical advisory board drawn from academia and commerce, with responsibility to maintain code quality and steer future development to respond to the needs of the scientific community. MBF is a community-led and community-curated project, and encourages bug fixes, feature requests, and code contributions from all members of the commercial and academic life science community.
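MBF’s APIs are .NET and are not shown here; purely as an illustration of what one of its format parsers does, below is a minimal FASTA reader sketched in Python (the file name is hypothetical).

```python
# Minimal FASTA parser: headers start with ">", sequence lines follow.
def read_fasta(path):
    records, header, seq = {}, None, []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith(">"):
                if header is not None:
                    records[header] = "".join(seq)   # close previous record
                header, seq = line[1:], []
            elif line:
                seq.append(line)
    if header is not None:
        records[header] = "".join(seq)               # close final record
    return records

# print(read_fasta("example.fasta"))  # {"seq1": "ACGT...", ...}
```

A library like MBF wraps this kind of parsing, for many formats, behind a uniform sequence API, which is what makes the downstream assembly and alignment algorithms composable.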

Rainier
12:45–2:15
Breakout Sessions
Natural User Interaction
The Future of Direct Input and Interaction—Selected Perspectives
Session Co-Chairs: Ken Hinckley, Microsoft Research; Andy Wilson, Microsoft Research | slides

Patrick Baudisch, Hasso Plattner Institute for Software Systems Engineering, Potsdam
Saul Greenberg, University of Calgary | slides
Andy Van Dam, Brown University | slides

Direct interaction with displays is rapidly becoming one of the primary means by which people experience computing. Everyone is now familiar with multi-touch as the defining example of direct interaction, but despite rapture with the iPhone (and now iPad), multi-touch is not the whole story. Every modality, including touch, is best for something and worst for something else. Understanding direct interaction at anything beyond a superficial level requires that we answer the following question: In the holistic user experience, what is the logic of the division of labor between touch, pen, motion and postural sensing, proximal and spatial interactions beyond the display, and a diverse ecology of devices and form factors? This session delved into a number of efforts that hint at possible answers to this question, and help us understand why direct interaction is about much more than just smudging a screen with one’s stubby little fingers.

Cascade
Data-Driven Software Engineering
Large Scale Debugging
Session Chair: Judith Bishop, Microsoft Research

Galen Hunt, Microsoft Research | slides
Ben Liblit, University of Wisconsin | slides
Ed Nightingale, Microsoft Research | slides

The availability of the Internet has made possible not just the deployment of global-scale services for users, but also the deployment of global-scale data collection systems to aid debugging and understanding of software and hardware. In this session, we present three views on global-scale debugging of software and hardware in end-user systems. First, we present a project to turn large user communities into a massive distributed debugging army through sampled instrumentation and statistical modeling to help programmers find and fix problems that appear after deployment. Second, we describe Windows Error Reporting, a distributed system that automates the processing of error reports from an installed base of one billion Windows computers. Finally, we present the first large-scale analysis of hardware failure rates from a million consumer PCs.

Rainier
Operating in the Cloud
Cloud Data Center Architectures | slides
Session Chair: Dennis Gannon, Microsoft Research
Dileep Bhandarkar, Microsoft; Yousef Khalidi, Microsoft
Watt Matters in Mega Datacenters?—Dileep Bhandarkar, Microsoft

This presentation provided an overview of emerging datacenter and server challenges and shared some best practices that present opportunities for our industry to improve total cost of ownership and energy efficiency.

Energy efficiency has been a major focus of our industry, and members of the Green Grid have promoted PUE as a useful metric for data center power usage effectiveness. While we believe that PUE is a good metric, and we track it in all our datacenters, there is more to achieving energy efficiency. We see rightsizing our servers and optimizing for work done per kW as even more important.

We look at a Total Cost of Ownership that includes server purchase price, datacenter capital costs, application-specific power consumption, PUE, and management costs. Using this holistic approach that accounts for both server and datacenter designs allows us to achieve the best overall efficiency.

Performance per Watt per Dollar is a key metric that we focus our attention on.
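For readers unfamiliar with the metrics named above, the sketch below spells them out as formulas; the numbers are illustrative assumptions, not Microsoft figures.

```python
# PUE (power usage effectiveness), as defined by the Green Grid:
# total power entering the facility divided by power used by IT equipment.
total_facility_kw = 1500.0    # illustrative assumption
it_equipment_kw = 1000.0      # illustrative assumption

pue = total_facility_kw / it_equipment_kw
print(f"PUE = {pue:.2f}")     # 1.0 would be ideal; lower is better

# "Performance per watt per dollar" for comparing server configurations:
work_per_sec = 50_000.0       # measured throughput, illustrative
server_power_w = 400.0
server_cost_usd = 2_000.0
print(work_per_sec / server_power_w / server_cost_usd)
```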

Cloud Computing—Challenges and Opportunities—Yousef Khalidi, Microsoft

Cloud computing has the well-publicized advantages of reduced costs and increased agility. To reap these benefits, resources are typically allocated from a global large-scale computing infrastructure, highly shared among many applications and customers. This talk starts with an overview of Windows Azure, Microsoft’s cloud computing platform, as a concrete example of a large scale highly shared cloud. The talk then presents some of the challenges of cloud computing, including security, evolving applications for cloud computing, and the federation of multiple clouds. The talk explores the tension between, on one hand, the desire for reduced cost and increased agility, and on the other hand, securing applications and their data. The talk also discusses the need to evolve the application model to truly attain the advantages of cloud computing.

St. Helens
The Challenge of Large Data
Opportunities for Libraries

Paul Courant, University of Michigan | slides
Michael Keller, Stanford University | slides
James Mullins, Purdue University

Paul talked about a number of transformational effects of digital technologies on academic libraries, including the Google digitization project, the misalignment of intellectual property law with scholarship in the digital age, and the development of large-scale cooperative activities such as the HathiTrust. He also looked at policy: how libraries and faculties (and others, including legislators) can shape those transformations, for better or for worse.

New Modes of Discovery and Analysis; Opportunities for Libraries in Support of Teaching, Learning, and Research—Michael Keller, Stanford University

The amazing expansion of the World Wide Web and its capabilities, along with predictable advances in information technology and network systems, now presents new possibilities for improved discovery of information and knowledge across the vast panoply of physical and digital carriers. Examples of these opportunities, involving not just Semantic Web and Linked Data technologies but also alternate means of searching and navigation, were presented. The widening pool of digital objects, thanks to innovation in scholarly publishing and, more recently, in digitization and trade publishing (the Google Book Search program, the more common issuance of e-books in parallel with physical books, and the startling rise of web-based vanity publishing), provides new possibilities for research in methods as well as in hypotheses to be investigated; examples of these opportunities were presented as well.

Opportunities for Libraries: Bringing Data to the Fore in Scholarly Communication and the Potential Implications for Promotion and Tenure | slides
James Mullins, Purdue University

In most scientific and engineering fields, datasets are the lifeblood of research. Modeling demands data to run complex algorithms to test a theory or advance a methodology. Yet, datasets have not been recognized as an essential element of the scholarly communication paradigm. The published research article or the presentation of a paper at a conference is the primary means used to assess the “impact” of a scientist or engineer in his/her field. However, the underlying data, whether gathered by the individual researcher or obtained from research colleagues, make the research possible and replicable.

Can data become recognized as a major component of the research process? How can data be discoverable, retrievable, accredited, and, lastly, citable to facilitate and assess impact? Can the producer of the data be seen as a significant and major contributor to the research in a specific field? Could the dataset creator be credited and recognized at promotion and tenure for furthering research and having significant impact within the field? This presentation explored these issues along with the role of libraries in data management and the assignment of digital object identifiers (DOIs) to datasets by the recently established international consortium, DataCite.

Baker
Special Topics
NodeXL – Social Network Analysis in Excel

Natasa Milic-Frayling, Microsoft Research | slides
Ben Shneiderman, University of Maryland | slides
Marc Smith, Connected Action

Businesses, entrepreneurs, individuals, and government agencies alike are looking to social network analysis (SNA) tools for insight into trends, connections, and fluctuations in social media. Microsoft’s NodeXL is a free, open-source SNA plug-in for use with Excel. It provides instant graphical representation of relationships of complex networked data. But it goes further than other SNA tools—NodeXL was developed by a multidisciplinary team of experts that bring together information studies, computer science, sociology, human-computer interaction, and over 20 years of visual analytic theory and information visualization into a simple tool anyone can use. This makes NodeXL of interest not only to end-users but also to researchers and students studying visual and network analytics and their application in the real world. NodeXL has the unique feature that it imports networks from Outlook email, Twitter, flickr, YouTube, WWW, and other sources, plus it offers a rich set of metrics, layouts, and clustering algorithms. This talk describes NodeXL and our efforts to start the Social Media Research Foundation.
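NodeXL itself runs inside Excel, so its interface is not reproduced here; as a rough illustration of the workflow it automates (an edge list in, metrics and a layout out), here is an equivalent sketch using the Python networkx library.

```python
# The core SNA loop: build a graph from an edge list, compute per-vertex
# metrics, and produce a layout for drawing. The edge data is made up.
import networkx as nx

edges = [("ann", "bob"), ("bob", "cara"), ("cara", "ann"), ("cara", "dan")]
g = nx.Graph(edges)

print(nx.degree_centrality(g))      # who is most connected
print(nx.clustering(g))             # local clustering coefficients
pos = nx.spring_layout(g, seed=42)  # force-directed layout coordinates
```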

Lassen
2:15–2:30 Break
2:30–4:00
Natural User Interaction
Natural Language Interaction Today—Selected Perspectives
Alex Acero, Microsoft Research | slides

Speech recognition has been proposed for many years as a natural way to interact with computer systems, but its adoption has taken longer than originally expected. At first, dictation was thought to be the killer speech app, and researchers were waiting for Moore’s law to bring the required computational power to low-cost PCs. Then it turned out that, due to social and cognitive reasons, many users do not want to dictate to their computers even if it’s inexpensive and technically feasible. Speech interfaces that users like require not only a sufficiently accurate speech recognition engine, but also many other less well-known factors. These include a scenario where speech is the preferred modality, proper end-to-end design, error correction, robust dialog, ease of authoring for non-speech experts, data-driven grammars, a positive feedback loop with users, and robustness to noise. In this talk I describe some of the work we have done in MSR on building natural user interfaces using speech technology, and I illustrate it with a few scenarios in gaming (Xbox Kinect), speech in automotive environments (Hyundai/Kia UVO, Ford SYNC), and more.

Alexander I. Rudnicky, Carnegie Mellon University | slides

Robots are on their way to becoming a ubiquitous part of human life as companions and workmates. Integration with human activities requires effective communication between humans and robots. Humans need to be able to explain their intentions, and robots need to be able to share information about themselves and ask humans for guidance. Language-based interaction (in particular, spoken language) offers significant advantages for efficient communication, particularly in groups. We have been focusing on three aspects of the problem: (a) managing multi-party dialogs (defining the mechanisms that regulate an agent’s participation in a conversation); (b) effective coordination and sharing of information between humans and robots (such as mechanisms for grounding descriptions of the world in order to support a common frame of reference); and (c) instruction-based learning (to support dynamic definition of new behavior patterns through spoken as well as multi-modal descriptions provided by the human). This talk describes the TeamTalk system, a framework for exploring these issues.

Cascade
Data-Driven Software Engineering
Dynamic Languages and the Browsers of the Future
Session chair: Wolfram Schulte, Microsoft Research

Judith Bishop, Microsoft Research | slides
Steve Lucco, Microsoft
Ben Zorn, Microsoft Research | slides

The Common Language Runtime of the .NET Framework was recently extended to enable scripting languages (JavaScript, Lisp, Python, Ruby) to communicate with other infrastructures and services, including Silverlight and COM. This session introduces the Dynamic Language Runtime (DLR) and goes on to consider how dynamic languages have enabled today’s Web 2.0 world. First, we explore how real web applications use JavaScript and consider whether benchmarks, such as SunSpider, are representative of actual applications; this comparison is useful in understanding the best ways to design JavaScript engines to further empower increasingly important web applications. Surprisingly, until recently, the behavior characteristics of JavaScript programs had not been examined. Then we look at Microsoft’s upcoming offering, IE9, which will feature a new JavaScript engine called Chakra. At the latest platform preview release of IE9, Chakra was substantially faster on the SunSpider benchmark than Firefox and was nipping at the heels of Safari 5. We explore some of the technical decisions that make Chakra especially effective at loading web pages and take a look at some details of how the Chakra engine speeds up web applications such as Office Web Word. Finally, we talk about the architecture of the Chakra memory recycler, which supports fast interactive response and simple interoperation with native code.

Rainier
Operating in the Cloud
Programming in the Cloud | slides
Session Chair: Dennis Gannon, Microsoft Research
Jim Larus, Microsoft Research

Cloud computing provides a platform for new software applications that run across a large collection of physically separate computers and free computation from the computer in front of a user. Distributed computing is not new, but the commodification of its hardware platform—along with ubiquitous networking; powerful mobile devices; and inexpensive, embeddable, networkable computers—heralds a revolution comparable to the PC.

Software development for the cloud offers many new (and some old) challenges that are central to research in programming models, languages, and tools. The language and tools community should embrace this new world as a fertile source of new challenges and opportunities to advance the state of the art.

Jimmy Lin, University of Maryland-College Park

Over the past couple of decades, the field of natural language processing (and more broadly, human language technology) has seen the emergence and later dominance of empirical techniques and data-driven research. An impediment to research progress today is the need for scalable algorithms to cope with the vast quantities of available data.

The only practical solution to large-data challenges today is to distribute computations across multiple machines. Cluster computing, however, is fraught with challenges ranging from scheduling to synchronization. Over the past few years, MapReduce has emerged as an attractive alternative to traditional programming models: it provides a simple functional abstraction that hides many system-level issues, allowing the researcher to focus on solving the problem.
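
The functional abstraction MapReduce provides is small enough to sketch. The following toy Python version (a single-machine illustration of ours, not Hadoop or any production runtime) shows the programmer’s entire obligation: a mapper and a reducer, with nothing about scheduling or synchronization.

```python
# Minimal sketch of the MapReduce abstraction: the programmer supplies only
# map() and reduce(); the framework handles partitioning and synchronization.
# This toy runs sequentially on one machine for illustration.
from itertools import groupby
from operator import itemgetter

def map_fn(document):
    # Emit (word, 1) for every word -- the classic word-count mapper.
    for word in document.split():
        yield word.lower(), 1

def reduce_fn(word, counts):
    # Sum the counts for one key.
    return word, sum(counts)

def mapreduce(documents, map_fn, reduce_fn):
    # "Shuffle": group intermediate pairs by key, as the runtime would
    # across machines.
    pairs = sorted(kv for doc in documents for kv in map_fn(doc))
    return [reduce_fn(key, (v for _, v in group))
            for key, group in groupby(pairs, key=itemgetter(0))]

docs = ["the quick brown fox", "the lazy dog", "the fox"]
print(mapreduce(docs, map_fn, reduce_fn))
# [('brown', 1), ('dog', 1), ('fox', 2), ('lazy', 1), ('quick', 1), ('the', 3)]
```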

In this talk, I overview “cloud computing” projects at the University of Maryland that grapple with the issue of scalability in text processing applications.

St. Helens
The Challenge of Large Data
Visualizing a Universe of Data | slides
Walter Alvarez and Roland Saekow, University of California, Berkeley; Curtis Wong, Microsoft Research

Our knowledge of human history comprises a truly vast data set, much of it in the form of chronological narratives written by humanist scholars and difficult to deal with in quantitative ways. The last 20 years have seen the emergence of a new discipline called Big History, invented by the Australian historian David Christian, which aims to unify all knowledge of the past into a single field of study. Big History invites humanistic scholars and historical scientists from fields like geology, paleontology, evolutionary biology, astronomy, and cosmology to work together in developing the broadest possible view of the past. Incorporating everything we know about the past into Big History greatly increases the amount of data to be dealt with.

Big History is proving to be an excellent framework for designing undergraduate synthesis courses that attract outstanding students. A serious problem in teaching such courses is conveying the vast stretches of time from the Big Bang, 13.7 billion years ago, to the present, and clarifying the wildly different time scales of cosmic history, Earth and life history, human prehistory, and human history. We present “ChronoZoom,” a computer-graphical approach to visualizing and understanding time scales and to presenting vast quantities of historical information in a useful way. ChronoZoom is a collaborative effort of the Department of Earth and Planetary Science at UC Berkeley, Microsoft Research, and originally Microsoft Live Labs.

Our first conception of ChronoZoom was that it should dramatically convey the scales of history, and the first version does in fact do that. To display the scales of history from a single day to the age of the Universe requires the ability to zoom smoothly by a factor of ~10^13, and doing this with raster graphics was a remarkable achievement of the team at Live Labs. The immense zoom range also allows us to embed virtually limitless amounts of text and graphical information.
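
The ~10^13 figure follows directly from the arithmetic; here is a quick back-of-envelope check (ours, not the ChronoZoom team’s):

```python
# Back-of-envelope check of the zoom range needed to go from a one-day view
# to the age of the Universe.
AGE_OF_UNIVERSE_YEARS = 13.7e9
DAYS_PER_YEAR = 365.25

zoom_factor = AGE_OF_UNIVERSE_YEARS * DAYS_PER_YEAR  # days in 13.7 billion years
print(f"{zoom_factor:.2e}")  # ~5.00e+12 -- on the order of the ~10^13 quoted above
```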

We are now in the phase of designing the next iteration of ChronoZoom in collaboration with Microsoft Research. One goal is for ChronoZoom to be useful to students beginning or deepening their study of history. We therefore show a very preliminary version of a ChronoZoom presentation of the human history of Italy designed for students, featuring (1) a hierarchical periodization of Italian history, (2) embedded graphics, and (3) an example of an embedded technical article. This kind of presentation should make it possible for students to browse history rather than dig it out bit by bit.

At a different academic level, ChronoZoom should allow scholars and scientists to bring together graphically a wide range of data sets from many different disciplines, to search for connections and causal relationships. As an example of this kind of approach, from geology and paleontology, we are inspired by TimeScale Creator.

By letting us move effortlessly through this enormous wilderness of time and get used to the differences in scale, ChronoZoom should help to break down the time-scale barriers to communication between scholars and scientists, and make the past at all scales comprehensible as never before.

Baker
Special Topics
What We Know and What You Can Do: Learning How to Turn Gender Research into Diversity Action | slides
Joanne Cohoon, University of Virginia; Carla Ellis, Duke University; Lucy Sanders, National Center for Women and Information Technology (NCWIT); Telle Whitney, Anita Borg Institute for Women and Technology

This panel brings together the three major organizations that focus on research on gender diversity across the pipeline. We present the current research and discuss results that suggest actions the academic and industrial communities can implement at their institutions. Attendees should leave the session with activities and ideas that apply to their own environments.

Lassen
4:00–5:30
Closing Plenary Sessions
Kodiak
4:00–5:15
The Making of Avatar: Magnificent Graphics, Multitudinous Files, Massive Storage
Richard Baneham, Animation Supervisor, Avatar; Yuri Bartoli, Virtual Art Director, Avatar; Tim Bicio, Chief Technical Officer, Avatar; Nadine Kano, Senior Director, Solution Management, Microsoft; Matt Madden, Motion Capture Supervisor, Avatar

James Cameron’s Avatar reigns as the most successful motion picture ever released, but it is also the most computing-intensive movie made to date. The number of dollars earned at the box office worldwide (more than $2.5 billion) is dwarfed by the number of bytes of data generated during production: more than 1 petabyte. Sixty percent of Avatar’s 2 hours and 40 minutes was computer generated. A single frame, at 24 frames per second, could be up to 3 GB in size—and that’s just for one eye. This session featured special guests from the Avatar production team, who talked about the groundbreaking techniques used to create the immersive world of Pandora and its photo-real creatures and characters, and how they managed the volumes of data necessary to bring them to life.
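
Taking the quoted figures at face value, a back-of-envelope calculation (ours, not the production team’s) shows how quickly the final stereo frames alone approach a petabyte:

```python
# Back-of-envelope reading of the figures quoted above.
RUNTIME_MIN = 160        # 2 hours 40 minutes
FPS = 24                 # frames per second
CG_FRACTION = 0.60       # "sixty percent ... computer generated"
GB_PER_EYE = 3           # "up to 3 GB ... just for one eye"
EYES = 2                 # stereo 3-D

frames = RUNTIME_MIN * 60 * FPS               # 230,400 frames total
cg_frames = frames * CG_FRACTION              # 138,240 CG frames
upper_bound_gb = cg_frames * GB_PER_EYE * EYES
print(f"{upper_bound_gb / 1e6:.2f} PB")       # ~0.83 PB for final CG frames alone,
                                              # consistent with >1 PB once working
                                              # data is included
```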

5:15–5:30
Conference Conclusion and Call to Action

Tony Hey, Corporate Vice President, Microsoft Research


Speakers

Speaker Biographies

Alex Acero, Microsoft Research

Alex Acero is a research area manager at Microsoft Research, directing an organization with 60 engineers working on audio, multimedia, communication, speech, and natural language. He is also an affiliate professor of Electrical Engineering at the University of Washington. He received an M.S. degree from the Polytechnic University of Madrid, Madrid, Spain, in 1985; an M.S. degree from Rice University, Houston, TX, in 1987; and a Ph.D. degree from Carnegie Mellon University, Pittsburgh, PA, in 1990, all in Electrical Engineering. Dr. Acero worked in Apple Computer’s Advanced Technology Group during 1990–1991. In 1992, he joined Telefonica I+D, Madrid, Spain, as manager of the speech technology group. He has been with Microsoft Research since 1994.

Dr. Acero is a Fellow of the IEEE. He has served the IEEE Signal Processing Society as Vice President, Technical Directions (2007–2009); Director, Industrial Relations (2009–2011); 2006 Distinguished Lecturer; member of the Board of Governors (2004–2005); associate editor for IEEE Signal Processing Letters (2003–2005) and IEEE Transactions on Audio, Speech, and Language Processing (2005–2007); and member of the editorial boards of the IEEE Journal of Selected Topics in Signal Processing (2006–2008) and IEEE Signal Processing Magazine (2008–2010). He also served as a member (1996–2000) and Chair (2000–2002) of the Speech Technical Committee of the IEEE Signal Processing Society. He was Publications Chair of ICASSP 98, Sponsorship Chair of the 1999 IEEE Workshop on Automatic Speech Recognition and Understanding, and General Co-Chair of the 2001 IEEE Workshop on Automatic Speech Recognition and Understanding. Dr. Acero has also served as a member of the editorial board of Computer Speech and Language and a member of the Carnegie Mellon University Dean’s Leadership Council for the College of Engineering.

Dr. Acero is the author of the books Acoustical and Environmental Robustness in Automatic Speech Recognition (Kluwer, 1993) and Spoken Language Processing (Prentice Hall, 2001), and has written invited chapters in four edited books and 200 technical papers. He holds 78 U.S. patents. Since 2004, Dr. Acero, along with co-authors Drs. Huang and Hon, has used proceeds from their textbook Spoken Language Processing to fund the IEEE Spoken Language Processing Student Travel Grant for the best ICASSP student papers in the speech area.

Deb Agarwal, University of California–Berkeley

Deb Agarwal is the Advanced Computing for Science Department Head and the Data Intensive Systems Group Lead at Lawrence Berkeley National Laboratory. As a member of the Berkeley Water Center, a collaboration between the University of California, Berkeley, and Lawrence Berkeley National Laboratory, Dr. Agarwal leads a team developing data server infrastructure to significantly enhance data browsing and analysis capabilities and to enable eco-science synthesis at the watershed scale, to understand hydrologic and conservation questions, and at the global scale, to understand carbon flux. Dr. Agarwal’s research focuses on scientific tools that enable sharing of scientific experiments, advanced networking infrastructure to support sharing of scientific data, and data analysis support infrastructure for eco-science. Dr. Agarwal holds a PhD in Electrical and Computer Engineering from the University of California, Santa Barbara, and a BS in Mechanical Engineering from Purdue University.

Blaise Agüera y Arcas, Microsoft

Blaise Agüera y Arcas is the architect of Bing Mobile and Mapping at Microsoft. He works in a variety of roles, from designer and coder to strategist, and he leads an Applied Research team of researchers and engineers with strengths in social media, computer vision, and graphics. He joined Microsoft when his startup company, Seadragon, was acquired by Live Labs in 2006. Shortly after the acquisition of Seadragon, Blaise directed his team in a collaboration with Microsoft Research and the University of Washington, leading to the first public previews of Photosynth several months later. His TED talk on Seadragon and Photosynth in 2007 is still rated “most jaw-dropping” on ted.com. He returned to the TED stage in 2010 to present Bing Maps.

Blaise has a broad background in computer science and applied math, and has worked in a variety of fields, including computational neuroscience, computational drug design, and data compression. In 2001, he received press coverage for his discovery, using computational methods, of the printing technology used by Johann Gutenberg. Blaise’s work on early printing was the subject of a BBC Open University documentary, entitled “What Did Gutenberg Invent?” He has published essays and research papers in theoretical biology, neuroscience, and history.

In 2008–2009, he was a recipient of MIT Technology Review’s TR35 award (35 top innovators under 35) and Fast Company’s MCP100 (“100 most creative people in business”); in 2010, he was named to Advertising Age’s Creativity 50 (“annual list of the most influential and inspiring creative personalities of the last year”).

Walter Alvarez, University of California–Berkeley

Walter Alvarez received his PhD in geology at Princeton. His thesis research (and honeymoon) was in a roadless desert in Colombia, living with Guajiro Indians and smugglers.

Much of his research has been in Italy, where he worked on archeological geology in Rome, on the tectonics of the geologically complex Mediterranean, and on Earth’s magnetic reversals recorded in deep-water limestones in the Apennines.

In 1977, he joined the faculty at U.C. Berkeley and began a study of the mass extinction at the end of the Cretaceous Period. Evidence from iridium measurements suggested that the extinction was due to the impact on the Earth of a giant asteroid or comet, and many years later that hypothesis was confirmed by the discovery of a huge impact crater, buried beneath the surface of the Yucatán Peninsula, dating from precisely the time of the Cretaceous-Tertiary extinction.

He is currently interested in Big History, the new field that aims to tie everything in our planet’s past into a coherent understanding of the grand sweep and character of history.

Alvarez has received honorary doctorates from the University of Siena in Italy and the University of Oviedo in the Principality of Asturias in Spain, where his family originated.

Victor Bahl, Microsoft Research

Victor Bahl is a principal researcher and founding manager of the Networking Research Group at Microsoft Research Redmond. He is responsible for shaping Microsoft’s long-term vision related to networking technologies through research and associated policy engagement with governments and institutions around the world. He directs research activities that push the state of the art in the networking of devices and systems. He and his group build proof-of-concept systems, engage with academia, publish papers in prestigious conferences and journals, release software for the research community, and work with product groups to influence Microsoft products.

His personal research interests span a variety of topics in mobile networking, wireless systems design, datacenter networking, and enterprise networking and management. He has built and deployed several seminal and highly cited networked systems, with a total of more than 9,500 citations; he has authored more than 110 peer-reviewed papers and 120 patent applications, 74 of which have issued; he has won best paper awards at SIGCOMM and CoNEXT; and he has delivered more than two dozen keynote and plenary talks.

He is the founder and past chairperson of ACM SIGMOBILE; the founder and steering-committee chair of MobiSys; and the founder and past editor-in-chief of ACM Mobile Computing and Communications Review. He has served as general chair of several IEEE and ACM conferences, including SIGCOMM and MobiCom; is serving on the steering committees of seven IEEE and ACM conferences and workshops, several of which he co-founded; and has been serving as chair of ACM’s Outstanding Contributions Award committee related to mobility for more than 15 years. He has served on the boards of more than half a dozen journals; on several NSF, NRC, and FCC panels; and on more than six dozen program committees. Dr. Bahl received Digital’s Doctoral Engineering Fellowship Award in 1995 and SIGMOBILE’s Distinguished Service Award in 2001. In 2004, Microsoft nominated him for the Innovator of the Year award. He became an ACM Fellow in 2003 and an IEEE Fellow in 2008.

When not working, he loves to read, travel, eat in fine restaurants, watch competitive sports and action movies, and spend time drinking with friends and family.

Thomas Ball, Microsoft Research

Thomas Ball is a principal researcher at Microsoft Research, where he manages the Software Reliability Research group. Tom received a PhD from the University of Wisconsin–Madison in 1993, was with Bell Labs from 1993 to 1999, and has been at Microsoft Research since 1999. He is one of the originators of the SLAM project, a software model-checking engine for C that forms the basis of the Static Driver Verifier tool. Tom’s interests range from program analysis, model checking, testing, and automated theorem proving to the problems of defining and measuring software quality.

Richard Baneham, Avatar

Richard Baneham is an Academy Award and BAFTA Award winner for Best Visual Effects for his work as the animation supervisor on Avatar. He worked closely with the actors to produce CG performances that evoked the full emotional range and idiosyncratic nuances of their work.

Richie also worked as an animation supervisor on The Lord of the Rings: The Two Towers and The Return of the King, where he was integrally involved in creating the breakthrough character of Gollum with actor Andy Serkis. His body of work also includes The Chronicles of Narnia: The Lion, the Witch and the Wardrobe and Cats & Dogs, one of the earliest examples of fully integrated CG characters.

Richie hails from a traditional animation background, which includes the movie The Iron Giant, one of his favorites; it was a transitional film on which he worked in 2D and 3D simultaneously. He was educated in Classical Animation at Dublin’s Ballyfermot College of Art & Design, Ireland, and in 1994 moved to L.A. to pursue his career in Hollywood.

Roger Barga, Microsoft Research

Roger Barga is currently a senior architect in the Cloud Computing Futures (CCF) group at Microsoft Research, where he leads a technical engagements team that works with external researchers interested in carrying out large-scale computing, scientific research, and data analytics on the Windows Azure cloud platform. Previously, Roger led the Microsoft Research Advanced Research Tools and Services (ARTS) team, which built innovative services and tools for data-intensive research. Roger joined Microsoft in 1997 as a researcher in the Database group of Microsoft Research, where he participated in both systems research and product development efforts in database, workflow, and stream-processing systems.

Mike Barnett, Microsoft Research

Mike Barnett is a research software design engineer in the RiSE group at Microsoft Research. He has been at Microsoft since 1995. He previously worked on the Spec# language and is interested in all things related to programming languages.

Yuri Bartoli, Avatar

Coming from a background in traditional art and illustration in New York, Yuri Bartoli moved to California to work as a concept artist on Star Wars: Attack of the Clones. He has worked in the visual effects, commercial, and film industries for more than 10 years at companies such as Pacific Data Images and DreamWorks Animation, and as a visual effects art director on shows like Evolution, Minority Report, and A.I., honing his skills in art and matte painting as well as computer graphics before transitioning into feature animation on films like Madagascar and Shrek 2.

In May 2005, he began work on Avatar as one of four artists initially hired to design the creatures and environments of Pandora. The journey lasted almost five years, from the very beginning of production to the end. From design he moved on to supervising the Virtual Art Department, creating the real-time set pieces for the motion capture and virtual camera systems of the Volume and working closely with director James Cameron to create a world that existed only in the computer. As production wrapped and moved into post, Yuri worked closely with vendors on the visual effects side to realize the full extent of the environments, creatures, and graphics that the film required.

Patrick Baudisch, Hasso Plattner Institute

Patrick Baudisch is a professor in Computer Science at Hasso Plattner Institute in Berlin/Potsdam and chair of the Human Computer Interaction Lab. His research focuses on the miniaturization of mobile devices and touch input. Previously, Patrick Baudisch worked as a research scientist in the Adaptive Systems and Interaction Research Group at Microsoft Research and at Xerox PARC and served as an Affiliate Professor in Computer Science at the University of Washington. He holds a PhD in Computer Science from Darmstadt University of Technology, Germany.

Dileep Bhandarkar, Microsoft

Dr. Dileep Bhandarkar joined Microsoft as a distinguished engineer responsible for server hardware architecture and standards for Global Foundation Services (GFS) in May 2007. He is currently chief architect for GFS, responsible for the technology roadmap for the compute infrastructure behind Microsoft’s online services. He was elected an IEEE Fellow in 1997 for contributions and technical leadership in the design of complex- and reduced-instruction-set architectures and in computer system performance analysis.

He has previously held senior technical and management positions at Intel Corporation, Digital Equipment Corporation, and Texas Instruments.

Dr. Bhandarkar holds 16 U.S. patents and has published more than 30 technical papers in various journals and conference proceedings. He is also the author of a book titled Alpha Implementations and Architecture. In 1998, he was recognized as a Distinguished Alumnus of the Indian Institute of Technology, Bombay, where he received his B.Tech in Electrical Engineering in 1970. He also has an M.S. and a Ph.D. in Electrical Engineering from Carnegie Mellon University and has done graduate work in Business Administration at the University of Dallas.

Tim Bicio, Avatar

Tim Bicio, Chief Technology Officer for James Cameron’s Avatar, has been supporting technology and digital content in film since the early ’90s. In 2000, Tim was contracted by the Wachowski Brothers to build and manage a digital asset management system, “Zion,” for the second and third installments of the Matrix trilogy. Zion cataloged everything from set designs to VFX finals; at the end of the project, the database contained roughly 60,000 records totaling 1.5 terabytes of binary data. In 2005, with help from Microsoft Consulting Services, Tim was contracted to design and develop “Gaia,” Cameron’s digital asset management system. Like Zion, Gaia’s initial core function was to securely archive and categorize digital assets as they were created through the course of production—for review, distribution, and use on future and related projects. In addition to the system’s core functions, Gaia was designed to manage digital operations on Cameron’s virtual production and live action stages. At the moment, Gaia contains nearly 1 million distinct assets totaling 90 terabytes of binary data. Tim is presently working with Microsoft and Cameron’s team on the next generation of Gaia.

Judith Bishop, Microsoft Research

Judith Bishop is Director of Computer Science in External Research at Microsoft Research, Redmond, where she devises strategy and implements programs to create strong links between Microsoft’s research groups and universities globally. She represents Microsoft on ACM task forces and is actively involved in the CRA and IFIP. Her research expertise is in programming languages and distributed systems, with a strong practical bias and an interest in compilers and design patterns. She has more than 90 publications, including 15 books on programming languages that are available in six languages and read worldwide. Judith has a distinguished background in academia, having taught in the UK, Germany, Canada, Italy, and the United States, before joining Microsoft from the University of Pretoria, South Africa, in 2009. Judith serves frequently on international editorial, program, and award committees, and has received numerous awards and distinctions, most recently the IFIP Outstanding Service Award in 2009 and the SA Computer Society Fellowship Award in 2008. She is a Fellow of the British Computer Society and the Royal Society of South Africa, among others.

Mark Bolas, University of Southern California

Mark Bolas is an associate professor at the School of Cinematic Arts and the Institute for Creative Technologies at USC. His work focuses on immersive experience design and embodied interface design, with an emphasis on the interplay between emotion, cognition, and perception. Mark is the chairman of Fakespace Labs, has designed a number of tactile interface devices (including early work on multi-user workbench and tabletop systems), and received the 2005 IEEE award for Seminal Technical Achievement in Virtual & Augmented Reality from the Visualization and Graphics Technical Committee.

Joanne Cohoon, University of Virginia

Dr. Cohoon works to understand and improve the gender composition of computing. She holds positions as Senior Research Scientist at the National Center for Women & IT (NCWIT) and as Assistant Professor of Science, Technology, and Society at the University of Virginia. Cohoon also serves as a member of the CRA-W Board, with responsibility for coordinating and enhancing project evaluation. Cohoon’s work at NCWIT involves the translation, application, dissemination, and evaluation of research findings about gender, education, technology, organizations, and inequality. Using perspectives and methods from sociology, she also conducts nationwide studies of recruitment and retention in computing at the undergraduate and graduate levels. Her results are reported in scholarly journals and in an award-winning book co-edited with William Aspray, Women and Information Technology: Research on Underrepresentation (MIT Press).

Paul N. Courant, University of Michigan

Paul N. Courant is University Librarian and Dean of Libraries, Harold T. Shapiro Collegiate Professor of Public Policy, Arthur F. Thurnau Professor, professor of economics and professor of information at the University of Michigan. From 2002 to 2005 he served as Provost and Executive Vice-President for Academic Affairs, the chief academic officer and the chief budget officer of the University. He has also served as the Associate Provost for Academic and Budgetary Affairs, Chair of the Department of Economics and Director of the Institute of Public Policy Studies (which is now the Gerald R. Ford School of Public Policy). In 1979 and 1980 he was a senior staff economist at the Council of Economic Advisers.

Courant has authored half a dozen books and more than seventy papers covering a broad range of topics in economics and public policy, including tax policy, state and local economic development, gender differences in pay, housing, radon and public health, relationships between economic growth and environmental policy, and university budgeting systems. More recently, his academic work has considered the economics of universities, the economics of libraries and archives, and the effects of new information technologies and other disruptions on scholarship, scholarly publication, and academic libraries.

Paul Courant holds a BA in History from Swarthmore College (1968); an MA in Economics from Princeton University (1973); and a PhD in Economics from Princeton University (1974).

Mercè Crosas, Harvard University

Dr. Mercè Crosas is the director of product development for the Institute for Quantitative Social Science (IQSS) at Harvard University. Dr. Crosas first joined IQSS in 2004 (when it was known as the Harvard-MIT Data Center) as manager of the Dataverse Network project. The product development team at IQSS now includes the Dataverse Network project, the Murray Research Archive, and the statistical and web development projects (OpenScholar and Zelig). Before joining IQSS, she worked for five years in the educational software and biotech industries, initially as a software developer and later as manager and director of IT and software development. Prior to that, she was at the Harvard-Smithsonian Center for Astrophysics, where she completed her doctoral thesis as a student fellow with the Atomic and Molecular Physics Institute, and afterwards was a post-doctoral fellow, a researcher, and a software engineer with the Radio Astronomy division. Dr. Crosas holds a PhD in Astrophysics from Rice University and a BS in Physics from the Universitat de Barcelona, Spain.

Christoph Csallner, University of Texas at Arlington

Christoph Csallner is an assistant professor in the Computer Science and Engineering Department at the University of Texas at Arlington (UTA). Before joining UTA, he worked for Google and Microsoft Research and received a PhD degree from Georgia Tech. He has received two Distinguished Paper Awards, the first one at the 2006 ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA) and the second one at the 2007 IEEE/ACM International Conference on Automated Software Engineering (ASE).

Mary Czerwinski, Microsoft Research

Mary Czerwinski is a research area manager of the Visualization and Interaction for Business and Entertainment (VIBE) research group at Microsoft Research. The group is responsible for studying and designing advanced technology and interaction techniques that leverage human capabilities across a wide variety of input and output channels. Mary’s primary research areas include studying group awareness systems, information visualization, and task switching. Mary has been an affiliate assistant professor at the Department of Psychology, University of Washington, since 1996. She has also held positions at Compaq Computer Corporation, Rice University, Lockheed Engineering and Sciences Corporation, and Bell Communications Research. She received a PhD in cognitive psychology from Indiana University in Bloomington. Mary is active in the field of Human-Computer Interaction, publishing and participating in a wide number of conferences, professional venues, and journals.

Peli de Halleux, Microsoft Research

Jonathan “Peli” de Halleux is a senior research software design engineer in the Research in Software Engineering group at Microsoft Research in Redmond, Washington, where he has worked on the Pex project since October 2006. From 2004 to 2006, he worked on the Common Language Runtime (CLR) as a software design engineer in test, in charge of the Just-In-Time compiler. Before joining Microsoft, he earned a PhD in Applied Mathematics from the Catholic University of Louvain. Earlier, he developed the unit testing framework MbUnit.

Rob DeLine, Microsoft Research

Robert DeLine is a principal researcher at Microsoft Research, working at the intersection of software engineering and human-computer interaction. His research group designs development tools in a user-centered fashion: they conduct studies of development teams to understand their work practice and prototype tools to improve that practice. Rob has a background in both HCI and software engineering. His master’s thesis was the first version of the Alice programming environment (University of Virginia, 1993), and his PhD was in software architecture (Carnegie Mellon University, 1999).

Richard DeMillo, Georgia Institute of Technology

Rich DeMillo is distinguished professor of Computing and Management at Georgia Tech. From 2002 to 2009, he was the John P. Imlay Dean of Computing at Tech. He has held senior executive appointments at Hewlett-Packard, where he was the chief technology officer, Telcordia Technologies, where he was Vice President for Computing Research, and the National Science Foundation, where he directed the Computing Research Division. He has also been on the faculty of Purdue University, the University of Padua, and the University of Wisconsin-Milwaukee. His current research focuses on computer security and on the future of higher education. His book, Abelard to Apple: The Fate of American Colleges and Universities in the 21st Century, will be published next year by MIT Press.

Carla Ellis, Duke University

Carla Schlatter Ellis is a Professor Emerita of Computer Science at Duke University. She received her PhD degree in Computer Science from the University of Washington, Seattle, in 1979. Before coming to Duke as an associate professor in 1986, she was a member of the Computer Science faculties at the University of Oregon, Eugene, from 1978 to 1980, and at the University of Rochester, Rochester, NY, from 1980 to 1986. She is currently a member of the CRA Committee on the Status of Women in Computing Research (CRA-W). She previously served on the board of the Computing Research Association (CRA), as co-chair of the Academic Alliance of the National Center for Women & IT (NCWIT), as co-chair of CRA-W, and as Editor-in-Chief of ACM Transactions on Computing Systems. Her research interests are in operating systems, mobile/wireless computing, and sustainability as it applies to computing.

Steven K. Feiner, Columbia University

Steve Feiner is Professor of Computer Science at Columbia University, where he directs the Computer Graphics and User Interfaces Lab. Prof. Feiner is coauthor of Computer Graphics: Principles and Practice and of Introduction to Computer Graphics, received an ONR Young Investigator Award, and together with his students, has won best paper awards at ACM UIST, ACM CHI, ACM VRST, and IEEE ISMAR. His lab created the first mobile augmented reality system using a see-through display in 1996, and pioneered applications of augmented reality to fields such as tourism, journalism, maintenance, and construction. In recent years, Prof. Feiner has been general chair or co-chair for ACM VRST 2008 (15th Symposium on Virtual Reality Software and Technology), INTETAIN 2008 (Second International Conference on Intelligent Technologies for Interactive Entertainment), and ACM UIST 2004 (17th Symposium on User Interface Software and Technology); and doctoral symposium chair for ACM UIST 2009 and 2010.

Anthony Finkelstein, University College London

Anthony Finkelstein is professor of Software Systems Engineering at University College London (UCL), a leading UK research university. He is a visiting professor at Imperial College London and at the National Institute of Informatics, Tokyo, Japan. He is currently Dean of the Faculty of Engineering Sciences. He has published more than 220 scientific papers. He is a Fellow of both the Institution of Engineering & Technology (IET) and the British Computer Society (BCS). In 2010, he received a special “outstanding contribution” award from the International Conference on Software Engineering. In 2009, he received the Oliver Lodge Medal of the IET for achievement in Information Technology. He has been recognised for his contributions to the field of requirements engineering and for his professional service by the IEEE. He was a winner of the International Conference on Software Engineering “most influential paper” prize for work on “viewpoints” and a winner of the Requirements Engineering “most influential paper” prize for work on traceability. He was a member of the winning team of the first Times Higher Education Research Project of the Year. He has served on numerous editorial boards, including those of ACM TOSEM and IEEE TSE, and was founding editor of Automated Software Engineering. He has also chaired numerous international meetings and was General Chair of the International Conference on Software Engineering. He has provided consultancy advice to a large number of high-profile companies and government organisations. He has established three successful “spinout” companies providing, respectively, professional services, product software, and an innovative software service.

Danyel Fisher, Microsoft Research

Danyel Fisher is a researcher in Microsoft Research Redmond’s Computer-Supported Collaboration and Visualization research group. He joined Microsoft Research in 2004 and has since worked on visualizations that help us understand how people talk to each other, how servers are running, and how people use websites. He got his Ph.D. from the University of California, Irvine, in 2004 and his master’s degree from the University of California, Berkeley, in 2000.

Andrew Fitzgibbon, Microsoft Research

Andrew Fitzgibbon is a senior researcher at Microsoft Research, Cambridge, UK. His research interests are in the intersection of computer vision and computer graphics, with excursions into neuroscience. Recent papers have been on the recovery of 3D geometry from 2D images, general-purpose camera calibration, human 3D perception, and the application of natural image statistics to problems of figure/ground separation and new-view synthesis.

He has twice received the IEEE’s Marr Prize, the highest honor in computer vision, and software he wrote won an Engineering Emmy Award in 2002 for significant contributions to the creation of complex visual effects.

He studied Mathematics and Computer Science at University College Cork, and received his PhD from Edinburgh University in 1997. Until June 2005 he held a Royal Society University Research Fellowship at Oxford University’s Department of Engineering Science.

Jeffrey Friedberg, Microsoft

Jeffrey Friedberg is chief trust architect for Microsoft. He drives the End to End Trust initiative, which seeks to create a safer, more trusted Internet. This effort includes investigating ways to make privacy and security features more usable for consumers and businesses. He speaks publicly on strategies for reducing Internet threats such as identity theft and has testified before Congress on protecting users from spyware. He co-authored the Microsoft Privacy Standard for Development and was responsible for Windows Privacy. Previously at Microsoft, he focused on privacy and legal issues relating to the Windows Media Platform and was a group program manager for Microsoft’s graphics software. He has more than 25 years of software development experience and has delivered products that range from graphics supercomputers used in medical imaging to next-generation gaming devices. As VP of Engineering at Silicon Gaming, he helped launch an IPO and chaired the Gaming Manufacturers Association. At Digital Equipment Corporation, he co-architected the industry-standard 3D graphics extensions for the MIT X Window System. In addition to being a Certified Information Privacy Professional, he has a formal background in computer graphics and a B.S. degree in Computer Science from Cornell University.

Wolfgang Gentzsch, DEISA & Open Grid Forum

Prof. Wolfgang Gentzsch is Dissemination Advisor for the DEISA Distributed European Infrastructure for Supercomputing Applications, and a member at large of the board of directors of the Open Grid Forum. Until recently, he was an adjunct professor of computer science at Duke University in Durham, and a visiting scientist at RENCI Renaissance Computing Institute at UNC Chapel Hill, both in North Carolina.

From 2005 to 2008, Dr. Gentzsch was the Chairman of the German D-Grid Initiative; Vice Chair of the e-Infrastructure Reflection Group e-IRG; Area Director of Major Grid Projects of the OGF Open Grid Forum Steering Group; and a member of the U.S. President’s Council of Advisors for Science and Technology (PCAST).

Before that, he was Managing Director of MCNC Grid and Data Center Services in North Carolina; Sun’s Senior Director of Grid Computing in Menlo Park, CA; President, CEO, and CTO of the HPC software companies Genias and Gridware; and a professor of mathematics and computer science at the University of Applied Sciences in Regensburg, Germany. Wolfgang Gentzsch studied mathematics and physics at the Technical Universities in Aachen and Darmstadt, Germany.

Saul Greenberg, University of Calgary

Saul Greenberg is a Full Professor in the Department of Computer Science at the University of Calgary. He holds the NSERC/iCORE/Smart Technologies Industrial Chair in Interactive Technologies, and a University Professorship – a distinguished University of Calgary award recognizing research excellence. He received the CHCCS Achievement award in May 2007, and was also elected to the prestigious ACM CHI Academy in April 2005 for his overall contributions to the field of Human Computer Interaction.

While he is a computer scientist by training, the work by Saul and his talented students typify the cross-discipline aspects of Human Computer Interaction, Computer Supported Cooperative Work, and Ubiquitous Computing. His many research contributions are bound by the common thread of situated interaction, which considers how computer technology fits within the fabric of people’s day to day activities. This includes how such technology blends naturally in the flow of people’s work practices, how people socialize and work together through technology, and how that technology fits within people’s physical environment.

Dr. Greenberg is a prolific author (he is listed as the sixth most frequent author in the HCI Bibliography) with a high impact factor (his uncorrected H-number is 45). He has authored and edited several books and published many refereed articles. He is also known for his strong commitment to making his tools, systems, and educational material readily available to other researchers and educators.

Jeffrey Heer, Stanford University

Jeffrey Heer is an Assistant Professor of Computer Science at Stanford University, where he works on human-computer interaction, visualization, and social computing. His research investigates the perceptual, cognitive, and social factors involved in making sense of large data collections, resulting in new interactive systems for visual analysis and communication.

Heer’s work has produced novel visualization techniques for exploring data, software tools that simplify visualization creation and customization, and collaborative analysis systems that leverage the insights of multiple analysts. He has also led the design of the Prefuse, Flare, and Protovis open-source visualization toolkits, which have been downloaded over 100,000 times; cited in over 500 research publications; and used by researchers, corporations, and thousands of data enthusiasts.

Heer received his B.S., M.S., and Ph.D. degrees in Computer Science from the University of California, Berkeley. Over the years, he has also worked at a number of research laboratories and corporations, including Xerox PARC, IBM Research, Microsoft Research, and Tableau Software. Heer is the recipient of the 2009 ACM CHI Best Paper Award, Faculty Awards from IBM and Intel, UC Berkeley’s C.V. Ramamoorthy Distinguished Research Award, and in 2009 was named to MIT Technology Review’s TR35, a list recognizing 35 innovators under the age of 35.

Tony Hey, Microsoft Research

As Corporate Vice President of the External Research Division of Microsoft Research, Tony Hey is responsible for the worldwide external research and technical computing strategy across Microsoft Corporation. He leads the company’s efforts to build long-term public-private partnerships with global scientific and engineering communities, spanning broad reach and in-depth engagements with academic and research institutions, related government agencies and industry partners. His responsibilities also include working with internal Microsoft groups to build future technologies and products that will transform computing for scientific and engineering research. Hey also oversees Microsoft Research’s efforts to enhance the quality of higher education around the world.

Before joining Microsoft, Hey served as director of the U.K.’s e-Science Initiative, managing the government’s efforts to provide scientists and researchers with access to key computing technologies. Before leading this initiative, Hey worked as Head of the School of Electronics and Computer Science and as Dean of Engineering and Applied Science at the University of Southampton, where he helped build the department into one of the most respected computer science research institutions in England.

His research interests focus on parallel programming for parallel systems built from mainstream commodity components. With Jack Dongarra, Rolf Hempel and David Walker, he wrote the first draft of a specification for a new message-passing standard called MPI. This initiated the process that led to the successful MPI standard of today.

Hey is a fellow of the U.K.’s Royal Academy of Engineering. He also has served on several national committees in the U.K., including committees of the U.K. Department of Trade and Industry and the Office of Science and Technology. He was a member of the British Computer Society, the Institution of Engineering and Technology, and the Institute of Physics.

Tony Hey also has a passionate interest in communicating the excitement of science to young people. He has written ‘popular’ books on quantum mechanics and on relativity.

Hey is a graduate of Oxford University, with both an undergraduate degree in physics and a doctorate in theoretical physics.

Ken Hinckley, Microsoft Research

Ken Hinckley is a research scientist at Microsoft Research. He has published widely on input devices and interaction techniques. The basic thrust of his research is to enhance the input vocabulary that one can express using common computational devices and user interfaces.

Scott Hudson, Carnegie Mellon University

Scott Hudson is a professor in the Human-Computer Interaction Institute within the School of Computer Science at Carnegie Mellon University where he was until recently the founding director of the HCII PhD program. Elected to the CHI Academy in 2006, he has published extensively on technology-oriented HCI topics. He has regularly served on program committees for the ACM SIGCHI and UIST conferences, and served as the papers co-chair for the SIGCHI 2009 and 2010 conferences.

Galen Hunt, Microsoft Research

Galen Hunt is a principal researcher in the Microsoft Research Operating Systems Group. He joined Microsoft Research in 1997 and has stayed ever since, except for a two-and-one-half-year sabbatical in the Windows Server Division. Galen’s more successful past efforts include Menlo, Singularity, Detours, and the first prototype of Windows Media Player and its networking protocols. His less successful efforts include graduate work in distributed shared memory and building a distributed version of the CLR in 1999. He has shipped bugs in Windows 2000, Windows XP, Windows Server 2003, and Windows Automated Deployment Services. Galen holds a PhD in Computer Science from the University of Rochester (where he contributed to GCC 2.1), a BS in Physics from the University of Utah (where he contributed to Linux 0.11), and more than 50 patents. Before graduate school, Galen worked at a startup firm reverse engineering file formats for tax programs. Aside from systems research, Galen enjoys spending time with his wife, daughter, and son.

Nadine Kano, Microsoft

Nadine Kano is a 21-year Microsoft veteran with a broad technical, business development, and communications background. She is a senior director of Solution Management in Microsoft’s Information Technology Group, working with a team responsible for driving adoption of Windows Azure, Microsoft’s cloud computing platform. Prior to joining Microsoft IT, she served as Senior Director of Product Marketing in Microsoft’s OEM Division, responsible for co-marketing programs to promote sales of new PCs. From 2000 through 2006, Kano served as the Director of Executive Communications for the head of the Windows Division, where she formalized a relationship with James Cameron to build the “Gaia” Digital Asset Management system used in making Avatar. Kano’s other roles at Microsoft have included software development, technical evangelism, event marketing, and business development. Kano graduated from Princeton University with a BSE in Computer Science in 1989 and received her MBA from Stanford University while working part-time for Microsoft in 1998.

Michael Keller, Stanford University

Michael A. Keller is Stanford’s University Librarian, director of Academic Information Resources, founder/publisher of HighWire Press, and publisher of the Stanford University Press.

Educated at Hamilton College (biology and music), SUNY/Buffalo (musicology), and SUNY/Geneseo (librarianship), he has led libraries at Cornell, UC/Berkeley, Yale, and Stanford. Keller’s board service includes Hamilton College, Long Now Foundation, Bibliotheca Alexandrina, Japan’s National Institute for Informatics, and National Library of China. Keller is a guest professor at the Chinese Academy of Sciences, Senior Presidential Fellow of the Council on Library and Information Resources, 2008 Fellow of the American Association for the Advancement of Science, and 2010 Fellow of the American Academy of Arts and Sciences.

He has served as advisor and consultant to numerous scientific and scholarly societies; the city of Ferrara, Italy; Newsweek magazine; Princeton and Indiana Universities; the National Library of China; the National Institute of Informatics of Japan; the Library at King Abdullah University of Science and Technology; and the National Library of Israel. During his watch at Stanford, numerous innovations exploiting the cascade of information technology advances and the Internet arose and are still flourishing, including HighWire Press; LOCKSS/CLOCKSS; CourseWork (Sakai); the GATT Digital Archive; the Stanford Digital Repository; and the Matthew Parker Online Library project at Corpus Christi College, Cambridge University.

He was a founder, as well as president and chairman, of the Digital Library Federation. Keller is Stanford’s principal on the Google Book Search project. He delivered a Siemens Stiftung Lecture in March of 2008 entitled, “The Future of Books, Libraries, and Publishing.”

Yousef A. Khalidi, Microsoft

Yousef Khalidi is a distinguished engineer in the Windows Azure team. Windows Azure is a platform for developing, deploying, managing, and hosting cloud-based Web services. Khalidi is responsible for several aspects of the platform, centered on the goal of building a low-cost, automated, large-scale computing system, using commodity hardware, with efficiently managed shared resources. Before Windows Azure, Khalidi led an advanced development team in Windows that tackled a number of related operating system areas, including application model, resource management, and isolation. He also served as a member of the Windows Core Architecture group.

Before joining Microsoft, Khalidi was a distinguished engineer at Sun Microsystems. During his 14-year tenure at Sun, he held several development, architecture, and management positions in Sun’s software division as well as in Sun Labs. Khalidi was Chief Technology Officer and Chief Architect of Solaris, Chief Architect and Director of the Sun Cluster product line, Chief Architect of Sun’s N1 utility computing initiative, and a principal architect of the Solaris MC and Spring operating systems. He shipped several releases of Sun Cluster and the Solaris operating system, and has worked on system management software, high-speed networking, and memory management hardware designs.

Khalidi has published works in several areas, including operating systems, high availability, distributed systems, object-oriented software, high speed networking, memory management, and computer architecture. He holds 30 patents in these areas. Khalidi has a Ph.D. and a Master of Science in Information and Computer Science from Georgia Institute of Technology (Georgia Tech) in Atlanta, Georgia.

James Larus, Microsoft Research

James Larus is Director of Research and Strategy for the eXtreme Computing Group (XCG) in Microsoft Research.

Larus has been an active contributor to the programming languages, compiler, and computer architecture communities. He has published many papers and served on numerous program committees and NSF and NRC panels. Larus became an ACM Fellow in 2006.

Larus joined Microsoft Research as a Senior Researcher in 1998 to start the Software Productivity Tools (SPT) group, which he led for five years. The group developed and applied a variety of innovative techniques in static program analysis and constructed tools that found defects (bugs) in software. Its research has had considerable impact on the research community and has shipped in Microsoft products such as the Static Driver Verifier and FxCop, as well as other widely used internal software development tools. Larus then became the Research Area Manager for programming languages and tools and started the Singularity research project, which demonstrated that modern programming languages and software engineering techniques could fundamentally improve software architectures.

Before joining Microsoft, Larus was an Assistant and Associate Professor of Computer Science at the University of Wisconsin–Madison, where he published approximately 60 research papers and co-led the Wisconsin Wind Tunnel (WWT) research project with Professors Mark Hill and David Wood. WWT was a DARPA- and NSF-funded project that investigated new approaches to simulating, building, and programming parallel shared-memory computers. Larus’s research spanned a number of areas, including new and efficient techniques for measuring and recording executing programs’ behavior, tools for analyzing and manipulating compiled and linked programs, programming languages for parallel computing, tools for verifying program correctness, and techniques for compiler analysis and optimization.

Larus received his MS and PhD in Computer Science from the University of California, Berkeley, in 1989, and an AB in Applied Mathematics from Harvard in 1980. At Berkeley, Larus developed one of the first systems to analyze Lisp programs and determine how best to execute them on a parallel computer.

Ed Lazowska, University of Washington

Ed Lazowska holds the Bill & Melinda Gates Chair in Computer Science & Engineering at the University of Washington. Lazowska’s research and teaching concern the design, implementation, and analysis of high-performance computing and communication systems. Current work includes directing the University of Washington eScience Institute, and chairing the Computing Community Consortium. In 2003 he co-chaired the President’s Information Technology Advisory Committee. In 2005 he received the ACM Presidential Award “for showing us how to advocate effectively for IT research and advanced education” and in 2009 the ACM Distinguished Service Award “for more than two decades of wide-ranging and tireless service to the computing community, especially in advocacy at a national level.”

Johnny Lee, Microsoft

Johnny Chung Lee is a researcher in the Microsoft Applied Sciences group and explores novel input and output devices that can improve interaction with computing technology. His main responsibilities include advising the direction of existing hardware product lines and developing prototypes that may be developed into new products.

Lee joined Microsoft in June 2008 after graduating with a doctoral degree in human-computer interaction from Carnegie Mellon University. His research work spans a variety of topics including projection technology, multitouch input, augmented reality, brain-computer interfaces, and haptics. Lee is best known for his video tutorials on using the Nintendo Wii remote to create low-cost whiteboards and virtual reality displays, which have garnered more than 10 million views. In 2008, he was named to the prestigious TR35 list presented by Technology Review magazine to recognize the top 35 researchers in the world under the age of 35.

Ben Liblit, University of Wisconsin–Madison

Ben Liblit is an Assistant Professor in the Computer Sciences Department of the University of Wisconsin–Madison. Professor Liblit’s research interests include programming languages and software engineering generally, with particular emphasis on combining machine learning with static and dynamic analysis for program understanding and debugging.

Professor Liblit worked as a professional software engineer for four years before beginning graduate study. His experience has inspired a research style that emphasizes practical, best-effort techniques that cope with the ugly complexities of real-world software development. Professor Liblit completed his PhD in 2004 at UC Berkeley with advisor Alex Aiken, and received the 2005 ACM Doctoral Dissertation Award for his work on post-deployment statistical debugging.

Jimmy Lin, University of Maryland–College Park

Jimmy Lin is an associate professor in the iSchool at the University of Maryland and directs the newly-formed Cloud Computing Center there. Dr. Lin’s research lies at the intersection of information retrieval and natural language processing. He graduated with a Ph.D. in Electrical Engineering and Computer Science from MIT in 2004.

Steve Lucco, Microsoft

Steven E. Lucco is a Distinguished Engineer who co-founded the Microsoft Connected Systems Division and who has played a key role in developing the technical strategy and architecture for Microsoft’s application platform, including the Windows Communication Foundation, which shipped in 2006. He continues to work within CSD on the goal of making distributed systems programming tractable.

As a development lead and architect at Microsoft, Lucco built a team that designed and prototyped a secure, common-language runtime. This team joined with the JavaVM team at Microsoft and played a key role in the design and implementation of the Microsoft Common Language Runtime. As a senior researcher at Microsoft, Lucco published widely-cited articles on virtual machine design and implementation.

Before he joined Microsoft, Lucco co-founded Colusa Software in 1993 and served as its technical leader until it was acquired by Microsoft in 1996. Colusa shipped a cross-language virtual machine, which provided secure execution of both strongly-typed (Java, C#) and weakly-typed (C++) programs, to customers including Tandem and IBM.

Lucco received a PhD from UC Berkeley in 1994, where he pursued research interests in programming language design and implementation, natural language understanding, and runtime systems for multi-core processors. He also pursued these interests as an undergraduate at Yale, as a researcher at Bell Labs, and as a faculty member at Carnegie Mellon University.

Liz Lyon, UKOLN/University of Bath UK

Dr Liz Lyon is the Director of UKOLN at the University of Bath, UK, where she leads work to promote synergies between digital libraries and open science environments. She is Associate Director of the UK Digital Curation Centre, in which UKOLN is a partner. She is also author of a number of major direction-setting reports, including Open Science at Web-Scale: Optimising Participation and Predictive Potential (2009), Scaling Up (2008), and Dealing with Data (2007). These reports have been informed by a series of pioneering research data management projects: eBank UK, eCrystals Federation, and Infrastructure for Integration in Structural Sciences (I2S2), all of which have explored links between research data, scholarly communications, and learning in the chemical crystallography domain. Liz Lyon has a doctorate in cellular biochemistry and has also worked in various university libraries.

Matt Madden, Avatar

As Vice President of Production and Development for Giant Studios, Matt Madden is responsible for all aspects of the production pipeline, a responsibility he has held since co-launching the studio in 1999. Giant Studios is widely recognized as the industry leader in performance-capture technology and virtual-production services, and Madden and his team received a Technical Achievement Academy Award in 2005. He is currently in production on DreamWorks Studios’ Real Steel as virtual production supervisor. Madden most recently served as performance capture supervisor on Avatar and digital production supervisor on the upcoming Steven Spielberg/Peter Jackson feature Tintin. Other credits as motion capture supervisor include The Polar Express, Night at the Museum, The Day After Tomorrow, The Chronicles of Narnia: Prince Caspian, Iron Man, The Incredible Hulk, and King Kong. Madden began his long-time association with director Peter Jackson while serving as motion capture consultant on The Lord of the Rings trilogy.

Tara McPherson, University of Southern California

Tara McPherson is associate professor of Gender and Critical Studies at the University of Southern California’s School of Cinematic Arts. Her Reconstructing Dixie: Race, Gender and Nostalgia in the Imagined South (Duke UP, 2003) received the 2004 John G. Cawelti Award for the outstanding book published on American Culture, among other awards. She is co-editor of Hop on Pop: The Politics and Pleasures of Popular Culture (Duke UP, 2003) and editor of Digital Youth, Innovation and the Unexpected, part of the MacArthur Foundation series on Digital Media and Learning (MIT Press, 2008). Her writing has appeared in numerous journals, including Camera Obscura, The Velvet Light Trap, Discourse, and Screen, and in edited anthologies such as Race and Cyberspace, The New Media Book, The Object Reader, Virtual Publics, The Visual Culture Reader 2.0, and Basketball Jones. The anthology Interactive Frictions, co-edited with Marsha Kinder, is forthcoming from the University of California Press, and she is currently working on a manuscript on the cultural and racial logics of code. Her new media research focuses on issues of convergence, gender, and race, as well as on the development of new tools and paradigms for digital publishing, learning, and authorship.

She is the founding editor of Vectors, a multimedia peer-reviewed journal affiliated with the Open Humanities Press, and is one of three editors of the new MacArthur-supported International Journal of Learning and Media (launched by MIT Press in 2009). Tara was among the founding organizers of Race in Digital Space, a multi-year project supported by the Annenberg Center for Communication and the Ford and Rockefeller Foundations. She is on the advisory board of the Mellon-funded Scholarly Communications Institute, is a member of the Academic Advisory Board of The Academy of Television Arts and Sciences Archives, has frequently served as an AFI juror, is a core board member of HASTAC, and is on the boards of several journals. At USC, she co-directs (with Phil Ethington) the new Center for Transformative Scholarship and is a fellow at the Center for Excellence in Teaching. With major support from the Mellon Foundation, she is currently working with colleagues from Brown, NYU, Rochester, and UC San Diego and with several academic presses and archives to explore new modes of scholarship for visual culture research.

Michael Medlock, Microsoft

Michael Medlock is a senior experience researcher in the Entertainment Experience Group at Microsoft. He has 13 years of industry experience researching how cutting-edge technology is commercialized into products. He has worked on more than 60 game titles, about half of which have never seen the light of day. Of the ones that have shipped, a number have been successful Xbox and PC games, such as Project Gotham Racing, Age of Empires II, Dungeon Siege, Crimson Skies, Flight Simulator, and Top Spin. He has also worked on UIs for Kinect (Air Gesture + Voice), Windows Phone 6.0–7.0 (Touch), Internet Explorer, and internal HR business systems for Microsoft. He has documented and evangelized the Rapid Iterative Testing and Evaluation (RITE) method, which is used broadly in the usability community and is currently taught in a number of HCI programs around the world.

Simon Mercer, Microsoft Research

Dr. Mercer has a background in zoology and has worked in various aspects of bioinformatics over the years. Having managed the development of the Canadian Bioinformatics Resource, a national life-science service-provision network, he worked as Director of Software Engineering at Gene Codes Corporation before moving to the External Research team of Microsoft Research. In his current role as Director of Health and Wellbeing, he manages collaborations between Microsoft and academia in the area of healthcare research. Dr. Mercer’s interests include bioinformatics, translational medicine, and the management of scientific data.

William Michener, University of New Mexico

William Michener is professor and director of e-Science Initiatives for University Libraries at the University of New Mexico. He has authored four books related to ecological informatics and more than 70 journal articles and book chapters. He is a Certified Senior Ecologist and serves as editor of Ecological Archives and associate editor of the International Journal of Ecological Informatics. He has directed several large interdisciplinary research programs including the National Science Foundation’s (NSF) Biocomplexity Program, the Development Program for the U.S. Long-Term Ecological Research Network, and numerous cyberinfrastructure research and development projects. His current efforts focus on developing information technologies for the biological, ecological, and environmental sciences through DataONE—a large, multi-institutional, international research project funded by NSF.

Natasa Milic-Frayling, Microsoft Research

As principal researcher at Microsoft Research Cambridge (MSRC), Natasa Milic-Frayling sets research directions for the Integrated Systems group, focusing on the design, prototyping, and evaluation of information and communication systems and services. She also serves as Director of Research Partnership with industry, the MSRC programme that she started in 2004 to facilitate collaboration among Microsoft Research; Microsoft teams across Europe, the Middle East, and Asia; and Microsoft clients and partners.

The programme reaches out to leading industry organizations that wish to exchange knowledge and experience in specific areas of research and to collaborate on tackling strategic problems. As a result, MSRC works with consortia of partners on the EU-sponsored PLANETS project, focused on long-term preservation of digital content, and on the CFMS programme, which looks at the rich context of engineering workflows and at ways to capture and disseminate knowledge and best practices.

Natasa joined Microsoft Research Cambridge, UK in 1998. She received her Doctorate in Applied Mathematics from Carnegie Mellon University, Pittsburgh, PA in 1988. Prior to joining Microsoft Research, she worked at Claritech Corporation (currently JustSystems Evans Research), a spin-off company from Carnegie-Mellon University, acquired by the Justsystem Corporation of Japan in 1996. There she served as Director of Research.

Natasa is actively involved with the wider research and academic community, publishing at academic conferences, co-organizing academic and industry events, and promoting research and innovation through public speaking and engagements.

Ade Miller, Microsoft

Ade Miller is currently the development manager for Microsoft’s patterns & practices group (p&p), where he manages several agile teams. His primary interests are parallel computing and improving the way people develop software. He is currently writing a book on design patterns for parallel programming.

Prior to leading the p&p development team, Ade led the development of the p&p Web Services Software Factory: Modeling Edition. Before joining p&p, he was a developer and then a development lead on Visual Studio Tools for Office.

Prior to joining Microsoft, Ade worked on a variety of interesting projects including a web start-up, embedded languages, and High Performance Computing (HPC). Ade is a regular speaker and also blogs and writes about his experiences. Ade received his BSc and PhD in Physics from the University of Southampton, UK.

Dan Morris, Microsoft Research

Dan Morris is a researcher in the Computational User Experiences group at Microsoft Research; his work focuses on novel input devices, patient-facing technology for medical environments, and computer support for music and creativity.

Morris studied neurobiology as an undergraduate at Brown University, and developed brain-computer interfaces first at Brown and later as an engineer at Cyberkinetics, Inc. His PhD thesis at Stanford University focused on haptic rendering and physical simulation for virtual surgery, and his work since coming to Microsoft Research (in 2006) has included using physiological signals for input systems, generating automatic accompaniment for sung melodies, and designing patient-centric information displays for hospitals.

James L. Mullins, Purdue University

James L. Mullins has been Dean of Libraries and Professor of Library Science at Purdue University since 2004, when he came from the Massachusetts Institute of Technology (MIT) Libraries, where he was associate director for administration. His more than 30-year career includes administrative positions at Villanova University and Indiana University. He earned BA and MALS degrees from the University of Iowa and the PhD from Indiana University.

Dr. Mullins has served in leadership positions within the American Library Association (ALA) and the Association of Research Libraries (ARL), and presently is an elected member of the ARL board of directors and chair of the e-Science Working Group. He serves on the editorial board of College and Research Libraries, the premier journal in the field. He also serves on the boards of directors of the International Association of Scientific and Technological University Libraries (IATUL) and the Center for Research Libraries (CRL), and is a delegate to the Science and Technology Section of the International Federation of Library Associations (IFLA). Purdue hosted the 2010 IATUL Conference in June, which focused on the role of libraries in e-science. He was a signatory to the formation of DataCite in December 2009, the international consortium that assigns digital object identifiers (DOIs) to datasets for citation.

Dr. Mullins is a frequent contributor to the professional literature, speaks at national and international conferences, and consults with research libraries and universities internationally on challenges facing research communication and dissemination. He has served on NSF panels including one in 2006 recommending that data management plans be required for NSF research funding.

Moni Naor, Weizmann Institute of Science

Moni Naor is a professor of computer science at the Weizmann Institute of Science in Rehovot, Israel. He was born, raised, and educated in Haifa, Israel. He received his B.A. in computer science from the Technion, Israel Institute of Technology, in 1985, and his Ph.D. in computer science from the University of California at Berkeley in 1989. He was with the IBM Almaden Research Center from 1989 to 1993. In 1993, he joined the Department of Computer Science and Applied Math of the Weizmann Institute of Science, where he currently holds the Judith Kleeman Professorial Chair.

He works in various fields of computer science, mainly the foundations of cryptography; his main research interests include cryptography and computational and concrete complexity. He was named an IACR Fellow in 2008.

Michael L. Nelson, Old Dominion University

Michael L. Nelson is an associate professor of computer science at Old Dominion University. Prior to joining ODU, he worked at NASA Langley Research Center from 1991 to 2002. He is a co-editor of the OAI-PMH and OAI-ORE specifications and is a 2007 recipient of an NSF CAREER award. He has developed many digital libraries, including the NASA Technical Report Server. His research interests include repository-object interaction and alternative approaches to digital preservation.

Ed Nightingale, Microsoft Research

Ed Nightingale is a researcher in the operating systems group at Microsoft Research. He enjoys working on just about anything related to systems research. Lately, that has involved OS support for new memory technologies such as Phase Change Memory, data mining, heterogeneous hardware architectures, and large-scale storage systems.

Andrew Phelps, Rochester Institute of Technology

Andrew Phelps is the Chair of the Department of Interactive Games & Media at the Rochester Institute of Technology in Rochester, New York. He is the co-founder of the Master of Science in Game Design & Development within the B. Thomas Golisano College of Computing and Information Sciences, as well as the bachelor’s degree of the same name, and his work in games education has been featured in The New York Times, CNN.com, USA Today, National Public Radio, IEEE Computer, and several other periodicals. He regularly publishes academic work exploring collaborative game engines and game engine technology. As a professor at the institute, he teaches courses in multimedia programming, game engine development, 2D and 3D graphics, media design, and interactivity. His primary research interests include online games, electronic entertainment, three-dimensional graphics and real-time rendering, virtual reality, and interactive worlds.

Rick Rashid, Microsoft Research

As senior vice president, Richard (Rick) F. Rashid oversees worldwide operations for Microsoft Research, an organization encompassing more than 850 researchers across six labs worldwide. Under Rashid’s leadership, Microsoft Research conducts both basic and applied research across disciplines that include algorithms and theory; human-computer interaction; machine learning; multimedia and graphics; search; security; social computing; and systems, architecture, mobility, and networking. His team collaborates with the world’s foremost researchers in academia, industry, and government on initiatives to advance the state of the art of computing and to help ensure the future of Microsoft’s products.

After joining Microsoft in September 1991, Rashid served as director and vice president of the Microsoft Research division and was promoted to his current role in 2000. In his earlier roles, Rashid led research efforts on operating systems, networking and multiprocessors, and authored patents in such areas as data compression, networking and operating systems. He managed projects that catalyzed the development of Microsoft’s interactive TV system and also directed Microsoft’s first e-commerce group. Rashid was the driving force behind the creation of the team that later developed into Microsoft’s Digital Media Division.

Before joining Microsoft, Rashid was professor of computer science at Carnegie Mellon University (CMU). As a faculty member, he directed the design and implementation of several influential network operating systems and published extensively about computer vision, operating systems, network protocols and communications security. During his tenure, Rashid developed the Mach multiprocessor operating system, which has been influential in the design of modern operating systems and remains at the core of several commercial systems.

Rashid’s research interests have focused on artificial intelligence, operating systems, networking, and multiprocessors. He has participated in the design and implementation of the University of Rochester’s Rochester Intelligent Gateway operating system, the Rochester Virtual Terminal Management System, the CMU Distributed Sensor Network Testbed, and CMU’s SPICE distributed personal computing environment. He also co-developed one of the earliest networked computer games, “Alto Trek,” during the mid-1970s.

Rashid was inducted into the National Academy of Engineering in 2003 and presented with the Institute of Electrical and Electronics Engineers Emanuel R. Piore Award and the SIGOPS Hall of Fame Award in 2008. He was also inducted into the American Academy of Arts & Sciences in 2008. In addition, Rashid is a member of the National Science Foundation Computer Directorate Advisory Committee and a past member of the Defense Advanced Research Projects Agency UNIX Steering Committee and the Computer Science Network Executive Committee. He is also a former chairman of the Association for Computing Machinery Software System Awards Committee.

Rashid received master of science (1977) and doctoral (1980) degrees in computer science from the University of Rochester. He graduated with honors in mathematics and comparative literature from Stanford University in 1974.

Amit Ray, Rochester Institute of Technology

Amit Ray is Associate Professor in the Department of English, College of Liberal Arts, at the Rochester Institute of Technology. He received his Ph.D. in Language and Literature from the University of Michigan, Ann Arbor, where he specialized in postcolonial studies, working primarily with Simon Gikandi and Aamir Mufti. Dr. Ray’s first book, Negotiating the Modern (Routledge, 2007) explores the development of South Asian Orientalism and its impact upon European and Indian modernity.

He began working with wikis in 2004, initially examining their pedagogical possibilities in his teaching. He is currently working on a book project entitled “Writing Babel: Wikis, Authorship and Authority in the Public Sphere” that explores how the collaborative authoring environment of wikis impacts the public sphere by challenging long-standing notions of authorship, authority, credit, and expertise. In particular, Dr. Ray investigates how distributed models of textuality present alternatives to copyright and proprietary media models, test government and corporate secrecy, provide new models of distributed expertise, and generate novel opportunities for cross-cultural, trans-linguistic translation, dialogue, and debate.

George Robertson, Microsoft Research

George Robertson is an ACM Fellow, a member of the CHI Academy, and a principal researcher at Microsoft Research, where he leads an information visualization research group. Before coming to Microsoft, he was a principal scientist at Xerox PARC, working on 3D interactive animation interfaces for intelligent information access, and was the architect of the Information Visualizer. He has also been a senior scientist at Thinking Machines, a senior scientist at Bolt Beranek and Newman, and a faculty member of the Computer Science Department at Carnegie Mellon University. In the past, he has made significant contributions to machine learning, multimedia message systems, hypertext systems, operating systems, and programming languages. Robertson serves on the advisory board of the Department of Homeland Security National Visualization and Analytics Center. He is an associate editor of the journal Information Visualization. He served on the Information Visualization Steering Committee from 1995 to 2009. He chaired UIST ’97 and InfoVis 2004.

Alexander I. Rudnicky, Carnegie Mellon University

Dr. Rudnicky’s research has spanned many aspects of spoken language, including knowledge-based recognition systems, language modeling, architectures for spoken language systems, multi-modal interaction, the design of speech interfaces, and the rapid prototyping of speech-to-speech translation systems. Dr. Rudnicky has been active in research into spoken dialog, and has made contributions to dialog management, language generation, and the computation of confidence metrics for recognition and understanding. His recent interests include the automatic creation of summaries from event streams, automated meeting understanding and summarization, and language-based human-robot communication. Dr. Rudnicky is currently a principal systems scientist in the Computer Science Department at Carnegie Mellon University and is on the faculty of its Language Technologies Institute.

Roland Saekow, University of California–Berkeley

Roland Saekow received his bachelor’s degree in Science, Technology, and Society through the Interdisciplinary Field Studies program at the University of California, Berkeley. His thesis, titled The Transforming Role of Timelines in the Study of History, compared the educational utility of online timelines with that of their traditional counterparts found in textbooks.

At Berkeley, Roland led the student-run product design team called “Berkeley Innovation.” Proudly wearing the team’s bright orange shirts (a color chosen to represent new ideas), he and his teammates sought out ways of improving student life through creativity techniques, brainstorming sessions and rapid prototyping. He was primarily involved with the eFlyer project, which used LCD panels on campus to provide information to students digitally without waste.

He also taught a course on waste management, titled “The Joy of Garbage,” through the student-run democratic education program. The course investigated everything from the history of garbage, to its current state around the world, to the future of waste management.

He is presently working on bringing the ChronoZoom project to life: an online, zoomable timeline that aims to make time relationships between different studies of history clear and vivid, covering everything from the big bang to the present.

Lucy Sanders, National Center for Women & Information Technology

Lucy Sanders is CEO and co-founder of the National Center for Women & Information Technology and also serves as Executive-in-Residence for the ATLAS Institute at the University of Colorado at Boulder.

She has more than 20 years of experience in industry, having worked in R&D and executive positions at AT&T Bell Labs, Lucent Bell Labs, and Avaya Labs, where she specialized in systems-level software and solutions (multimedia communication and customer relationship management). In 1996, Lucy received the Bell Labs Fellow Award, the highest technical honor bestowed by the company, and she holds six patents in the communications technology area.

Lucy serves on several boards, including the Mathematical Sciences Research Institute (MSRI) Board of Trustees at the University of California at Berkeley; the Engineering Advisory Council at the University of Colorado at Boulder; the National Girls Collaborative Project Advisory Board; the Advisory Board for the Women’s College Applied Computing Program at the University of Denver; the ATLAS Advisory Board; and several corporate boards. She is a member of the ACM nominating committee and the ACM-W Advisory Board.

In 2004, Lucy was awarded the Distinguished Alumni Award from the Department of Engineering at CU, and in 2007 she was inducted into the Women in Technology International (WITI) Hall of Fame. Lucy has served as Conference Chair and Program Chair for the Grace Hopper Celebration of Women in Computing, and has served on the Information Technology Research and Development Ecosystem Commission for the National Academies. In 2009, she was recognized as a Microsoft Community Partner.

Lucy received her BS and MS in Computer Science from Louisiana State University and the University of Colorado at Boulder, respectively.

Jesse Schell, Carnegie Mellon University

Prior to starting Schell Games in 2004, Jesse was the Creative Director of the Disney Imagineering Virtual Reality Studio, where he worked and played for seven years as designer, programmer, and manager on several projects for Disney theme parks and DisneyQuest, as well as on Toontown Online, the first massively multiplayer game for kids. Before that, he worked as writer, director, performer, juggler, comedian, and circus artist for both Freihofer’s Mime Circus and the Juggler’s Guild.

Jesse is also on the faculty of the Entertainment Technology Center at Carnegie Mellon University, where he teaches classes in Game Design and serves as advisor on several innovative projects. Formerly the Chairman of the International Game Developers Association, he is also the author of the award-winning book The Art of Game Design: A Book of Lenses. In 2004, he was named one of the world’s Top 100 Young Innovators by Technology Review, MIT’s magazine of innovation. His primary responsibility at Schell Games is to make sure everyone is having fun and creating beautiful things.

Ben Shneiderman, University of Maryland

Ben Shneiderman is a professor in the Department of Computer Science and founding director (1983–2000) of the Human-Computer Interaction Laboratory at the University of Maryland. He was elected a Fellow of the Association for Computing Machinery (ACM) in 1997, a Fellow of the American Association for the Advancement of Science (AAAS) in 2001, and a Member of the National Academy of Engineering in 2010. He received the ACM SIGCHI Lifetime Achievement Award in 2001.

Ben is the co-author with Catherine Plaisant of Designing the User Interface: Strategies for Effective Human-Computer Interaction (5th ed., 2010). With Stu Card and Jock Mackinlay, he co-authored Readings in Information Visualization: Using Vision to Think (1999). With Ben Bederson, he co-authored The Craft of Information Visualization (2003). His book Leonardo’s Laptop (2002, MIT Press) won the IEEE book award for Distinguished Literary Contribution. His latest book, co-authored with Derek Hansen and Marc A. Smith, is Analyzing Social Media Networks with NodeXL and will be published in August 2010.

Harry Shum, Microsoft Research

Harry Shum is the corporate vice president responsible for search product development at Microsoft Corporation. Previously, he oversaw the research activities at Microsoft Research Asia and the lab’s collaborations with universities in the Asia Pacific region, and was responsible for the Internet Services Research Center, an applied research organization dedicated to long-term and short-term technology investments in search and advertising at Microsoft.

Shum joined Microsoft Research in 1996 as a researcher based in Redmond, Washington. He moved to Beijing as one of the founding members of Microsoft Research China (later renamed Microsoft Research Asia). There he began a nine-year tenure as a research manager, subsequently becoming assistant managing director, managing director of Microsoft Research Asia, Distinguished Engineer, and corporate vice president.

Shum is an Institute of Electrical and Electronics Engineers Fellow and an Association for Computing Machinery Fellow for his contributions to computer vision and computer graphics. He has published more than 100 papers on computer vision, computer graphics, pattern recognition, statistical learning, and robotics. He holds more than 50 U.S. patents.

Shum received a doctorate in robotics from the School of Computer Science at Carnegie Mellon University in Pittsburgh. In his spare time he enjoys playing basketball, rooting for the Pittsburgh Steelers and spending time with his family.

Marc Smith, Connected Action

A sociologist specializing in the social organization of online communities and computer mediated interaction, Marc Smith founded and managed the Community Technologies Group at Microsoft Research in Redmond, Washington, and led the development of social media reporting and analysis tools for Telligent Systems. Smith leads the Connected Action consulting group and lives and works in Silicon Valley, California. Smith co-founded the Social Media Research Foundation, a non-profit devoted to open tools, data, and scholarship related to social media research.

Smith is the co-editor with Peter Kollock of Communities in Cyberspace (Routledge), a collection of essays exploring the ways identity, interaction, and social order develop in online groups. Along with Derek Hansen and Ben Shneiderman, he is the co-author and editor of Analyzing Social Media Networks with NodeXL: Insights from a Connected World, forthcoming from Morgan Kaufmann in July 2010, which is a guide to mapping connections created through computer-mediated interactions.

Smith’s research focuses on computer-mediated collective action: the ways group dynamics change when they take place in and through social cyberspaces. Many “groups” in cyberspace produce public goods and organize themselves in the form of a commons. Smith’s goal is to visualize these social cyberspaces, mapping and measuring their structure, dynamics, and life cycles. At Microsoft, he developed the “Netscan” web application and data-mining engine, which allows researchers studying Usenet newsgroups and related repositories of threaded conversations to get reports on the rates of posting, posters, crossposting, thread length, and frequency distributions of activity. Smith applied this work to the development of a generalized community-analysis platform for Telligent, providing a web-based system for groups of all sizes to discuss and publish their material to the web and analyze the emergent trends that result. He contributes to the open and free NodeXL project, which adds social network analysis features to the familiar Excel spreadsheet and enables social network analysis of email, Twitter, Flickr, web, Facebook, and other network data sets. A freely available tutorial on social network analysis is evolving into a book.

The Connected Action consulting group applies social science methods in general, and social network analysis techniques in particular, to enterprise and Internet social media usage. Social network analysis of data from message boards, blogs, wikis, friend networks, and shared file systems can reveal insights into organizations and processes. Community managers can gain actionable insights into the volumes of community content created in their social media repositories. Mobile social software applications can visualize patterns of association that are otherwise invisible.

Smith received a B.S. in International Area Studies from Drexel University in Philadelphia in 1988, an M.Phil. in social theory from Cambridge University in 1990, and a Ph.D. in Sociology from UCLA in 2001. He is an affiliate faculty at the Department of Sociology at the University of Washington and the College of Information Studies at the University of Maryland. Smith is also a Distinguished Visiting Scholar at the Media-X Program at Stanford University.

Desney Tan, Microsoft Research

Desney is a senior researcher in the visualization and interaction area at Microsoft Research, where he manages the Computational User Experiences group in Redmond, Washington, as well as the Human-Computer Interaction group in Beijing, China. He also holds an affiliate faculty appointment in the Department of Computer Science and Engineering at the University of Washington. Desney’s research interests include human-computer interaction, physiological computing, and healthcare. However, over the years, he has worked on projects in many other domains. Desney received his Bachelor of Science in Computer Engineering from the University of Notre Dame in 1996, after which he spent a couple of years building bridges and blowing things up in the Singapore Armed Forces. He later attended Carnegie Mellon University, where he worked with Randy Pausch and earned his PhD in Computer Science in 2004. Desney was honored as one of MIT Technology Review’s 2007 Young Innovators Under 35 for his work on brain-computer interfaces. He was also named one of SciFi Channel’s Young Visionaries at TED 2009, as well as Forbes’ Revolutionaries: Radical Thinkers and their World-Changing Ideas for his work on Whole Body Computing. As if serving as Technical Program Chair for CHI 2008 wasn’t enough, he is now serving as General Chair for CHI 2011 in Vancouver, British Columbia.

Stewart Tansley, Microsoft Research

Stewart Tansley is a senior research program manager responsible for devices in Natural User Interactions in External Research at Microsoft Research. Before joining Microsoft in 2001, he spent 13 years in the telecommunications industry in software research and development, focusing on technology transfer. He has a PhD in Artificial Intelligence applied to Engineering from Loughborough University, UK. He has published a variety of papers on robotics, artificial intelligence, and network management; holds several patents; and has co-authored a book on software engineering for AI applications. In 2009, he co-edited The Fourth Paradigm, a book collating visionary essays on the emerging field of data-intensive science.

Chuck Thacker, Microsoft Research

Chuck Thacker was fortunate to enter computing at a time when the fundamental electronic technologies had matured to the point that many of the predictions of the field’s pioneers could finally be achieved. Educated in physics at the University of California at Berkeley, he joined the university’s Project Genie in 1968. This project had constructed one of the most successful early timesharing computers, the SDS 940, and was planning a follow-on system when he arrived.

The project became the Berkeley Computer Corporation, which developed the BCC 500 timesharing system. Here, he led the group designing the system’s central memory and microprocessor. Although not a commercial success, BCC supplied the core group of technologists for the newly-formed Computer Science Laboratory at the Xerox Palo Alto Research Center (PARC), which he joined in 1970.

During his 13 years at PARC, Chuck led the hardware development of most of the innovative systems that were developed at CSL. He was the project leader of the MAXC timesharing system, a PDP-10-equivalent that was one of the first systems to make use of semiconductor memory. He was the chief designer of the Alto, the first personal computer to use a bit-mapped display and mouse to provide a windowed user interface. He is a co-inventor of the Ethernet local area network, and contributed to many other projects, including the first laser printer and the Dorado, a high-performance ECL-technology personal workstation. He also designed and implemented the SIL CAD system, which was used by most PARC hardware designers throughout the ’70s. In the early ’80s, he was architect of the Dragon, a multiprocessor system that employed the first “snooping” cache.

In 1983, Chuck was a founder of the Digital Equipment Corporation’s Systems Research Center. Here he led the hardware development of the Firefly, the first multiprocessor workstation, and the Alpha Demonstration Unit, the first Alpha-architecture multiprocessor.

Chuck has also worked extensively in computer networking. He led the development of AN1, a local area network that used active switches and 100 Megabit-per-second point-to-point links to provide high aggregate performance. The follow-on project, AN2, also developed by his team, became the DEC Gigaswitch/ATM product.

He joined Microsoft in 1997 to help establish the company’s Cambridge, England, laboratory. After returning to the United States in 1999, he joined the newly-formed Tablet PC group and managed the design of the first prototypes of this new device. He then worked on a project to make computing more pervasive and effective in K-12 education. He is currently setting up a group at Microsoft Research in Silicon Valley to do computer architecture research.

Chuck has published extensively and holds a number of U.S. patents in computer systems and networking. In 1984, he was awarded (with B. Lampson and R. Taylor) the ACM’s Software Systems Award for the development of the Alto. He is a Distinguished Alumnus of the Computer Science Department of the University of California, and holds an Honorary Doctorate from the Swiss Federal Institute of Technology (ETH). He is a member of the IEEE, a fellow of the ACM, a member of the American Academy of Arts and Sciences, and a member of the National Academy of Engineering, which awarded him (with Butler Lampson, Alan Kay, and Robert Taylor) the 2004 Charles Stark Draper Prize for the development of the first networked personal computers. In 2007, he was awarded the John von Neumann Medal by the IEEE.

Kudo Tsunoda, Microsoft

Having joined Microsoft Game Studios (MGS) two years ago, Kudo is the creative director on Project Natal and general manager of three internal studios. As creative director on Project Natal, he is responsible for the creative direction and cohesion of all elements of the Natal program. As general manager, he is working on nine different AAA titles developed both internally and externally.

Prior to working at MGS, Kudo was the general manager and executive producer of EA Chicago. He was responsible for establishing EA Chicago as a leading next-gen video game studio. In this role, he set the creative tone and production methodologies for all EA Chicago games including the hit franchise “EA Sports Fight Night.” In the last five years, the games Kudo worked on have won 14 game-of-the-year awards including the coveted Spike TV Video Game Award, “Golden Monkey.”

During his 13 years in the video game industry, Kudo has designed such genre-defining features as “Fight Night’s” Total Punch Control system and the Army Men Air Attack Winch Mechanic. His games have grossed more than $1 billion worldwide and include several #1-selling products.

Kudo majored in philosophy at George Washington University in Washington, DC, where he was Harrison Hall champion in “Tecmo Bowl,” “Double Dribble,” and “Racquet Attack.” He is a former lion tamer and also an avid collector of butterflies.

Andy van Dam, Brown University

Andries van Dam (Andy) has been on Brown’s faculty since 1965, and was one of the Computer Science Department’s co-founders and its first Chairman, from 1979 to 1985. He was a Principal Investigator in the NSF Science and Technology Center for Graphics and Visualization, a research consortium including Brown, Caltech, Cornell, North Carolina (Chapel Hill), and the University of Utah, and served as its Director from 1996 to 1998. He served as Brown’s first Vice President for Research from 2002 to 2006.

Professor van Dam received his B.S. degree with Honors in Engineering Sciences from Swarthmore College in 1960 and his M.S. and Ph.D. from the University of Pennsylvania in 1963 and 1966, respectively.

Catharine van Ingen, Microsoft Research

Catharine van Ingen is a partner software architect in the Microsoft Research eScience Group. Her research is focused on lowering the data-entry barrier in the environmental sciences. One of her current projects is MODISAzure, a cloud service for computing water balance and carbon fixing across the continental United States by synthesizing TBs of satellite imagery, GBs of ground-based sensor data, and KBs of direct field measurements. Catharine has a wealth of experience in hardware and software, including work with the Alpha machine and MIPS processor teams, industrial-strength software for algorithms used to manage water flows, logging data from particle-accelerator detectors, and early Internet commerce software for purchasing Mickey Mouse watches. She holds a PhD in Civil Engineering from Caltech.

Surya Vanka, Microsoft

Surya Vanka is principal manager of user experience at Microsoft Corporation, and oversees best practices and engineering standards to create high-quality user experiences for Microsoft customers. He has worked as a designer and manager on several products during his 10 years at Microsoft. His mission is to put users, rather than technology, at the center of the development process for all Microsoft products. Prior to joining Microsoft, Surya was associate professor of design at the University of Illinois at Urbana-Champaign, and a Fellow at the prestigious Center for Advanced Study. He is the author of two books on design, has lectured on design in more than 20 countries, and is published widely. His work has appeared in numerous venues, including Form, ID, Design Council, WIRED, Interactions, BBC, National Public Radio, and Channel 15 Television. He regularly speaks on interaction design, user experience, product development, and strategic innovation; he also teaches courses and seminars on these subjects around the world.

Ricardo Vencio, Universidade de São Paulo

Professor Ricardo Vencio, a physicist by training, coordinates the LabPIB laboratory in Brazil, which conducts research on subjects such as cancer, malaria, bio-rubber, and bio-fuels. After earning his PhD in Bioinformatics at USP, he spent two years in Seattle as a post-doc at the Institute for Systems Biology and returned to Brazil as a tenured assistant professor at the University of São Paulo’s Medical School at Ribeirao Preto (FMRP-USP). His experience in bioinformatics and computational systems biology is documented by his publication record, which covers subjects ranging from gene expression (microarrays and sequencing) to Bayesian methods in bioinformatics. Last, but not least, he devotes his time to teaching and training students in Biomedical Informatics at FMRP.

Evelyne Viegas, Microsoft Research

Evelyne Viegas is responsible for the Data Intelligence initiative at Microsoft Research in Redmond, Washington. Data Intelligence is about interacting with data in rich, safe, and semantically meaningful ways, going beyond search to create the path from data to information and knowledge. At the center of this initiative is data seen as an enabler of innovation. Evelyne runs programs that emphasize the value of data-driven research by providing the academic research community with safe access to assets and enabling repeatability of experimentation.

Prior to her present role, Evelyne worked as a technical lead at Microsoft, delivering Natural Language Processing components to projects for MSN, Office, and Windows. Before Microsoft, and after completing her doctorate in France, she worked as a Principal Investigator at the Computing Research Laboratory in New Mexico on an ontology-based Machine Translation project.

Celso von Randow, Instituto Nacional de Pesquisas Espaciais

Celso von Randow is a researcher at the Center for Earth System Science of the Brazilian National Institute for Space Research. Since 1999, he has conducted research on biosphere-atmosphere interactions in tropical biomes, focusing on the measurement of surface fluxes of carbon and water vapor using the micrometeorological technique of eddy covariance. He obtained his PhD in Environmental Sciences at Wageningen University and Research Centre, the Netherlands, in 2007, with a thesis entitled “On turbulent exchange processes over Amazonian forest.”

Currently, he is collaborating with researchers from Microsoft Research, Johns Hopkins University, and the University of São Paulo on a project to test prototype environmental sensors (geosensors) in the Atlantic coastal forest and the Amazonian rain forest in Brazil, forming sensor networks with high spatial and temporal resolution, and to develop software tools for data quality control and integration.

Telle Whitney, Anita Borg Institute for Women and Technology

Dr. Telle Whitney has served as President and CEO of Anita Borg Institute for Women and Technology since 2002. Whitney has 20 years of experience in the semiconductor and telecommunications industries. She has held senior technical management positions with Malleable Technologies (now PMC-Sierra) and Actel Corporation, and is a co-founder of the Grace Hopper Celebration of Women in Computing Conference.

Dr. Whitney served as the ACM Secretary/Treasurer in 2003–2004, and is currently co-chair of the ACM Distinguished Member committee. She was a member of the National Science Foundation CEOSE and CISE advisory committees, and is a co-founder of the National Center for Women and Information Technology (NCWIT). She serves on the advisory boards of Caltech’s Information Science and Technology (IST), California Institute for Telecommunications and Information Technology (CalIT2), and Illuminate Ventures.

Telle has received numerous awards, including the ACM Distinguished Service Award, the Marie Pistilli Women in EDA Achievement Award, the Women’s Venture Fund Highest Leaf Award, and the San Jose Business Journal Top 100 Women of Influence award.

Dr. Whitney received her PhD from Caltech, and her bachelor’s degree at the University of Utah—both in Computer Science.

Telle is a runner and lives in the Santa Cruz Mountains. She makes jewelry in her not-so-spare time.

Daniel Wigdor, Microsoft

Daniel Wigdor is a user experience architect at Microsoft, and an affiliate assistant professor in both the Department of Computer Science & Engineering and the Information School at the University of Washington. His research interests are in human-computer interaction, interactive computer graphics, and emerging post-WIMP user interfaces. He received a PhD in computer science at the DGP Lab of the University of Toronto with Prof. Ravin Balakrishnan, while working with Dr. Chia Shen at Mitsubishi Electric Research Labs.

Daniel is a recipient of the Wolfond Fellowship, and of an ACM Best Paper award. Before joining Microsoft, Daniel was a fellow at the Initiative in Innovative Computing at Harvard University, and co-founder of Iota Wireless, a startup dedicated to commercializing his research in mobile-phone text entry.

John Wilbanks, Science Commons

As Vice President of Science, John Wilbanks runs the Science Commons project at Creative Commons. He came to Creative Commons from a fellowship at the World Wide Web Consortium in Semantic Web for Life Sciences. Previously, he founded Incellico, a bioinformatics company that built semantic graph networks for use in pharmaceutical research and development, and led it through its acquisition. Before that, John was the first assistant director at the Berkman Center for Internet and Society at Harvard Law School and worked in U.S. politics as a legislative aide to U.S. Representative Fortney (Pete) Stark. John holds a Bachelor of Arts in Philosophy from Tulane University and studied modern letters at the Universite de Paris IV (La Sorbonne). He serves on the board of directors for DuraSpace and AcaWiki.

Andy Wilson, Microsoft Research

Andy Wilson is a senior researcher at Microsoft Research. His work is focused on applying sensing techniques to enable new styles of human-computer interaction. Today, that means multi-touch and gesture-based interfaces, display technologies, and so-called “natural” interfaces. In 2002, he helped found the Surface Computing group at Microsoft. Before joining Microsoft, Andy obtained a BA at Cornell University, and an MS and PhD at the MIT Media Laboratory.

Curtis Wong, Microsoft Research

Curtis Wong is manager of the Microsoft Next Media Research Group, whose focus “spans the linear and interactive media spectrum from television, broadband, and gaming to emerging media forms.” The author of more than 20 patents pending in such areas as interactive television, media browsing, visualization, design, and mobile computing, Wong was previously Director of Intel Productions. At Intel, he was responsible for creating next-generation content such as the Virtual Van Gogh Museum Tour; ArtMuseum.net, one of the first web-based, broadband art-exhibition networks (which featured the National Gallery of Art’s Van Gogh’s Van Goghs exhibition); and the Whitney’s American Century exhibition. ArtMuseum.net is currently featuring the “Multimedia: From Wagner to Virtual Reality” project. Other Intel content-group projects include Wong’s production of The Poetry of Structure, which accompanied Ken Burns’s film Frank Lloyd Wright and was the first enhanced digital television program to be broadcast in the U.S.

Wong was also the General Manager and Executive Producer of Corbis Productions, a division of Corbis Corporation, the digital image company started by Bill Gates. There he created a series of award-winning CD-ROMs on subjects such as A Passion for Art, FDR, Critical Mass (about the making of the atomic bomb), and Leonardo Da Vinci. Prior to Corbis, he was an interactive-documentary producer for the Criterion Collection, producing some of the first feature films on laserdisc and multimedia CD-ROMs for the Voyager Company.

Wong’s work in interactive media has won many design and industry awards, including New York Film Festival Gold Medals, the ID Magazine Annual Design Award of Excellence, and the Communication Arts Interactive Award of Excellence. His collaboration with WGBH Interactive on the broadband-enhanced documentary Commanding Heights: The Battle for the World Economy won a 2002 Academy Award from the British Academy of Film and Television Arts and was nominated for the first Interactive TV Emmy. He is currently working with WGBH Frontline on an enhanced broadband documentary for television and the web, scheduled to air in Summer 2006.

Curtis serves on the board of trustees for the Rhode Island School of Design and the Seattle Art Museum.

Fred Wurden, Microsoft

Fred Wurden is a partner product unit manager in the Server & Cloud Division, leading the Interoperability Engineering Team. His responsibilities include EU/US regulatory compliance, interoperability principles, open source engineering, and Windows protocol development practices. He received a B.S. in Electrical Engineering in 1989 and an MBA in 2002 from the University of Washington. Prior to Microsoft, Fred was the director of technology at Applied Technical Systems, where he developed innovative search and database technology. He joined Microsoft in 1997 and has contributed to development projects in the MSN, Entertainment and Devices, and Windows divisions. He holds patents in database and systems diagnostics.

Brian Zill, Microsoft Research

Brian Zill is a senior research software design engineer in the Networking Research Group at Microsoft Research Redmond. Since joining Microsoft in 1994, he has worked on a wide variety of research projects, including Microsoft Interactive Television, IPv6, Mesh, DAIR, and HAWAII. He originated Microsoft’s IPv6 effort and co-wrote the IPv6 protocol stack that was included in Windows XP and Windows Server 2003. While working on Mesh routing, he helped invent LQSR and prepared the Mesh Connectivity Layer (MCL) codebase for Microsoft Research’s Mesh Academic Resource Kit. He is also the author of the popular “TCP Analyzer Expert” add-on for the Microsoft Network Monitor tool. He is the co-author of more than 15 academic research papers and two standards-track IETF RFCs, and is an inventor on six issued U.S. patents and more than a dozen pending patent applications. Before joining Microsoft, he was a research developer at Carnegie Mellon University, where he worked on the Nectar gigabit networking project.

Ben Zorn, Microsoft Research

Ben Zorn is a principal researcher in the Research in Software Engineering (RiSE) group in Microsoft Research. After receiving a PhD in Computer Science from the University of California at Berkeley in 1989, he served eight years on the computer science faculty at the University of Colorado in Boulder, receiving tenure and being promoted to associate professor in 1996. Ben left the University of Colorado in 1998 to become a senior researcher at Microsoft Research, where he currently works. His research interests include programming language design and implementation and performance measurement and analysis. Ben has served as an associate editor of the ACM journals Transactions on Programming Languages and Systems and Transactions on Architecture and Code Optimization. He is currently a member-at-large of the SIGPLAN Executive Committee and he recently served as general chair for the SIGPLAN Conference on Programming Language Design and Implementation (PLDI 2010).

DemoFest

Booth 1: SONGO: Search on the Go

SONGO improves the mobile-search user experience by caching common search results and web pages on mobile devices. First, a community-based cache is created by mining the most popular queries in mobile-search logs. This cache is updated daily, ensuring that the latest popular information is always available locally. Over time, the community-based cache is personalized by adding any user search queries that do not already exist in the cache. An analysis of four months of mobile-search logs shows that, on average, approximately 70 percent of the search queries a user submits can be answered by caching 2,500 links in a 2 MB cache. A prototype implementation on Windows Mobile demonstrates responses that are 18 times faster, with 30 times greater energy savings, compared with querying over a 3G link.
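
As a concrete illustration of the two-tier lookup this description implies, here is a minimal sketch in Python. The class name, the dictionary-based community tier, the LRU personal tier, and the eviction policy are illustrative assumptions, not the actual SONGO implementation:

```python
from collections import OrderedDict

class SongoStyleCache:
    """Illustrative two-tier mobile search cache (not the actual SONGO code):
    a community tier seeded daily from mined popular-query logs, plus a
    personal LRU tier that absorbs this user's own cache misses."""

    def __init__(self, personal_capacity=2500):
        self.community = {}                    # query -> cached result page
        self.personal = OrderedDict()          # LRU of this user's queries
        self.personal_capacity = personal_capacity

    def refresh_community(self, popular_results):
        """Daily update: replace the community tier with freshly mined results."""
        self.community = dict(popular_results)

    def lookup(self, query, fetch_over_3g):
        # 1. Try the shared community cache first.
        if query in self.community:
            return self.community[query]
        # 2. Then the personal cache, refreshing its LRU position.
        if query in self.personal:
            self.personal.move_to_end(query)
            return self.personal[query]
        # 3. Miss: pay the slow, energy-hungry radio cost once, then personalize.
        result = fetch_over_3g(query)
        self.personal[query] = result
        if len(self.personal) > self.personal_capacity:
            self.personal.popitem(last=False)  # evict least recently used
        return result

if __name__ == "__main__":
    cache = SongoStyleCache(personal_capacity=2)
    cache.refresh_community({"weather": "<popular cached page>"})
    fetch = lambda q: f"<result fetched over 3G for {q!r}>"
    print(cache.lookup("weather", fetch))      # community hit: no radio use
    print(cache.lookup("sushi", fetch))        # miss: fetched, then cached
    print(cache.lookup("sushi", fetch))        # personal hit on repeat query
```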

Booth 2: LittleRock: Low-Power Continuous Sensing on Mobile Phones

LittleRock is a new mobile-phone-sensing architecture that enables continuous sensing by offloading low-level sensing tasks to a low-power, embedded processor. Mobile phones are ideal platforms for developing human-centric applications that adapt to the user, because the sensors on phones can track user context. Although there are already many applications that use such sensors, state-of-the-art phone architectures do not support continuous sensing to determine full user context. With LittleRock, energy-efficient continuous sensing on mobile phones will enable true user-centric applications, resulting in a much richer user experience. We demoed continuous audio monitoring using LittleRock, which enables scenarios such as conversation post tagging and audience-aware reminders.
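
To make the offloading idea concrete, here is a minimal sketch in Python of the general pattern the description implies: a cheap filtering loop standing in for the low-power embedded processor, which wakes the application processor only for interesting sensor frames. The frame generator, energy threshold, and function names are hypothetical stand-ins; LittleRock's actual firmware is not shown here:

```python
import random

def sensor_frames(n=500):
    """Stand-in for raw audio frames from an always-on, low-power ADC.
    Mostly quiet noise, with occasional 'loud' frames mixed in."""
    for _ in range(n):
        loud = random.random() < 0.02          # ~2% interesting frames
        scale = 4.0 if loud else 1.0
        yield [random.gauss(0.0, scale) for _ in range(160)]

def frame_energy(frame):
    return sum(x * x for x in frame) / len(frame)

def low_power_loop(frames, wake_main_processor, threshold=4.0):
    """Conceptually runs on the embedded core: cheap per-frame filtering
    so the power-hungry application processor can sleep most of the time."""
    wakes = 0
    for frame in frames:
        if frame_energy(frame) > threshold:    # 'interesting' audio only
            wake_main_processor(frame)         # rare, expensive path
            wakes += 1
    return wakes

def on_wake(frame):
    # Application processor: run the full, expensive analysis here
    # (speech recognition, speaker identification, context inference, ...).
    print(f"main CPU woken: frame energy {frame_energy(frame):.1f}")

if __name__ == "__main__":
    wakes = low_power_loop(sensor_frames(), on_wake)
    print(f"application processor woken for {wakes} of 500 frames")
```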

Booth 3: Project Hawaii

Hawaii is a project to explore the combination of mobile devices with cloud services. We are assembling a set of cloud services that might be useful for creating mobile applications. These services include computation, storage, location, notification, and identification. We also plan to build an application exchange using these services. We are providing access to the above set of services, also known as the Hawaii platform, to several computer-science classes at top universities. The pilot offering of these classes occurred in spring 2010.

Booth 4: Natural User Interfaces with Physiological Sensing

Microsoft has innovated continually in developing novel interaction modalities, or natural user interfaces. Surface and Project Natal are two examples. While these modalities rely on sensors and devices situated in the environment, we believe there is a need for new modalities that enhance the mobile experience. We take advantage of sensing technologies that enable us to decode the signals generated by the body. We demoed muscle-computer interfaces, electromyography-based armbands that sense muscular activation directly to infer finger gestures on surfaces and in free space, and bio-acoustic interfaces—mechanical sensors on the body that enable us to turn the entire body into a tap-based input device.

Booth 5: Project Gustav: Immersive Digital Painting

Gustav is a realistic painting-system prototype that enables artists to become immersed in the digital painting experience. A natural interface makes Gustav ideal for hobbyists and professional artists alike. Gustav achieves a high level of interactivity and realism by leveraging the computing power of modern GPUs, taking full advantage of multitouch and tablet input technology and our novel, natural media-modeling and brush-simulation algorithms. Our prototype provides convincingly realistic models for pastel and oil media, with more to come.
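
At its core, a painting system repeatedly composites brush "dabs" into the canvas; below is a toy CPU version of one deposition step (Gustav's actual media and brush models are far richer and run on the GPU):

```python
# Minimal brush-dab deposition: blend a soft circular footprint into the canvas.
import numpy as np

def dab(canvas, x, y, radius, color, opacity):
    h, w, _ = canvas.shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.clip(1 - np.hypot(xx - x, yy - y) / radius, 0, 1) * opacity
    canvas[:] = canvas * (1 - mask[..., None]) + np.asarray(color) * mask[..., None]

canvas = np.ones((64, 64, 3))          # white canvas
dab(canvas, 32, 32, radius=10, color=(0.9, 0.4, 0.1), opacity=0.8)
```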

Booth 5: Project Salzburg

Microsoft Research’s Project Salzburg can generate a procedural model from a single audio clip and apply perturbations to produce variations from that one model. This means it can adapt sound effects to what happens in a game world at a given moment, delivering a far richer audio experience.
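
The "one clip, many variations" idea can be sketched by perturbing simple playback parameters (an illustrative analog; Salzburg's procedural models are far more sophisticated):

```python
# Produce a slightly different rendition of a clip on every trigger by
# jittering gain and playback rate. clip: 1-D numpy array of samples.
import random
import numpy as np

def vary(clip, rate_jitter=0.1, gain_jitter=0.2):
    rate = 1 + random.uniform(-rate_jitter, rate_jitter)
    gain = 1 + random.uniform(-gain_jitter, gain_jitter)
    idx = np.arange(0, len(clip) - 1, rate)          # resample at a perturbed rate
    return gain * np.interp(idx, np.arange(len(clip)), clip)
```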

Booth 6: Sora

Microsoft Research Software Radio (Sora) is a novel software-radio platform with full programmability on commodity-PC architectures. Sora combines the performance and fidelity of hardware-based software-defined-radio (SDR) platforms with the programmability and flexibility of general-purpose processor SDR platforms. Sora uses both hardware and software techniques to address the challenges of using PC architectures for high-speed SDRs. The Microsoft Research Software Radio academic kit is available for research purposes.

Booth 7: Bringing Mars to WorldWide Telescope

WorldWide Telescope now features the largest digital-image mosaic of Mars ever created: 13,000 HiRISE images, with guided tours of Mars narrated by NASA scientists Carol Stoker and Jim Garvin.

Booth 7: WorldWide Telescope + Terapixel

Terapixel provides the largest, clearest image of the sky. We showcased a wide range of Microsoft technologies: .NET parallel extensions, Dryad, Trident, and high-performance computing.

Booth 10: Design Expo

See the best student design solutions related to this year’s theme of “Service meets Social” from the Art Center College of Design, Carnegie Mellon University, Central Saint Martins College of Art and Design, New York University’s Interactive Telecommunications Program, Universidad Iberoamericana, and the University of Washington.

Booth 12: Connecting Students to Careers

Discover ways to connect students with internships and jobs through the Imagine Cup, a student technology competition challenging students to tackle current real-world problems while strengthening technical, problem-solving, and communication skills that can translate into a future career. Learn about being a mentor to an Imagine Cup team. Also, discover Students to Business (S2B), a program that connects students with Microsoft partners and customers to gain practical experience for entry-level positions and internships. S2B students benefit from unique training and certification opportunities to build the skills they need for the job market.

Booth 12: Imagine Cup Touch & Tablet Accessibility Finalists

We presented a pair of demos from world finalists for the Touch and Tablet Accessibility Track for Microsoft’s 2010 Imagine Cup.

  • OneView is a Tablet PC-based application that enables students, both blind and sighted, to create, read, and edit diagrams collaboratively. OneView provides a synchronized multimodal interface—featuring visuals, audio, and text—that enables students to use their preferred interface mode while collaborating with other students.
  • Note-Taker is a portable, custom-designed hardware/software assistive device that improves the accessibility of higher education for students who are legally blind or have reduced vision. The technology enables such users to view streaming video of a classroom presentation while simultaneously taking notes in a split-screen interface with Microsoft OneNote.

Booth 14: New Technologies for Multi-Image Fusion

As video and still cameras have become nearly ubiquitous, people are taking ever more photographs and videos. Often, the photographer’s intent is to capture more than can be seen in a single photograph, so he or she instead captures a large set of images or a video clip to cover a large scene or a moment that extends over time. These images can be combined into an output that improves on the inputs, such as an image with a large field of view (a panorama) or a composite that takes the best parts of each image (a photomontage).

Creating these results is still non-trivial for many users. One challenge is creating large-scale panoramas, for which capture and stitching times can be long; in addition, consumer-level point-and-shoot cameras and camera phones introduce artifacts such as motion blur. Another challenge is combining large image sets from photos or videos into results that use the best parts of the images to create an enhanced photograph.

We presented several new technologies that advance the state of the art in these areas and create improved user experiences. For panorama generation, we demonstrated ICE 2.0, stitching of panoramas from video, and generating sharp panoramas from blurry videos. For generating composites, we demonstrated video to snapshots and de-noising and sharpening using lucky imaging.
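
For a sense of the baseline pipeline, stitching can be sketched with OpenCV's high-level API (the file names are placeholders; ICE and the research prototypes shown at the booth use their own, more capable pipelines):

```python
# Minimal panorama stitching with OpenCV's built-in stitcher.
import cv2

images = [cv2.imread(p) for p in ["left.jpg", "middle.jpg", "right.jpg"]]
status, pano = cv2.Stitcher_create().stitch(images)
if status == 0:                      # 0 == Stitcher_OK: registration succeeded
    cv2.imwrite("panorama.jpg", pano)
```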

Booth 15: Boogie: A Platform for Symbolic Program Analysis Based on SMT Solvers

The Boogie intermediate verification language is a simple, imperative language with two important attributes. First, its expressive syntax and precise semantics enable natural and unambiguous encoding of the operational semantics of many real-world imperative programming languages, such as x86, C, C#, and Java. Second, it is equipped with a verifier that compactly encodes the operational semantics of a Boogie program into a logical formula whose satisfiability can be checked using powerful solvers, such as Z3. These attributes make the Boogie system useful for building a variety of symbolic program-analysis tools. We showcased four static-analysis tools with different goals, all built on top of Boogie (a small sketch of the underlying verification-condition idea follows the list):

  • Dafny, for proving full functional correctness of C#-like object-oriented programs
  • VCC, for proving full functional correctness of C concurrent programs
  • STORM, for finding violations of code-level protocols in concurrent programs
  • SymbolicDiff, for static differencing of program behaviors of evolving C/C# programs
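
To make the second attribute concrete, here is a toy weakest-precondition computation in Python (an illustrative analog; Boogie's encoding is far more general, and the statement format and single-letter-variable substitution are simplifying assumptions):

```python
# Toy verification-condition generation: compute the weakest precondition of a
# straight-line program by backward substitution, yielding a formula whose
# validity a solver such as Z3 could then check. Naive textual substitution
# only works for distinct single-letter variable names.

def wp(stmts, post):
    formula = post
    for kind, *args in reversed(stmts):
        if kind == "assign":                      # wp(x := e, Q) = Q[e/x]
            var, expr = args
            formula = formula.replace(var, f"({expr})")
        elif kind == "assert":                    # wp(assert C, Q) = C and Q
            formula = f"({args[0]}) and ({formula})"
    return formula

prog = [("assign", "x", "x + 1"), ("assert", "x > 0")]
print(wp(prog, "True"))   # ((x + 1) > 0) and (True)
```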

Booth 15: SPUR

Tracing just-in-time compilers (TJITs) determine frequently executed traces (hot paths and loops) in running programs and focus their optimization effort by emitting machine code specialized to those traces. Prior work has established this strategy as especially beneficial for dynamic languages such as JavaScript, in which the TJIT interfaces with the interpreter and produces machine code from the JavaScript trace. In the SPUR research project, we are developing a prototype TJIT for Microsoft Common Intermediate Language (CIL, the target language of C#, Visual Basic, F#, and many other languages). Working on CIL makes TJIT optimizations available to any program compiled to this platform.
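
As a rough illustration of the mechanism (not SPUR's implementation), the sketch below shows the classic TJIT control loop: count backward-branch targets, and once one becomes hot, record one loop iteration as a trace. The bytecode format and threshold are assumptions.

```python
# Schematic TJIT dispatch step. A real TJIT would compile a completed trace to
# specialized machine code and jump into it on later visits to the loop head.
HOT = 50

def step(pc, instr, state):
    if state["recording"] is not None:
        state["recording"].append(instr)          # accumulate the current trace
    op, target = instr                            # instr: (opcode, branch target)
    if op != "jump_back":
        return pc + 1
    state["counters"][target] = state["counters"].get(target, 0) + 1
    if state["recording"] is not None:            # loop closed: trace is complete
        state["traces"][target] = state["recording"]
        state["recording"] = None                 # compile + dispatch would go here
    elif target not in state["traces"] and state["counters"][target] >= HOT:
        state["recording"] = []                   # loop is hot: start recording
    return target                                 # take the backward branch
```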

Booth 16: Concurrent Revisions/TPL

The Revisions project introduces a novel programming model for concurrent applications. Our prototype provides C# programmers with a simple yet powerful and efficient way to share data safely among concurrent tasks, using concurrent revisions and isolation types. The Task Parallel Library (TPL) is a domain-specific embedded language for expressing concurrency through the use of first-class anonymous functions (delegates) and parametric polymorphism (generics). It has been integrated into Microsoft .NET 4.0 and ships with Microsoft Visual Studio 2010.
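
A conceptual Python analog of the revisions model (not the C# prototype's API): fork a snapshot of shared state per task, run the tasks in parallel, and merge deterministically on join.

```python
# Each task mutates only its own snapshot; a declared merge policy reconciles
# the snapshots in a fixed order, so the result is deterministic.
from concurrent.futures import ThreadPoolExecutor

def run_revisions(shared, tasks, merge):
    snapshots = [dict(shared) for _ in tasks]          # fork: isolate each task
    with ThreadPoolExecutor() as pool:
        for task, snap in zip(tasks, snapshots):
            pool.submit(task, snap)
    result = dict(shared)                              # pool exit waits for tasks
    for snap in snapshots:
        result = merge(result, snap, shared)           # join: deterministic merge
    return result

# A "cumulative" isolation type merges by adding each revision's delta.
base = {"score": 0}
tasks = [lambda s: s.update(score=s["score"] + 1),
         lambda s: s.update(score=s["score"] + 2)]
cumulative = lambda main, snap, orig: {"score": main["score"] + (snap["score"] - orig["score"])}
print(run_revisions(base, tasks, cumulative))          # {'score': 3}, every run
```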

Booth 16: FORMULA

FORMULA (Formal Modeling Using Logic Analysis) analyzes domain-specific modeling languages (DSMLs) and model transformations that are expressed in a logico-algebraic framework. For example, the “domain” of a DSML is characterized by a term algebra and a set of invariants over sets of terms. Similarly, model transformations can be expressed as axioms that map sets of terms from one domain to sets of terms of another. FORMULA uses these descriptions to calculate models that satisfy—or do not satisfy—these axioms. This basic capability can be used to determine properties of DSMLs/transformations such as domain equivalence, domain emptiness, and structure-preserving maps.
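
FORMULA's own input language is not shown here, but the flavor of "calculate a model satisfying these axioms" can be sketched with the Z3 Python bindings (assuming the z3-solver package is installed); constraint solving of this kind is what such analyses reduce to:

```python
# Ask the solver for a model satisfying some invariants over integer terms.
from z3 import Ints, Solver, sat

x, y = Ints("x y")
s = Solver()
s.add(x + y == 10, x > 0, y > x)      # "domain invariants" over terms x and y
if s.check() == sat:
    print(s.model())                  # one conforming model, e.g. [y = 9, x = 1]
```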

Booth 17: CHESS/Cuzz

CHESS is a tool for finding and reproducing heisenbugs in concurrent programs. It repeatedly runs a concurrent test, ensuring that every run takes a different interleaving. If an interleaving results in an error, CHESS can reproduce it for improved debugging. CHESS is available for both managed and native programs. Cuzz is an effective tool for finding concurrency bugs. It works on unmodified executables and is designed to maximize concurrency coverage for your existing, unmodified tests. It randomizes thread schedules in a systematic, disciplined way, using an algorithm that provides probabilistic coverage guarantees. Cuzz is scalable and can run on large programs that create many threads.
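
The sketch below shows schedule perturbation in its most naive form, random delay injection at chosen points; Cuzz's actual algorithm assigns randomized priorities, which is what yields the probabilistic guarantees. All names here are illustrative.

```python
# Naive schedule fuzzing: occasionally nudge the OS scheduler so that racy
# interleavings, such as the lost update below, surface more often.
import random
import threading
import time

def fuzz_point():
    if random.random() < 0.1:
        time.sleep(random.uniform(0, 0.005))

counter = 0
def worker():
    global counter
    fuzz_point()
    tmp = counter            # unsynchronized read-modify-write
    fuzz_point()
    counter = tmp + 1

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)               # often < 8, exposing the atomicity violation
```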

Booth 17: SPEED

Performance measurement of software is a critical component of software development. Performance has traditionally been measured by profiling, which often reveals too little (only certain inputs are profiled) or comes too late (leaving no time to address the root cause before shipping). The SPEED project attempts to address these limitations through static estimation of the symbolic computational complexity of programs. It builds on recent advances in static program analysis, which traditionally has been used for checking correctness rather than for measuring performance.
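
As a toy illustration of what a symbolic bound is (the example is ours, and SPEED derives such bounds statically rather than by running the code):

```python
# For this loop nest, a SPEED-style analysis would report the symbolic bound
# n * m in terms of the inputs, rather than a number measured on one profiled
# input, so the cost is known for every input size before shipping.
def pairs(n, m):
    count = 0
    for i in range(n):          # outer loop: n iterations
        for j in range(m):      # inner loop: m iterations each
            count += 1
    return count

assert pairs(3, 4) == 3 * 4     # the bound is exact here
```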

Booth 18: Code Contracts

Code Contracts provide a language-agnostic way to express coding assumptions in Microsoft .NET programs. The contracts take the form of preconditions, postconditions, and object invariants. Contracts improve program robustness and reliability. We showed how to use the contracts for runtime checking, static contract verification, and automatic documentation generation. Code Contracts can be used with all .NET programming languages for teaching and as a platform for research.
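
Code Contracts is a .NET facility; as a language-neutral sketch of the same idea, a minimal precondition/postcondition decorator in Python might look like this (the names are illustrative):

```python
# Run-time contract checking: assert the precondition on entry and the
# postcondition on exit, as Code Contracts does for .NET methods.
import functools

def contract(pre=lambda *a: True, post=lambda r: True):
    def wrap(f):
        @functools.wraps(f)
        def checked(*args):
            assert pre(*args), f"precondition of {f.__name__} violated"
            result = f(*args)
            assert post(result), f"postcondition of {f.__name__} violated"
            return result
        return checked
    return wrap

@contract(pre=lambda x: x >= 0, post=lambda r: r >= 0)
def decrement(x):
    return x - 1 if x > 0 else 0

decrement(5)      # fine
# decrement(-1)   # would fail the precondition at run time
```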

Booth 18: Pex/Moles

Pex and Moles are Microsoft Visual Studio 2010 Power Tools that assist in unit testing of Microsoft .NET Framework applications; a small Python illustration of the detour idea appears after the list.

  • Pex automatically generates test suites with high code coverage. Right from the Visual Studio code editor, Pex finds interesting input-output values of your methods, which you can save as a small test suite with high code coverage.
  • Moles enables you to replace any .NET method with a delegate. Moles supports unit testing by providing isolation via detours and stubs. The Moles framework is provided with Pex and can be installed by itself as a Visual Studio add-in.
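
Moles itself is specific to .NET; the detour idea it embodies can be sketched with Python's standard mocking support, replacing an environment-dependent call so the unit under test becomes deterministic:

```python
# Detour a clock-dependent call so the test result is pinned regardless of
# when it runs, which is what Moles-style isolation provides for .NET methods.
import datetime
from unittest.mock import patch

def greeting():
    return "Good morning" if datetime.datetime.now().hour < 12 else "Good afternoon"

morning = datetime.datetime(2010, 7, 12, 9, 0)
with patch("datetime.datetime") as fake:
    fake.now.return_value = morning
    assert greeting() == "Good morning"
```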

Booth 19: Microsoft Translator

Microsoft Translator is the web-scale machine-translation service run by Microsoft Research, featuring a rich API and collaboration features that let developers intelligently combine human edits with machine translation. The API enables any developer, for any project, to integrate automatic translation deeply into an application, similar to what Microsoft has shown in Internet Explorer, Office, Bing, and Instant Messenger.
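
A request might look roughly like the sketch below; the endpoint and parameter names are recalled assumptions for illustration, so consult the Translator API documentation for the authoritative interface and credentials.

```python
# Hedged sketch of calling a translation service over HTTP (URL and parameter
# names are assumptions, not guaranteed to match the shipping API).
import urllib.parse
import urllib.request

def translate(text, src, dst, app_id):
    params = urllib.parse.urlencode(
        {"appId": app_id, "text": text, "from": src, "to": dst})
    url = "http://api.microsofttranslator.com/V2/Http.svc/Translate?" + params
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8")
```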

Booth 20: Chemistry Add-in for Word

The Chemistry Add-in for Word makes it easier for students, chemists, and researchers to insert and modify chemical information, such as labels, formulas, and 2-D depictions, from within Microsoft Office Word. In addition to authoring functionality, the Chemistry Add-in for Word enables users to mark inline “chemical zones,” render high-quality, print-ready visual depictions of chemical structures, and store and expose chemical information in a semantically rich manner.

Booth 20: Zentity

Zentity, Microsoft Research’s research-output-repository platform, aims to provide the necessary building blocks, tools, and services for developers creating and maintaining an organization’s repository ecosystem.

Booth 21: Microsoft Academic Search

Microsoft Academic Search is a free academic search engine developed by Microsoft Research. It provides many innovative ways to explore scientific papers, conferences, journals, and authors, connecting millions of scholars, students, librarians, and other users.

Booth 21: Product Search Meets the Wisdom of Crowds

We showed how Bing product search, which returns object-level information only for the matching product, can be enriched by enabling navigation to related products. For each related product, our demo visualizes the type and strength of its relationship to the matching product, complementing existing “black-box” recommender systems that show recommended items without a proper explanation. Technical challenges include disambiguating different types of relationships and comparing the strength of relationships across types, which normally would require domain expertise.

Booth 22: Wandering Along the River During the Ching-ming Festival

We presented an interactive multimedia-exhibition system for showing ancient Chinese scroll paintings with a moving focus. We use one of the most famous Chinese paintings, Along the River During the Ching-ming Festival, as the exhibit. The masterpiece was scanned into a high-resolution, gigapixel image. The system not only presents the details of the painting, but also uses hundreds of voice dubbings and environmental sounds, organized into short stories that reveal the prosperous daily life of people in the Song Dynasty.

Booth 23: Microsoft Academic Search Kiosk

The Microsoft Academic Search Kiosk is built on the Microsoft Academic Search platform. It enables users to see basic information about scholars attending the Microsoft Research Faculty Summit, such as their numbers of papers and citations and their co-author networks.

Booth 24: Microsoft Biology Foundation

The Microsoft Biology Foundation (MBF) is a language-neutral bioinformatics tool kit built as an extension to the Microsoft .NET Framework. It implements a range of parsers for common bioinformatics file formats; a range of algorithms for manipulating DNA, RNA, and protein sequences; and a set of connectors to biological web services such as the National Center for Biotechnology Information’s BLAST. MBF is available under an open-source license, and executables, source code, demo applications, and documentation are freely downloadable.
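
MBF is a .NET library; as a language-neutral sketch of what one of its format parsers does, here is a minimal FASTA reader (the format handling is simplified):

```python
# Parse a FASTA file into (header, sequence) pairs. Real toolkits such as MBF
# handle many more formats, alphabets, and validation rules.
def parse_fasta(path):
    records, header, seq = [], None, []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line.startswith(">"):               # new record header
                if header is not None:
                    records.append((header, "".join(seq)))
                header, seq = line[1:], []
            elif line:
                seq.append(line)                   # sequence may span many lines
    if header is not None:
        records.append((header, "".join(seq)))
    return records
```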

Booth 24: Client + Cloud Computing for Research

Scientific applications have diverse data and computation needs that scale from desktop to supercomputers. Besides the nature of the application and the domain, the resource needs for the applications also vary over time—as the collaboration and the data collections expand, or when seasonal campaigns are undertaken. Cloud computing offers a scalable, economic, on-demand model well-matched to evolving eScience needs. We presented a suite of science applications that leverage the capabilities of the Windows Azure cloud-computing platform. We showed tools and patterns we have developed to use the cloud effectively for solving problems in genomics, environmental science, and oceanography, covering both data- and compute-intensive applications.

Booth 25: Web N-gram Services to Accelerate Internet Research

Researchers need access to large-scale, real-world data when performing data-driven research at web scale. N-gram models, a type of probabilistic model for predicting the next item (word or token) in a sequence, can be used to further research in the areas of search (query suggestion and clustering), information retrieval and extraction, machine translation, speech, spelling, learning, and others. Microsoft Research, in partnership with Bing in the Online Services Division and in collaboration with professors worldwide, is providing researchers with access to petabytes of data in the form of n-grams. The cloud-based platform delivers n-grams of up to order 5 (5-grams), covering new content types such as title and anchor text in addition to document body, and new model types such as smoothed models, through on-demand services with regular updates. Web N-gram services enable new research directions and repeatable experimentation via data benchmarks.
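
The core query such services answer is a conditional probability estimated from counts. A toy unsmoothed bigram version follows (the corpus and counts are invented; the hosted service provides vastly larger, smoothed models):

```python
# Estimate P(word | previous word) from raw bigram and unigram counts.
from collections import Counter

corpus = "the cat sat on the mat the cat ran".split()
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def p_next(word, context):
    return bigrams[(context, word)] / unigrams[context]

print(p_next("cat", "the"))   # 2/3: two of the three "the" tokens precede "cat"
```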

Booth 25: Space Situational-Awareness Azure System

Space debris has become increasingly problematic, as was made clear by the collision between the satellites Cosmos 2251 and Iridium 33 in February 2009. To track the remnants of such events, a combination of observation, data management, modeling, and operational tools requires burst-compute capability. As a global problem, this requires a global IT infrastructure. The Southampton Space Situational Awareness Azure System provides processing and visualization of tracked space objects and conjunction/impact analysis. It also supplies a decision-support system for space debris and near-Earth-object impacts.

Booth 28: Kinect for Xbox 360

Introducing Kinect for Xbox 360, a revolutionary new way to play: no controller required. See a ball? Kick it, hit it, trap it, or catch it. If you know how to move your hands, shake your hips, or speak, you and your friends can jump into the fun—the only experience needed is life experience.

DemoFest Posters

  • Client + Services + Cloud for Cancer Bioinformatics
  • Computational Biology Application Suite for High Performance Computing
  • Development and Application of Wireless Network of Geosensors for Environmental Monitoring in Tropical Forests
  • Digital Urban Informatics: Computational Innovations for Sustainable Urban Environmental and Water Resources Management
  • e-Farms: a 2-way Road from Small Farms into the Networked World
  • LEAD II: Hybrid Workflows in Atmospheric Science
  • MapReduce and Clouds for Science
  • Multi-Touch Human Robot Interaction for Disaster Response
  • Multi-Touch Interfaces for Content-Based Video Searches
  • Putting the Surface in Context
  • Statistical Methods for Large Data Sets in Search and Advancement


Design Expo

About the Design Expo

Each year, Microsoft Research sponsors a semester-long class at leading design schools. Students are asked to form interdisciplinary teams of two to four to design a user-experience prototype. From these groups, a representative team from each school presents its work to leading academic researchers and educators at the Faculty Summit Design Expo.

This annual forum is designed to encourage creative, exploratory thinking and allow students to hone their presentation skills, while gaining valuable feedback from notable design leaders from inside and outside of Microsoft. Students engage with other student design teams worldwide, cross-pollinate ideas and research, and develop lasting ties through social and group activities.

2010 Teams and Presentations

This year’s theme, “Service meets Social,” focuses on the marriage of exceptional process and ideas. On Monday, July 12, from 1:15 to 4:15 PM, the following six schools presented their best design solutions to the audience and selected design critics at the Design Expo portion of the Microsoft Research Faculty Summit.

Steps

Art Center College of Design, Los Angeles, CA
Kevin Kwok, Wayne Tang, Rachel Thai, Winnie Yuen

Steps is an online resource and community for educators. With lesson plan databases, member profiles, and a series of expandable integrative applications and devices, Steps brings networking to K-12 education and allows innovative ideas to be shared beyond the classroom. View the presentation video (Length: 31:05)

GURU

Carnegie Mellon University, Pittsburgh, PA
Aliya Baptista, Sarah Calandro, Stephanie Meier, Eric Spaulding, Cheryl Templeton

GURU is a service that helps teenagers discover and grow creative interests and learn about the vast array of creative careers with the help of industry professionals. GURU has two components—a website and a browser widget. The browser widget recommends careers and professions to teens, based on the content that they are viewing. On the website, teens can explore day-in-the-life stories and other content posted by professionals, ask questions of professionals, and share their interests with friends. GURU is based on an advertising model and is free to both teens and professionals. View the presentation video (Length: 24:47)

Data Hungry Skin | Connections in Faraday

Central Saint Martins College of Art & Design, London, UK
Amy Congdon and Natsai Chieza

The team’s theme, entitled “Social Pica,” centered on biotechnological services. They explored biotechnological alternatives to, or extensions of, existing forms of communication, offered as “social snacks.” View the presentation video (Length: 29:49)

Farmbridge

New York University, New York City, NY
Noah Waxman, Julio Terra, Tianwei Liu, Cindy Wong

Farmbridge is an online platform that supports local food communities by making it easier for neighbors to form groups and gain access to locally farmed food. Farmbridge offers management tools for community organizers as well as social software to allow neighbors to engage. View the presentation video (Length: 29:58)

Kueponi

Universidad Iberoamericana, Mexico City, Mexico
Francisco Martinez Weil, Valeria Narro, Maria Jose Saint Martin

Because government-run schools in México simply cannot scale to the country’s population growth, many teens are left without an education. Teens who are not able to attend school need a chance to improve their knowledge and skills. Kueponi is a system that creates and facilitates partnerships between universities and companies that provide teens with a chance to obtain competitive and technical skills in place of school. View the presentation video (Length: 21:35)

Open Door

University of Washington, Seattle, WA
Andrew Battenburg, Minnie Bredouw, Tim Damon, Sophie Milliotte, Jon Sandler, Tanya Test

Open Door creates sustainable local communities through the exchange of goods and services by providing a platform that fulfills service needs, as Craigslist does, while facilitating social relationships, as Facebook does. View the presentation video (Length: 25:57)