Parallel Thinking

Date

June 5, 2009

Speaker

Guy Blelloch

Affiliation

Carnegie Mellon University

Overview

With the advent of multicore processors we are riding a third or fourth wave of parallel computing, and, perhaps unlike previous ones, this one will break. Many if not most computer science classes, however, remain case studies in how to push students into thinking sequentially. At the earliest stages, for example, we teach students that taking the dot product of two vectors or merging two lists involves starting at one end and sequentially traversing to the other.
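The dot product mentioned above illustrates the point: the same computation can be expressed as a left-to-right loop or as a divide-and-conquer reduction whose two halves are independent. The sketch below (not from the talk; an illustrative Python example) shows the parallel-friendly formulation, which a parallel runtime could evaluate in O(log n) depth.

```python
def dot(a, b, lo=0, hi=None):
    """Divide-and-conquer dot product of vectors a and b.

    The two recursive calls operate on disjoint halves, so they are
    independent and could be evaluated simultaneously by a parallel
    scheduler, unlike the left-to-right sequential loop usually taught.
    """
    if hi is None:
        hi = len(a)
    if hi - lo == 0:
        return 0
    if hi - lo == 1:
        return a[lo] * b[lo]
    mid = (lo + hi) // 2
    left = dot(a, b, lo, mid)    # independent subproblem
    right = dot(a, b, mid, hi)   # independent subproblem
    return left + right

print(dot([1, 2, 3, 4], [5, 6, 7, 8]))  # prints 70
```

The same algorithmic structure, written here sequentially, is exactly what data-parallel languages let a programmer state directly without fixing an evaluation order.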

In reality, many problems, applications, and even algorithms are inherently parallel. The languages and models we use, however, push us to describe and conceptualize them sequentially. In this talk I will describe some of the core concepts in parallel algorithms. I will point out that these ideas transcend any particular model and are thus largely robust against uncertainties in what parallel machines might look like. I will also discuss how programming languages can affect the way we think about algorithms. This work has developed from a PROBE on “Parallel Thinking” at the Carnegie Mellon Center for Computational Thinking (sponsored by Microsoft). The goal of the PROBE is to better understand what the core long-term ideas in parallelism are, so that they might be used to help guide curriculum development. Ideas from the audience will be appreciated.

Speakers

Guy Blelloch

Guy Blelloch is a Professor of Computer Science and Associate Dean of Planning at Carnegie Mellon. He received a BA from Swarthmore College in 1983 and a PhD from MIT in 1988. His research interests are in programming languages and algorithms and how they interact, with an emphasis on parallel computation. He worked on one of the early parallel machines, the Thinking Machines Connection Machine, where he developed several of the parallel primitives for the machine. At Carnegie Mellon, Blelloch designed and implemented the parallel programming language NESL, a language designed for easily expressing and analyzing parallel algorithms. His other work on parallelism has addressed issues in scheduling, algorithm design, cache efficiency, garbage collection, and synchronization primitives.
