Communication-Avoiding Algorithms and Fast Matrix Multiplication
Grey Ballard | Sandia National Labs
As our computing capabilities grow, the size and complexity of the numerical simulations and data analyses that today's computational scientists conduct continue to increase. The gap between the peak capabilities of the hardware and the achieved performance of many of these computations is caused in large part by the high cost of communication, or movement of data, between processors and throughout the memory hierarchy of a single processor. Of course, much of this communication cannot be avoided when we use parallelism to solve our problems; however, many standard algorithms in linear algebra move more data than necessary. We have proved lower bounds on the communication required by a wide class of algorithms and can determine whether standard approaches attain these bounds. In several cases where gaps exist between algorithms and lower bounds, we have developed new algorithms that communicate asymptotically less than previous ones and outperform them on a variety of platforms. These include algorithms for solving linear systems, least squares problems, and eigenvalue problems, as well as parallelizations of Strassen's matrix multiplication algorithm. In particular, not only does Strassen's algorithm require less computation than the classical matrix multiplication algorithm, it also requires less communication. In fact, fast matrix multiplication algorithms whose computational complexity has a smaller exponent than Strassen's require even less communication. I'll talk about recent developments in implementations of fast algorithms that exploit this fact, and about where fast matrix multiplication, often thought of as a theoretical pursuit, can be practically useful.
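To make the Strassen idea concrete, here is a minimal sketch of the algorithm in Python with NumPy. It is illustrative only, not the tuned implementations the talk covers: it assumes square inputs whose dimension is a power of two (real codes pad or peel odd dimensions), and the cutoff parameter is a hypothetical tuning knob for switching to classical multiplication on small blocks.

import numpy as np

def strassen(A, B, cutoff=128):
    """Multiply square matrices A and B via Strassen's algorithm.

    Sketch only: assumes n-by-n inputs with n a power of two, and
    falls back to classical multiplication below the cutoff.
    """
    n = A.shape[0]
    if n <= cutoff:
        return A @ B

    k = n // 2
    A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]

    # Seven recursive products instead of the classical eight.
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)

    # Reassemble the quadrants of C = A * B.
    C = np.empty_like(A)
    C[:k, :k] = M1 + M4 - M5 + M7
    C[:k, k:] = M3 + M5
    C[k:, :k] = M2 + M4
    C[k:, k:] = M1 - M2 + M3 + M6
    return C

Each level of recursion replaces 8 half-sized multiplications with 7, so the flop count is O(n^(log2 7)), roughly O(n^2.81), rather than the classical O(n^3); as the abstract notes, the recursive structure also lowers the amount of data the algorithm must move.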
Speaker Details
Grey Ballard is currently a Truman Fellow at Sandia National Labs in Livermore, CA. He received his PhD in 2013 from the Computer Science Division (EECS Department) at the University of California, Berkeley, where he worked in the BeBOP group and the Parallel Computing Laboratory under advisor James Demmel. Before coming to Berkeley, he received his BS in math and computer science from Wake Forest University in 2006 and his MA in math from Wake Forest in 2008.
His research interests include numerical linear algebra, high performance computing, and computational science, particularly developing algorithmic ideas that translate to improved implementations and more efficient software. His work has been recognized with the SIAM Linear Algebra Prize and two conference best paper awards, at SPAA and IPDPS, and he received the C.V. Ramamoorthy Distinguished Research Award at the University of California, Berkeley, for his doctoral work.