Neural Acceleration for General-Purpose Approximate Programs

  • Hadi Esmaeilzadeh | MSR-XCG Intern; University of Washington PhD Candidate

We are exploring a learning-based approach to accelerating approximate programs. We describe a program transformation, called the Parrot transformation, that selects and trains a neural network to mimic a region of imperative code. After this learning phase, the compiler replaces the original code with an invocation of a low-power accelerator called a neural processing unit (NPU). Because many of the accelerated code regions are small, the NPU is tightly coupled to the processor's speculative pipeline. Since neural networks produce inherently approximate results, we define a programming model that lets programmers identify approximable code regions, i.e., code that can produce imprecise but acceptable results. Mimicking approximable code regions with an NPU is both faster and more energy efficient than executing the original code: for a set of diverse applications, NPU acceleration with dedicated digital hardware provides significant speedups and energy savings. We also study how an efficient software implementation of neural networks, combined with a few ISA extensions, can yield application speedups from the Parrot transformation even without dedicated neural-network hardware. We further explore the feasibility of using analog NPUs to accelerate imperative approximate code. I will present results from these studies.
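To make the flow concrete, here is a minimal C sketch of the two halves the abstract describes: an annotated approximable region, and the NPU-style invocation the compiler substitutes for it. The APPROXIMABLE macro, the npu_enq/npu_deq queue interface, and the 9-8-1 network topology are illustrative assumptions, not the actual toolchain, ISA extensions, or trained network from this work.

#include <math.h>

/* Hypothetical annotation marking a pure, error-tolerant function as a
 * candidate for the Parrot transformation; the real toolchain has its
 * own annotation syntax, so this macro is only a stand-in. */
#define APPROXIMABLE

/* Original imperative hot region: a Sobel edge-detection kernel is the
 * kind of small, frequently executed, approximable code the transformation
 * targets. Its inputs and outputs are plain scalars, so the compiler can
 * log input/output pairs to train the mimicking network. */
APPROXIMABLE float sobel(const float p[3][3]) {
    float gx = (p[0][2] + 2.0f * p[1][2] + p[2][2])
             - (p[0][0] + 2.0f * p[1][0] + p[2][0]);
    float gy = (p[2][0] + 2.0f * p[2][1] + p[2][2])
             - (p[0][0] + 2.0f * p[0][1] + p[0][2]);
    float g = sqrtf(gx * gx + gy * gy);
    return g > 1.0f ? 1.0f : g;
}

/* Software stand-in for the NPU: a tiny 9-8-1 multilayer perceptron
 * behind enqueue/dequeue FIFOs, mirroring the queue-style coupling to
 * the pipeline described in the talk. The weights are placeholders; in
 * the real flow they come from the compile-time training phase. */
#define N_IN  9
#define N_HID 8
static float w1[N_HID][N_IN + 1]; /* hidden weights + bias, trained offline */
static float w2[N_HID + 1];       /* output weights + bias, trained offline */
static float in_fifo[N_IN];
static int   in_count;

void npu_enq(float v) { in_fifo[in_count++] = v; }  /* push one input */

float npu_deq(void) {              /* evaluate the MLP, pop the output */
    float out = w2[N_HID];
    for (int h = 0; h < N_HID; h++) {
        float a = w1[h][N_IN];
        for (int i = 0; i < N_IN; i++) a += w1[h][i] * in_fifo[i];
        out += w2[h] / (1.0f + expf(-a));  /* sigmoid hidden unit */
    }
    in_count = 0;
    return out;
}

/* What the compiler emits in place of calls to sobel(): stream the nine
 * inputs to the NPU and read back the single approximate result. */
float sobel_npu(const float p[3][3]) {
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            npu_enq(p[i][j]);
    return npu_deq();
}

In the dedicated-hardware configuration, the enqueue/dequeue calls would map onto new ISA instructions feeding the tightly coupled accelerator; the software MLP above corresponds to the hardware-free case the abstract also studies, where a few dozen multiply-adds stand in for a larger region of original code.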

Speaker Details

Hadi Esmaeilzadeh is a PhD student in the Department of Computer Science at the University of Washington, working with Doug Burger and Luis Ceze on approximate computing and neural accelerators.


Series: Microsoft Research Talks