Scalable-effort Classifiers for Energy-efficient Machine Learning

  • Swagath Venkataramani,
  • Anand Raghunathan,
  • Jie Liu,
  • Shuayb Zarar

IEEE/ACM Design Automation Conference (DAC)

Published by ACM - Association for Computing Machinery


Supervised machine-learning algorithms are used to solve classification problems across the entire spectrum of computing platforms, from data centers to wearable devices, and place significant demand on their computational capabilities. In this paper, we propose scalable-effort classifiers, a new approach to optimizing the energy efficiency of supervised machine-learning classifiers. We observe that the inherent classification difficulty varies widely across inputs in real-world datasets; only a small fraction of the inputs truly require the full computational effort of the classifier, while the large majority can be classified correctly with very low effort. Yet, state-of-the-art classification algorithms expend equal effort on all inputs, irrespective of their complexity. To address this inefficiency, we propose a systematic approach to design scalable-effort classifiers that dynamically adjust their computational effort depending on the difficulty of the input data, while maintaining the same level of accuracy. Our approach utilizes a chain of classifiers with increasing levels of complexity (and accuracy). Scalable-effort execution is achieved by modulating the number of stages used for classifying a given input. Every stage in the chain is constructed using an ensemble of biased classifiers, each of which is trained to detect a single class more accurately. The degree of consensus between the biased classifiers' outputs is used to decide whether classification can be terminated at the current stage or not. Our methodology thus allows us to transform any given classification algorithm into a scalable-effort chain. We build scalable-effort versions of 8 popular recognition applications using 3 different classification algorithms. Our experiments demonstrate that scalable-effort classifiers yield a 2.79× reduction in average OPS per input, which translates to 2.3× and 1.5× improvements in energy and runtime over well-optimized hardware and software implementations, respectively.
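As a rough illustration of the control flow described in the abstract, the sketch below runs an input through a chain of increasingly complex stages, each holding one biased (per-class) classifier, and terminates early when the stage's outputs agree. The class name `ScalableEffortChain`, the `predict(x)` interface, and the single-vote consensus rule are assumptions made for exposition; the paper's exact consensus criterion and training procedure are not reproduced here.

```python
class ScalableEffortChain:
    """Minimal sketch of a scalable-effort classifier chain (assumed interface).

    `stages` is a list ordered by increasing complexity/accuracy; each stage
    is a list of per-class biased classifiers, each exposing a
    predict(x) -> 0/1 interface (1 means "this input belongs to my class").
    `final_classifier` is the original full-effort classifier, used only when
    no earlier stage reaches consensus.
    """

    def __init__(self, stages, final_classifier):
        self.stages = stages
        self.final_classifier = final_classifier

    def classify(self, x):
        for stage in self.stages:
            # Run the stage's biased classifiers and record which classes claim x.
            votes = [cls for cls, clf in enumerate(stage) if clf.predict(x) == 1]
            # Simple consensus rule (an assumption in this sketch): terminate
            # early only when exactly one biased classifier fires; ambiguous
            # inputs continue to the next, more expensive stage.
            if len(votes) == 1:
                return votes[0]
        # Hard input: fall back to the full-effort classifier at the end of the chain.
        return self.final_classifier.predict(x)
```

Because most real-world inputs are easy, in this scheme the expensive final classifier is invoked only for the small fraction of ambiguous inputs, which is the source of the energy and runtime savings the paper reports.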