AsyMo: Scalable and Efficient Deep-Learning Inference on Asymmetric Mobile CPUs

  • Manni Wang,
  • Shaohua Ding,
  • Ting Cao,
  • Yunxin Liu,
  • Fengyuan Xu

The 27th Annual International Conference on Mobile Computing and Networking (MobiCom'21)

Organized by ACM

On-device deep learning (DL) inference has attracted vast interest. Mobile CPUs are the most common hardware for on-device inference, and many inference frameworks have been developed for them. Yet, due to hardware complexity, DL inference on mobile CPUs suffers from two common issues: poor performance scalability on asymmetric multiprocessors, and energy inefficiency.

We identify the root causes as improper task partitioning and unbalanced task distribution for the poor scalability, and unawareness of model behaviour for the energy inefficiency. Based on these findings, we propose a novel technique called AsyMo, built into the thread pool of DL frameworks, to solve the two issues. The key design principle is to leverage the execution determinism of DL inference and build an optimal execution plan offline by jointly considering model structure and hardware characteristics. For performance scalability, AsyMo implements cost-model-directed partitioning and asymmetry-aware task scheduling to properly divide and fairly schedule tasks on asymmetric CPUs. For energy saving, AsyMo determines the least-energy frequency based on the data-reuse rate of a model.

AsyMo is evaluated on different models and DL frameworks, and all gain substantial improvement. For example, AsyMo shows up to 46% performance and 37% energy-efficiency improvement for convolution-dominant models, and up to 97% performance and 1.22× energy-efficiency improvement for fully-connected-dominant models, compared to an optimized TensorFlow on off-the-shelf mobile CPUs.
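
To make the two offline decisions concrete, below is a minimal C++ sketch of both ideas: splitting a matrix-multiplication task across CPU clusters in proportion to their capacity, and picking the frequency that minimizes energy per inference, E(f) = P(f) · T(f). This is an illustration under assumed numbers, not AsyMo's implementation: the `Cluster` throughputs, the frequency/power/latency table, and all function names are hypothetical stand-ins for the cost model the paper builds offline.

```cpp
#include <cstddef>
#include <limits>
#include <vector>

// --- Asymmetry-aware partitioning (sketch) ---------------------------------
// Hypothetical per-cluster description; these constants stand in for the
// output of an offline cost model like the one AsyMo builds.
struct Cluster {
    int num_cores;
    double throughput;  // relative work one core finishes per unit time
};

// Split `total_rows` of a matrix-multiplication task across clusters in
// proportion to num_cores * throughput, so big and little cores finish at
// roughly the same time instead of the little cores straggling.
std::vector<std::size_t> partition_rows(std::size_t total_rows,
                                        const std::vector<Cluster>& clusters) {
    double total_capacity = 0.0;
    for (const auto& c : clusters)
        total_capacity += c.num_cores * c.throughput;

    std::vector<std::size_t> split;
    std::size_t assigned = 0;
    for (std::size_t i = 0; i + 1 < clusters.size(); ++i) {
        double share = clusters[i].num_cores * clusters[i].throughput / total_capacity;
        auto rows = static_cast<std::size_t>(share * total_rows);
        split.push_back(rows);
        assigned += rows;
    }
    split.push_back(total_rows - assigned);  // remainder to the last cluster
    return split;
}

// --- Least-energy frequency selection (sketch) ------------------------------
struct FreqPoint {
    double freq_ghz;   // candidate CPU frequency
    double power_w;    // measured average power at this frequency
    double latency_s;  // predicted inference latency at this frequency
};

// Energy per inference is E(f) = P(f) * T(f). For models with a low
// data-reuse rate (memory-bound), latency barely improves at higher
// frequencies, so a lower frequency often minimizes energy.
double least_energy_freq(const std::vector<FreqPoint>& points) {
    double best_f = 0.0;
    double best_e = std::numeric_limits<double>::infinity();
    for (const auto& p : points) {
        const double e = p.power_w * p.latency_s;
        if (e < best_e) { best_e = e; best_f = p.freq_ghz; }
    }
    return best_f;
}

int main() {
    // Example: 4 big cores (~2.5x faster per core) plus 4 little cores.
    const std::vector<Cluster> clusters = {{4, 2.5}, {4, 1.0}};
    const auto split = partition_rows(1024, clusters);  // ~{731, 293}

    // Example frequency/power/latency table (made-up numbers).
    const std::vector<FreqPoint> table = {
        {0.8, 0.9, 0.050}, {1.4, 1.8, 0.034}, {2.0, 3.6, 0.030}};
    const double f = least_energy_freq(table);  // picks 0.8 GHz here
    (void)split; (void)f;
    return 0;
}
```

Computing both choices offline is what the abstract means by leveraging execution determinism: because a DL model's task sizes and data-reuse rates are fixed before deployment, the partition and the frequency can be decided once per model and hardware pair rather than at run time.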