Edinburgh’s Submission to the WMT 2022 Efficiency Task

  • Nikolay Bogoychev ,
  • J. van der Linde ,
  • Graeme Nail ,
  • Kenneth Heafield ,
  • Biao Zhang ,
  • Sidharth Kashyap

Proceedings of the Seventh Conference on Machine Translation (WMT)

Published by the Association for Computational Linguistics

We participated in all tracks of the WMT 2022 efficient machine translation task: single-core CPU, multi-core CPU, and GPU hardware, under both throughput and latency conditions. Our submissions explore a number of efficiency strategies: knowledge distillation, a simpler simple recurrent unit (SSRU) decoder with one or two layers, shortlisting, a deep encoder with a shallow decoder, pruning, and a bidirectional decoder. For the CPU tracks, we used quantised 8-bit models. For the GPU track, we used FP16 quantisation. We explored various pruning strategies and combinations of one or more of the above methods.
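The abstract does not spell out the quantisation scheme, and the systems themselves rely on optimised inference kernels rather than code like the following. Purely as an illustration of the general idea behind 8-bit weight quantisation, here is a minimal NumPy sketch of symmetric per-tensor quantisation; the function names and the per-tensor scaling choice are assumptions for this example, not details taken from the paper.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor quantisation: map FP32 weights into [-127, 127]."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an FP32 approximation of the original weights."""
    return q.astype(np.float32) * scale

# Example: the reconstruction error stays small relative to the weight range,
# while the stored weights shrink from 4 bytes to 1 byte each.
w = np.random.randn(256, 256).astype(np.float32)
q, s = quantize_int8(w)
print("max abs error:", np.abs(w - dequantize_int8(q, s)).max())
```

In practice such quantisation is paired with integer matrix-multiply kernels so that inference runs directly on the 8-bit weights, which is where the CPU speedup comes from; the FP16 setting on GPU trades precision for throughput in an analogous way.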