November 20, 2022

SUMEval 2022

Scaling Up Multilingual Evaluation Workshop @ AACL 2022

Location: Online only

Massively Multilingual Language Models (MMLMs) are trained on around 100 of the world's languages; however, most existing multilingual NLP benchmarks provide evaluation data in only a handful of them. The languages present in evaluation benchmarks are usually high-resource and largely belong to the Indo-European language family. This makes current multilingual evaluation unreliable and fails to provide a full picture of the performance of MMLMs across the linguistic landscape. Although efforts are being made to create benchmarks that cover a larger variety of tasks, languages, and language families, it is unlikely that we will be able to build benchmarks covering all languages and tasks. As a result, there is growing interest in alternative strategies for evaluating MMLMs, including performance prediction and machine translation of test data. We believe that this is an important yet relatively unexplored area of research that has the potential to make language technologies accessible to all. The SUMEval workshop will accept submissions on alternative techniques for scaling up multilingual evaluation. In addition, the workshop will include a shared task on performance prediction.

Timeline

Dates subject to change and will be updated here as needed.

June 28, 2022: Challenge data released
August 1, 2022: Challenge evaluation begins
August 10, 2022: Challenge ends
September 23, 2022: Workshop paper submission deadline (extended from August 25, 2022)
October 7, 2022: Notification of Acceptance
October 24, 2022: Camera-ready papers due
November 20, 2022: SUMEval 2022 Workshop

Topics of interest include but are not restricted to:

  • Studies on scaling up multilingual evaluation
  • Human evaluation of multilingual models
  • Automated evaluation metrics for multilingual evaluation
  • Studies on fairness and other aspects of evaluation
  • Datasets, benchmarks, or libraries for evaluating multilingual models
  • Probing and analysis of multilingual models

Organizers: