2nd Bandwidth Estimation Challenge at ACM MMSys 2024


Offline Reinforcement Learning for Bandwidth Estimation in Real Time Communications

First preliminary evaluation stage

To support the research efforts of challenge participants and improve the overall quality of submissions ahead of the final deadline, we are offering an optional small-scale preliminary evaluation to all participating teams. Each registered team may submit up to three models for online testing. Each submitted model will be evaluated on our emulation platform by conducting 24 peer-to-peer test calls over 8 different network traces, and the average objective audio/video quality scores for these models will be posted on a leaderboard on the challenge website. This initiative aims to help participants refine their designs, identify potential flaws early in the process, and ultimately enhance the robustness of their solutions.
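For illustration, here is a minimal sketch of how per-call scores could be aggregated into a single leaderboard entry. The grouping of the 24 calls into 3 calls per trace, the metric names, and the placeholder values are assumptions for illustration only; the actual objective audio/video quality metrics are produced by our emulation platform.

```python
# Minimal sketch: averaging per-call objective quality scores into one
# leaderboard entry for a model. The 3-calls-per-trace grouping and all
# score values are illustrative assumptions, not the platform's output.
from statistics import mean

NUM_TRACES = 8
CALLS_PER_TRACE = 3  # assumption: 24 calls spread over 8 traces

# Hypothetical per-call results: (audio_score, video_score) for each call.
results = {
    trace_id: [(4.1, 3.8)] * CALLS_PER_TRACE  # placeholder scores
    for trace_id in range(NUM_TRACES)
}

all_calls = [scores for calls in results.values() for scores in calls]
avg_audio = mean(audio for audio, _ in all_calls)
avg_video = mean(video for _, video in all_calls)
print(f"average audio score: {avg_audio:.2f}, average video score: {avg_video:.2f}")
```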

Instructions

Registered teams who wish to take part in the preliminary evaluation should send an email to Sami Khairy, following these guidelines:

  1. Attach a zip file containing up to three ONNX models. Each model should adhere to the challenge requirements; specifically, its inputs and outputs should be consistent with the provided TF model class or PyTorch model and the ONNX conversion therein (a sketch of the expected inference interface is given after this list). We have released a baseline stateless model (MLP) trained with offline RL on the emulated dataset as a reference, along with an example script to run this model on the offline data. The preliminary evaluation results for this baseline model are available on the leaderboard.
  2. The name of each ONNX model should be prefixed with the registered team name.
  3. Please include “TEAMNAME_PRELIMINARY_EVALUATION” in the subject line.
  4. The deadline to submit the models via email is December 6, 2023, 11:59 PM AoE. Submissions after the deadline will not be accepted and the models will not be evaluated.
  5. Each team may send exactly one email with up to three models in a zip file. Subsequent emails will be discarded, so please make sure to include the correct models that you wish to have evaluated.
  6. Participation in the preliminary evaluation stage is completely optional but is highly recommended.
  7. We aim to conclude the preliminary evaluation and announce the results by December 15, 2023.
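For reference, below is a minimal sketch of querying a submission-style ONNX model with onnxruntime. The tensor names ("obs", "hidden_states", "cell_states"), the 150-dimensional observation, and the output ordering are assumptions for illustration only; the provided TF/PyTorch model classes and their ONNX conversion define the authoritative interface.

```python
# Minimal sketch: running a submission-style ONNX model with onnxruntime.
# Tensor names, shapes, and the output ordering below are illustrative
# assumptions; consult the provided model classes for the real interface.
import numpy as np
import onnxruntime as ort


class BandwidthEstimator:
    def __init__(self, model_path: str):
        self.session = ort.InferenceSession(model_path)
        # Recurrent state carried between steps; a stateless model such as
        # the baseline MLP simply ignores these inputs.
        self.hidden_state = np.zeros((1, 1), dtype=np.float32)
        self.cell_state = np.zeros((1, 1), dtype=np.float32)

    def step(self, observation: np.ndarray) -> float:
        """Feed one observation vector and return the bandwidth estimate (bps)."""
        bw, self.hidden_state, self.cell_state = self.session.run(
            None,
            {
                "obs": observation.reshape(1, -1).astype(np.float32),
                "hidden_states": self.hidden_state,
                "cell_states": self.cell_state,
            },
        )
        return float(np.squeeze(bw))


# Example: one inference step with a hypothetical all-zeros observation.
estimator = BandwidthEstimator("TEAMNAME_model.onnx")
print(estimator.step(np.zeros(150, dtype=np.float32)))
```

Driving this loop with the released offline dataset, as the example script does, is a straightforward way to sanity-check a model before submission.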

Announcement

[December 6, 2023]: The preliminary evaluation submission deadline has been extended to December 8, 2023, 11:59 PM AoE. A bug in torch_policy.py has been fixed.

Second preliminary evaluation stage

To further support the research efforts of participating teams, we are pleased to announce a second optional small-scale preliminary evaluation. The details and operational aspects of this second stage mirror those of the first, with the following exceptions:

  1. Each registered team can submit up to two models for online testing.
  2. The deadline for model submissions via email is January 5, 2024, 11:59 PM AoE. This is a hard deadline and there will not be any extension. Submissions received after this deadline will not be accepted, and the models will not undergo evaluation.
  3. Our aim is to conclude the preliminary evaluation process and announce the results by January 15, 2024.