Automatic Curriculum Learning for Deep RL: A Short Survey

  • Rémy Portelas,
  • Cédric Colas,
  • Lilian Weng,
  • Pierre-Yves Oudeyer

International Joint Conference on Artificial Intelligence (IJCAI)

Published by International Joint Conferences on Artificial Intelligence Organization


Automatic Curriculum Learning (ACL) has become a cornerstone of recent successes in Deep Reinforcement Learning (DRL). These methods shape the learning trajectories of agents by challenging them with tasks adapted to their capacities. In recent years, they have been used to improve sample efficiency and asymptotic performance, to organize exploration, to encourage generalization or to solve sparse reward problems, among others. To do so, ACL mechanisms can act on many aspects of learning problems. They can optimize domain randomization for Sim2Real transfer, organize task presentations in multi-task robotic settings, order sequences of opponents in multi-agent scenarios, etc. The ambition of this work is twofold: 1) to present a compact and accessible introduction to the Automatic Curriculum Learning literature and 2) to draw a bigger picture of the current state of the art in ACL to encourage the cross-breeding of existing concepts and the emergence of new ideas.
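
To make the idea of "tasks adapted to the agent's capacities" concrete, the sketch below shows a toy ACL teacher that samples tasks in proportion to their recent absolute learning progress. It is a minimal illustration of one common heuristic from this literature, not the method of any specific surveyed paper; the class name LearningProgressTeacher, the window and eps parameters, and the agent.train_one_episode call in the usage comment are all illustrative assumptions.

    import numpy as np

    class LearningProgressTeacher:
        """Toy ACL teacher: samples tasks proportionally to absolute learning progress.

        Illustrative sketch only; not the algorithm of any particular surveyed paper.
        """

        def __init__(self, n_tasks, window=20, eps=0.1, seed=0):
            self.n_tasks = n_tasks
            self.window = window                 # number of recent returns kept per task
            self.eps = eps                       # probability of sampling a task uniformly
            self.returns = [[] for _ in range(n_tasks)]
            self.rng = np.random.default_rng(seed)

        def _learning_progress(self, task):
            r = self.returns[task]
            if len(r) < 2 * self.window:
                return 0.0
            recent = np.mean(r[-self.window:])
            older = np.mean(r[-2 * self.window:-self.window])
            # Absolute progress rewards both improvement and forgetting,
            # so tasks being forgotten get revisited.
            return abs(recent - older)

        def sample_task(self):
            lp = np.array([self._learning_progress(t) for t in range(self.n_tasks)])
            if lp.sum() == 0 or self.rng.random() < self.eps:
                return int(self.rng.integers(self.n_tasks))   # uniform exploration
            probs = lp / lp.sum()
            return int(self.rng.choice(self.n_tasks, p=probs))

        def update(self, task, episode_return):
            self.returns[task].append(episode_return)

    # Hypothetical training loop around a multi-task environment and a DRL agent:
    # teacher = LearningProgressTeacher(n_tasks=10)
    # for episode in range(100_000):
    #     task = teacher.sample_task()
    #     episode_return = agent.train_one_episode(env, task)  # placeholder API
    #     teacher.update(task, episode_return)

The same teacher-student interface can be pointed at other curriculum dimensions mentioned in the abstract (domain randomization parameters, opponents in multi-agent play) by replacing the discrete task index with the relevant task descriptor.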