
coax: A Modular RL Package


coax is a modular Reinforcement Learning (RL) Python package for solving OpenAI Gym environments with JAX-based function approximators (using Haiku).

RL concepts, not agents

The primary thing that sets coax apart from other packages is that it is designed to align with the core RL concepts, not with the high-level concept of an agent. This makes coax more modular and user-friendly for RL researchers and practitioners.

You’re in control

Other RL frameworks often hide structure that you (the RL practitioner) are interested in. Most notably, the neural network architecture of the function approximators is often hidden from you. In coax, the network architecture takes center stage: you are in charge of defining your own forward-pass function.
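To illustrate the idea, here is a minimal sketch of what writing your own forward pass looks like, using plain JAX rather than coax's Haiku-based API; the layer sizes and the CartPole-like dimensions (4-dimensional state, 2 actions) are assumptions made for this example.

```python
import jax
import jax.numpy as jnp

def forward(params, S):
    # Your own forward pass: map a state to one value per action.
    h = jax.nn.relu(S @ params['w1'] + params['b1'])
    return h @ params['w2'] + params['b2']

# Hypothetical shapes: 4-dim state (CartPole-like), hidden width 8, 2 actions.
key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
params = {
    'w1': jax.random.normal(k1, (4, 8)) * 0.1, 'b1': jnp.zeros(8),
    'w2': jax.random.normal(k2, (8, 2)) * 0.1, 'b2': jnp.zeros(2),
}

q_values = forward(params, jnp.ones(4))
print(q_values.shape)  # one value per action
```

Because the forward pass is just a function you wrote, swapping in a different architecture is a local change to your own code rather than a framework configuration option.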

Another bit of structure that other RL frameworks hide from you is the main training loop. This makes it hard to take an algorithm from paper to code. The design of coax is agnostic of the details of your training loop. You are in charge of how and when you update your function approximators.
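To make that concrete, here is a minimal tabular Q-learning loop written in plain Python. The toy corridor environment and all hyperparameters are invented for this sketch, but the shape of the loop, and the fact that you decide how and when the update happens, is exactly what you keep control of in coax.

```python
import random

# Hypothetical toy environment: a 1-D corridor with states 0..4;
# the agent starts at 0 and receives reward 1 for reaching state 4.
class Corridor:
    n_states, n_actions = 5, 2  # actions: 0 = left, 1 = right

    def reset(self):
        self.s = 0
        return self.s

    def step(self, a):
        self.s = max(0, min(4, self.s + (1 if a == 1 else -1)))
        done = (self.s == 4)
        return self.s, (1.0 if done else 0.0), done

# The training loop is ordinary Python: *you* decide how and when
# the (here: tabular) value function gets updated.
random.seed(0)
env = Corridor()
Q = [[0.0] * env.n_actions for _ in range(env.n_states)]
alpha, gamma, epsilon = 0.5, 0.9, 0.5

for episode in range(200):
    s, done = env.reset(), False
    while not done:
        if random.random() < epsilon:
            a = random.randrange(env.n_actions)                    # explore
        else:
            a = max(range(env.n_actions), key=lambda i: Q[s][i])   # exploit
        s_next, r, done = env.step(a)
        # Q-learning update, applied after every single transition:
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

print(round(Q[3][1], 2))  # value of stepping right when next to the goal
```

Updating after every transition is just one choice; because the loop is yours, you could equally update every n steps, at episode boundaries, or from a replay buffer, which is what makes it straightforward to translate an algorithm from paper to code.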

Learn More

Documentation >

Documentation of coax, plug-n-play Reinforcement Learning in Python with OpenAI Gym and JAX

GitHub >

coax on GitHub

Webinar >

Beginner's Guide to Reinforcement Learning, a webinar by Kristian Holsheimer