
coax: A Modular RL Package


coax is a modular Reinforcement Learning (RL) Python package for solving OpenAI Gym environments with JAX-based function approximators (using Haiku).

RL concepts, not agents

The primary thing that sets coax apart from other packages is that it is designed to align with the core RL concepts, not with the high-level concept of an agent. This makes coax more modular and user-friendly for RL researchers and practitioners.

You’re in control

Other RL frameworks often hide structure that you (the RL practitioner) are interested in. Most notably, the neural network architecture of the function approximators is often hidden from you. In coax, the network architecture takes center stage. You are in charge of defining your own forward-pass function.

Another bit of structure that other RL frameworks hide from you is the main training loop. This makes it hard to take an algorithm from paper to code. The design of coax is agnostic of the details of your training loop. You are in charge of how and when you update your function approximators.

Learn More

Documentation >

Documentation of coax, plug-n-play Reinforcement Learning in Python with OpenAI Gym and JAX

GitHub >

coax on GitHub

Webinar >

Webinar: Beginner's Guide to Reinforcement Learning, presented by Kristian Holsheimer