Motion Capture with Actions (MoCapAct)
The MoCapAct dataset contains training data and models for humanoid locomotion research. It consists of expert policies trained to track individual clip snippets, along with HDF5 files of noisy rollouts collected from each expert, including proprioceptive observations and actions. We demonstrate the utility of MoCapAct by using it to train a single hierarchical policy capable of tracking the entire MoCap dataset within dm_control, and we show that the learned low-level component can be re-used to efficiently learn other high-level tasks. Finally, we use MoCapAct to train an autoregressive GPT model and show that it can perform natural motion completion given a motion prompt.
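To illustrate the kind of rollout data described above, here is a minimal sketch of writing and reading an HDF5 file with h5py. The group and dataset names (`rollout_0`, `observations`, `actions`) and the array dimensions are illustrative assumptions, not MoCapAct's actual schema.

```python
import h5py
import numpy as np

# Write a toy rollout: parallel arrays of proprioceptive observations
# and expert actions, one row per timestep. Names and sizes are
# illustrative, not MoCapAct's actual layout.
with h5py.File("rollout_example.hdf5", "w") as f:
    grp = f.create_group("rollout_0")
    grp.create_dataset("observations",
                       data=np.zeros((100, 137), dtype=np.float32))
    grp.create_dataset("actions",
                       data=np.zeros((100, 56), dtype=np.float32))

# Read the rollout back and check that the arrays line up per timestep.
with h5py.File("rollout_example.hdf5", "r") as f:
    obs = f["rollout_0/observations"][:]
    act = f["rollout_0/actions"][:]

print(obs.shape, act.shape)
```

A training pipeline would iterate over such (observation, action) pairs, e.g. for behavioral cloning of the expert policies.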