COMPASS
This repository contains the PyTorch implementation of the COMPASS model proposed in our paper: COMPASS: Contrastive Multimodal Pretraining for Autonomous Systems. COMPASS aims to build general-purpose representations for autonomous systems from multimodal observations. Given multimodal signals from spatial and temporal modalities, M_s and M_m respectively, COMPASS learns two factorized latent spaces, i.e., a motion pattern space O_m and a current state space O_s, using multimodal correspondence as the self-supervisory signal.
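The paragraph above does not spell out how multimodal correspondence is turned into a training signal. As an illustration only, a common way to realize such a self-supervisory signal is an InfoNCE-style contrastive loss between paired embeddings of the two modalities; the sketch below is a minimal stand-in, and every class and function name in it (`ContrastiveHead`, `info_nce`, the feature dimensions) is hypothetical, not taken from the COMPASS codebase:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContrastiveHead(nn.Module):
    """Projects one modality's encoding into a shared latent space
    (hypothetical stand-in for a factorized space such as O_m or O_s)."""
    def __init__(self, in_dim: int, latent_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, latent_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # L2-normalize so similarity reduces to a dot product.
        return F.normalize(self.proj(x), dim=-1)

def info_nce(z_a: torch.Tensor, z_b: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE loss: the matched pair (z_a[i], z_b[i]) is the positive;
    all other in-batch pairings act as negatives."""
    logits = z_a @ z_b.t() / temperature
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)

# Toy usage with random features standing in for encoder outputs of a
# spatial modality (e.g. RGB) and a temporal modality (e.g. optical flow).
torch.manual_seed(0)
spatial_feat = torch.randn(8, 32)
temporal_feat = torch.randn(8, 32)
head_s = ContrastiveHead(32, 16)
head_t = ContrastiveHead(32, 16)
loss = info_nce(head_s(spatial_feat), head_t(temporal_feat))
```

Minimizing this loss pulls embeddings of time-aligned observations from the two modalities together while pushing mismatched pairs apart, which is one standard way to exploit multimodal correspondence without labels.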