Tactical Rewind: Self-Correction via Backtracking in Vision-and-Language Navigation

  • Liyiming Ke ,
  • Xiujun Li ,
  • Yonatan Bisk ,
  • Ari Holtzman ,
  • Zhe Gan ,
  • JJ (Jingjing) Liu ,
  • Yejin Choi ,
  • Siddhartha Srinivasa

CVPR 2019 | Oral

We present the FAST NAVIGATOR (Frontier Aware Search with backTracking), a general framework for action decoding that yields state-of-the-art results on the recent Room-to-Room (R2R) Vision-and-Language Navigation challenge of Anderson et al. (2018). Given a natural language instruction and photo-realistic image views of a previously unseen environment, the agent must navigate from a source to a target location as quickly as possible. While all current approaches make local action decisions or score entire trajectories with beam search, our framework seamlessly balances local and global signals when exploring the environment. Importantly, this allows the agent to act greedily but use global signals to backtrack when necessary. Applied to existing models, our FAST framework yields a 17% relative gain over the previous state of the art, an absolute 6% gain in success rate weighted by path length (SPL).
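
To make the decoding idea concrete, here is a minimal sketch of greedy action decoding with frontier-based backtracking. The interface (`expand`, `local_score`, `global_score`, `is_goal`) and the way the two signals are combined are illustrative assumptions, not the paper's actual implementation.

```python
import heapq
import itertools

# A minimal sketch of frontier-aware decoding with backtracking, in the
# spirit of the abstract. All names below (expand, local_score,
# global_score, is_goal) are hypothetical placeholders.

def fast_decode(start, expand, local_score, global_score, is_goal,
                max_steps=40):
    """Act greedily on local signals, but keep every partial path on a
    frontier ranked by a global signal so the agent can backtrack."""
    counter = itertools.count()  # tie-breaker so the heap never compares paths
    frontier = [(-global_score([start]), next(counter), [start])]
    visited = set()
    for _ in range(max_steps):
        if not frontier:
            return None  # frontier exhausted without reaching the goal
        # Pop the globally best partial path; if it is not the path we
        # just extended, this step is effectively a backtrack.
        _, _, path = heapq.heappop(frontier)
        node = path[-1]
        if is_goal(node):
            return path
        if node in visited:
            continue
        visited.add(node)
        # Greedy expansion: each candidate action joins the frontier,
        # scored by a mix of local and global signals (assumed additive).
        for nxt in expand(node):
            if nxt not in visited:
                new_path = path + [nxt]
                score = local_score(node, nxt) + global_score(new_path)
                heapq.heappush(frontier, (-score, next(counter), new_path))
    return None
```

Ranking the frontier by a global score is what makes backtracking a single heap pop: the agent resumes from whichever partial path currently looks best overall, rather than being locked into its latest greedy step.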