Vision-and-Dialog Navigation

J. Thomason, M. Murray, M. Cakmak, and L. Zettlemoyer, “Vision-and-Dialog Navigation,” in Conference on Robot Learning (CoRL), Nov. 2019, vol. 100, pp. 394–406. [Online]. Available: https://proceedings.mlr.press/v100/thomason20a.html.

Abstract

Robots navigating in human environments should use language to ask for assistance and be able to understand human responses. To study this challenge, we introduce Cooperative Vision-and-Dialog Navigation, a dataset of over 2k embodied, human-human dialogs situated in simulated, photorealistic home environments. The Navigator asks questions of their partner, the Oracle, who has privileged access to the best next steps the Navigator should take according to a shortest path planner. To train agents that search an environment for a goal location, we define the Navigation from Dialog History task. An agent, given a target object and a dialog history between humans cooperating to find that object, must infer navigation actions towards the goal in unexplored environments. We establish an initial, multi-modal sequence-to-sequence model and demonstrate that looking farther back in the dialog history improves performance. Source code and a live interface demo can be found at https://cvdn.dev/
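
The sketch below is not the authors' code; it is a minimal Python illustration, under stated assumptions, of how a Navigation from Dialog History instance might be flattened into a single text input for a sequence-to-sequence navigation agent. The field names, special tokens (<TAR>, <NAV>, <ORA>), and the build_text_input helper are hypothetical; see https://cvdn.dev/ for the released data and code. The history_turns parameter reflects the paper's finding that conditioning on more of the dialog history improves navigation.

  # Hedged sketch: flattening an NDH instance for a seq2seq agent.
  # All names and tokens here are illustrative assumptions, not the released API.
  from dataclasses import dataclass
  from typing import List, Tuple

  @dataclass
  class NDHInstance:
      target_object: str             # object hinted to the Navigator, e.g. "plant"
      dialog: List[Tuple[str, str]]  # (navigator_question, oracle_answer) turns, oldest first
      start_pano: str                # starting viewpoint id in the simulator
      goal_region: str               # room containing the target object

  def build_text_input(inst: NDHInstance, history_turns: int) -> str:
      """Concatenate the target hint and the last `history_turns` dialog turns."""
      parts = [f"<TAR> {inst.target_object}"]
      turns = inst.dialog[-history_turns:] if history_turns > 0 else []
      for question, answer in turns:
          parts.append(f"<NAV> {question}")
          parts.append(f"<ORA> {answer}")
      return " ".join(parts)

  # Example usage with a single dialog turn of history.
  inst = NDHInstance(
      target_object="plant",
      dialog=[("Should I go upstairs?", "Yes, then turn left at the landing.")],
      start_pano="pano_0012",
      goal_region="living room",
  )
  print(build_text_input(inst, history_turns=1))
  # <TAR> plant <NAV> Should I go upstairs? <ORA> Yes, then turn left at the landing.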

BibTeX Entry

@inproceedings{thomason2019visiondialog,
  title = {Vision-and-Dialog Navigation},
  author = {Thomason, Jesse and Murray, Michael and Cakmak, Maya and Zettlemoyer, Luke},
  year = {2019},
  month = nov,
  booktitle = {Conference on Robot Learning (CoRL)},
  publisher = {{PMLR}},
  series = {Proceedings of Machine Learning Research},
  volume = {100},
  pages = {394--406},
  url = {https://proceedings.mlr.press/v100/thomason20a.html},
  type = {conference},
  editor = {Kaelbling, Leslie Pack and Kragic, Danica and Sugiura, Komei}
}