Grounding Antonym Adjective Pairs through Interaction

M. Forbes, M. J. Chung, M. Cakmak, L. Zettlemoyer, and R. P. N. Rao, “Grounding Antonym Adjective Pairs through Interaction,” in HRI Workshop on Humans and Robots in Asymmetric Interactions, 2014.

Abstract

We aim to build robots that perceive the world at a higher level of abstraction than their raw sensors, and that can communicate this perception to humans via natural language. The focus of this work is to enable a robot to ground antonym adjective pairs in its own sensors. We present a system in which a robot is interactively trained by a user, grounding the robot’s multimodal continuous sensor data in natural language symbols. This interactive training exploits the asymmetry in human-robot interaction: not only is the training intuitive for users, who understand the natural language symbols and can demonstrate the concepts to the robot, but the robot can also use rapid data sampling and state-of-the-art feature extraction to accelerate the learning. Such training allows the robot to reason not only about the learned concepts, but also about the spaces in between them. We show a sample interaction dialog in which a user interactively grounds antonym adjective pairs with the robot, and data showing the state of the trained model.
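To make the idea of grounding an antonym pair in continuous sensor data concrete, here is a minimal sketch, not the paper’s actual model: it assumes a single made-up sensor feature (gripper force), a hypothetical "light"/"heavy" pair, a handful of user-labeled demonstrations, and a simple logistic-regression classifier whose probability output gives graded judgments for the space in between the two concepts. The feature choice, data values, and thresholds are illustrative assumptions, not details from the paper.

# Hypothetical sketch (not the paper's method): ground one antonym adjective
# pair ("light" / "heavy") in a single continuous sensor feature, here an
# assumed gripper-force reading in newtons, from user-labeled demonstrations.
import numpy as np
from sklearn.linear_model import LogisticRegression

# User demonstrations: (sensor reading, label), 0 = "light", 1 = "heavy".
# Values are invented for illustration only.
readings = np.array([[0.2], [0.4], [0.5], [2.1], [2.6], [3.0]])  # newtons
labels = np.array([0, 0, 0, 1, 1, 1])

# Fit a simple probabilistic classifier over the continuous sensor space.
model = LogisticRegression().fit(readings, labels)

def describe(force_newtons: float) -> str:
    """Map a new sensor reading onto the antonym scale, including the middle."""
    p_heavy = model.predict_proba([[force_newtons]])[0, 1]
    if p_heavy < 0.25:
        return "light"
    if p_heavy > 0.75:
        return "heavy"
    return "neither light nor heavy"  # the in-between region of the scale

for f in (0.3, 1.3, 2.8):
    print(f"{f:.1f} N -> {describe(f)}")

Because the classifier is probabilistic, readings that fall between the demonstrated extremes naturally receive intermediate scores, which is one simple way a robot could reason about the spaces in between learned concepts.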

BibTeX Entry

@inproceedings{forbes2014hriw,
  title = {Grounding Antonym Adjective Pairs through Interaction},
  author = {Forbes, Maxwell and Chung, Michael J. and Cakmak, Maya and Zettlemoyer, Luke and Rao, Rajesh P.N.},
  year = {2014},
  booktitle = {HRI Workshop on Humans and Robots in Asymmetric Interactions},
  type = {workshop}
}