Robot Object Referencing through Legible Situated Projections

T. Weng, L. Perlmutter, S. Nikolaidis, S. Srinivasa, and M. Cakmak, “Robot Object Referencing through Legible Situated Projections,” in IEEE International Conference on Robotics and Automation (ICRA), May 2019, pp. 8004–8010, doi: 10.1109/ICRA.2019.8793638.

Abstract

The ability to reference objects in the environment is a key communication skill that robots need for complex, task-oriented human-robot collaborations. In this paper we explore the use of projections, which are a powerful communication channel for robot-to-human information transfer as they allow for situated, instantaneous, and parallelized visual referencing. We focus on the question of what makes a good projection for referencing a target object. To that end, we mathematically formulate legibility of projections intended to reference an object, and propose alternative arrow-object match functions for optimally computing the placement of an arrow to indicate a target object in a cluttered scene. We implement our approach on a PR2 robot with a head-mounted projector. Through an online (48 participants) and an in-person (12 participants) user study we validate the effectiveness of our approach, identify the types of scenes where projections may fail, and characterize the differences between alternative match functions.
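The core idea of the paper — scoring candidate arrow placements by how unambiguously they indicate the target among clutter — can be sketched roughly as follows. This is a minimal, hypothetical illustration, not the authors' exact formulation: it assumes objects and candidate arrow tips are 2D points, uses a simple distance-based match function, and defines legibility as the target's match score normalized by the match scores of all objects in the scene.

```python
import math

def match(tip, obj):
    """Illustrative arrow-object match: closer arrow tip -> higher score.
    (Placeholder for the paper's match functions.)"""
    return math.exp(-math.dist(tip, obj))

def legibility(tip, target, objects):
    """Probability-like score that an observer reads `tip` as
    pointing at `target` rather than at a distractor."""
    return match(tip, target) / sum(match(tip, o) for o in objects)

def best_placement(candidates, target, objects):
    """Pick the candidate arrow tip that most legibly indicates the target."""
    return max(candidates, key=lambda tip: legibility(tip, target, objects))

# Toy cluttered scene: three objects in a row, target in the middle.
objects = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
target = objects[1]
candidates = [(x / 10.0, 0.5) for x in range(21)]  # possible arrow tips
tip = best_placement(candidates, target, objects)
```

In this toy scene the highest-legibility tip sits directly above the middle object, equidistant from both distractors; the paper's match functions and user studies address the harder question of which placements humans actually read correctly.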

BibTeX Entry

@inproceedings{weng2019referencing,
  title = {Robot Object Referencing through Legible Situated Projections},
  author = {Weng, Thomas and Perlmutter, Leah and Nikolaidis, Stefanos and Srinivasa, Siddhartha and Cakmak, Maya},
  year = {2019},
  month = may,
  booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
  pages = {8004--8010},
  doi = {10.1109/ICRA.2019.8793638},
  issn = {1050-4729},
  type = {conference},
}