Collaborative Robot Learning From Demonstration Using Hierarchical Task Networks

The objective of this work is to scale up robot learning from demonstration (LfD) to larger and more complex tasks than currently possible, so that robots can move out of highly constrained, repetitive applications, such as manufacturing, and into human-oriented environments with greater autonomy. In LfD, a human expert/teacher performs a small number of demonstrations (sometimes only one) of a new task behavior, and machine learning software on the robot infers the generalizations needed to remember and perform the behavior autonomously in future situations. The robot will also employ active learning, enabling it to request additional specific information from the human teacher and to make suggestions as needed, making the learning process more efficient. The end result of this project will be learning algorithms and an interaction interface that enable a human subject-matter expert to effectively train a robot to perform new tasks with full autonomy.
Supported in part by the Office of Naval Research under Grant N00014-13-1-0735.
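To make the LfD and hierarchical-task-network idea above concrete, the short Python sketch below shows one possible, purely illustrative representation of a learned task hierarchy together with a clarification question in the style of active learning. The class, function, and task names are assumptions chosen for illustration, not the project's actual software.

    # Illustrative sketch only: a minimal hierarchical task network (HTN)
    # representation plus an active-learning style clarification query.
    # Class, function, and task names are hypothetical, not project code.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Task:
        # A node in the hierarchy: a primitive action if it has no
        # subtasks, otherwise a composite task with ordered subtasks.
        name: str
        subtasks: List["Task"] = field(default_factory=list)

        def is_primitive(self) -> bool:
            return not self.subtasks

    def ask_teacher(question: str) -> bool:
        # Stand-in for the active-learning channel: the robot poses a
        # yes/no question to the human teacher during training.
        return input(question + " [y/n] ").strip().lower().startswith("y")

    # A toy hierarchy the robot might infer from a single demonstration.
    fetch = Task("fetch-tool", [Task("locate-tool"), Task("grasp-tool")])
    assemble = Task("assemble-part",
                    [fetch, Task("align-part"), Task("fasten-part")])

    # Example clarification query about ordering constraints among subtasks.
    if ask_teacher("Can 'align-part' and 'fasten-part' be done in either order?"):
        print("Recording those subtasks of 'assemble-part' as unordered.")

The only point of the sketch is the shape of the data: learned behavior is organized as a tree of tasks rather than a flat action sequence, and the teacher can be queried about properties of that structure during training.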
  • Principal Investigator: Charles Rich

  • Co-Principal Investigator: Sonia Chernova
  • Co-Principal Investigator: Dmitry Berenson
  • Co-Principal Investigator: Candace L. Sidner

  • Postdoctoral Fellow: Jim Mainprice
  • Graduate Students: Anahita Mohseni-Kabir, Artem Gritsenko, Changshou Li, Daniel Miller, Jun Tang, Victoria Wu
  • Undergraduate Student: Benjamin Hylak
List of Downloadable Publications

Simultaneous Learning of Hierarchy and Primitives (SLHAP)

SLHAP Proof of Concept, August 2016