This project aims to develop a web-based, platform-independent tool for conducting large-scale robotics user studies online. Building on the open-source PR2 Remote Lab, RobotsFor.Me (http://RobotsFor.me) lets users remotely control robots through a standard web browser, enabling researchers to conduct user studies with participants across the globe. Still under development, the current version of RobotsFor.Me includes a web interface for robot control, as well as a Robot Management System for managing users and different study conditions. The complete system is platform independent and has so far been tested with the Rovio, youBot, and PR2 platforms in both physical and simulated environments. RobotsFor.Me is the first robot remote lab designed for use by the general public.
- Russell Toris and Sonia Chernova. RobotsFor.Me and Robots For You. Interactive Machine Learning Workshop, Intelligent User Interfaces Conference, 2013.
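One job of the Robot Management System mentioned above is assigning incoming participants to study conditions. As a minimal sketch of that idea (the class and method names below are illustrative, not RobotsFor.Me's actual API), balanced assignment keeps condition group sizes even as users enroll:

```python
class RobotManagementSystem:
    """Minimal sketch: assign each new participant to the study
    condition that currently has the fewest participants."""

    def __init__(self, conditions):
        self.assignments = {c: [] for c in conditions}

    def enroll(self, user_id):
        # Pick the condition with the smallest group so far,
        # keeping conditions balanced as participants arrive.
        condition = min(self.assignments, key=lambda c: len(self.assignments[c]))
        self.assignments[condition].append(user_id)
        return condition
```

A real system would also handle scheduling robot access and persisting assignments, but the balancing logic is the core of between-subjects condition management.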
Robot Learning from Demonstration
Robot learning from demonstration (LfD) research focuses on algorithms that enable a robot to learn new task policies from demonstrations performed by a human teacher. See the Survey of Robot Learning from Demonstration for more information on this research area. Our current work includes the first comparative evaluation of leading algorithms in this area and the development of new multi-strategy learning algorithms:
- Halit Bener Suay, Russell Toris and Sonia Chernova. A Practical Comparison of Three Robot Learning from Demonstration Algorithms. International Journal of Social Robotics, special issue on Learning from Demonstration, Volume 4, Issue 4, pages 319-330, 2012.
- Osentoski S., Pitzer B., Crick C., Jay G., Dong S., Grollman D., Suay H.B., and Jenkins O.C. Remote Robotic Laboratories for Learning from Demonstration. International Journal of Social Robotics, special issue on Learning from Demonstration, Volume 4, Issue 4, pages 449-461, 2012.
- Halit Bener Suay, Joseph Beck, Sonia Chernova. Using Causal Models for Learning from Demonstration. AAAI Fall Symposium on Robots Learning Interactively from Human Teachers, 2012.
- Halit Bener Suay and Sonia Chernova. A Comparison of Two Algorithms for Robot Learning from Demonstration. In the IEEE International Conference on Systems, Man, and Cybernetics, 2011.
- Halit Bener Suay and Sonia Chernova. Effect of Human Guidance and State Space Size on Interactive Reinforcement Learning. In the IEEE International Symposium on Robot and Human Interactive Communication (Ro-Man), 2011.
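As a toy illustration of the LfD setting, and not of any specific algorithm in the papers above, one of the simplest ways to derive a policy from demonstrations is nearest-neighbor lookup over recorded state-action pairs (here states are assumed to be numeric coordinate tuples):

```python
import math

def learn_policy(demonstrations):
    """demonstrations: list of (state, action) pairs recorded from a
    human teacher. Returns a policy that generalizes to unseen states
    by 1-nearest-neighbor lookup over the demonstrated states."""
    def policy(state):
        # Find the demonstrated state closest to the query state
        # and reuse the teacher's action for it.
        _, action = min(demonstrations, key=lambda sa: math.dist(sa[0], state))
        return action
    return policy
```

Practical LfD algorithms replace this lookup with classifiers, regressors, or probabilistic models, and add mechanisms such as confidence estimation to decide when to request more demonstrations.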
Cloud Primer: Leveraging Common Sense Computing for Early Childhood Literacy
Providing young children with opportunities to develop early literacy skills is important to their success in school, in learning to read, and in life. This project focuses on the creation of a new interactive reading primer technology on tablet computers that will foster early literacy skills and shared parent-child reading through a targeted discussion-topic suggestion system aimed at the adult participant. The Cloud Primer will crowdsource the interactions and discussions of parent-child dyads across a community of readers. It will then leverage this information, in combination with a common sense knowledge base, to develop computational models of the interactions. These models will then be used to provide context-sensitive discussion topic suggestions to parents during shared reading with young children.
- Adrian Boteanu and Sonia Chernova. Modeling Discussion Topics in Interactions with a Tablet Reading Primer. International Conference on Intelligent User Interfaces, 2013 (to appear).
- Adrian Boteanu and Sonia Chernova. Modeling Topics in User Dialog for Interactive Tablet Media. Workshop on Human Computation in Digital Entertainment at the Eighth Annual AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 2012.
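To make the suggestion idea concrete, here is a toy sketch of topic suggestion from common-sense relations. The hand-written triples below merely stand in for a real common-sense knowledge base (such as ConceptNet); the function name and data format are illustrative, not the Cloud Primer's actual implementation:

```python
# Toy common-sense relations; a real system would query a large
# knowledge base rather than a hard-coded list.
RELATIONS = [
    ("dog", "IsA", "pet"),
    ("dog", "CapableOf", "bark"),
    ("ball", "UsedFor", "play"),
]

def suggest_topics(page_words, relations=RELATIONS):
    """For each word appearing on the current story page, collect
    related concepts as candidate discussion prompts for the parent."""
    suggestions = {}
    for word in page_words:
        suggestions[word] = [obj for subj, _, obj in relations if subj == word]
    return suggestions
```

The research challenge lies in ranking such candidates by context, for example using models learned from the crowdsourced dyad interactions, so that suggestions fit the current point in the story.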
Human-Agent Transfer
Human-Agent Transfer (HAT) is a policy learning technique that combines transfer learning, learning from demonstration, and reinforcement learning to achieve rapid learning and high performance in complex domains. Using this technique, knowledge can be transferred effectively from a human to an agent, even when the two have different perceptions of state.
- Matthew Taylor, Halit Bener Suay and Sonia Chernova. Integrating Reinforcement Learning with Human Demonstrations of Varying Ability. In the International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), Taipei, Taiwan, 2011.
- Matthew E. Taylor, Halit Bener Suay and Sonia Chernova. Using Human Demonstrations to Improve Reinforcement Learning. In the AAAI 2011 Spring Symposium: Help Me Help You: Bridging the Gaps in Human-Agent Collaboration, Palo Alto, CA, 2011.
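One way HAT-style work injects demonstrations into reinforcement learning is a value bonus: actions the demonstration-derived policy prefers start with inflated value estimates, biasing early exploration toward the teacher's behavior while Q-learning refines the policy. The sketch below illustrates that idea in a toy chain world; the environment, parameters, and function name are all illustrative, not the published experimental setup:

```python
import random

def hat_q_learning(demo_policy, n_states=5, n_actions=2, bonus=1.0,
                   episodes=200, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Q-learning where actions favored by demo_policy(state) receive
    an initial value bonus, nudging the learner toward the teacher."""
    random.seed(seed)
    # Initialize Q: demonstrated actions start at `bonus`, others at 0.
    q = [[bonus if demo_policy(s) == a else 0.0 for a in range(n_actions)]
         for s in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s < n_states - 1:
            a = (random.randrange(n_actions) if random.random() < epsilon
                 else max(range(n_actions), key=lambda x: q[s][x]))
            # Chain world: action 1 moves right, action 0 stays;
            # reward only for reaching the final state.
            s2 = s + 1 if a == 1 else s
            r = 1.0 if s2 == n_states - 1 else 0.0
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q
```

Because the learner still updates its values from environment reward, it can recover from imperfect demonstrations, which is central to the "varying ability" results above.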
RoboCup Autonomous Robot Soccer
RoboCup is an international competition that aims to promote AI and robotics research through the development of autonomous soccer-playing robots. In 2010 and 2011, WPI competed in the Standard Platform League, which requires all teams to use the Aldebaran Nao robot. The robots are not remote controlled in any way: they observe the world through two head-mounted cameras and use this information to recognize objects in the environment and estimate their own location on the field. The robots communicate with each other over a wireless network and use on-board processing to decide which actions to take. Here is an article describing the event and the WPI Warriors team.
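A typical first step in the camera-based perception described above is color segmentation: find the pixels matching an object's color class (the ball, the goals) and compute the region's centroid. A minimal sketch, with the image format and color predicate as assumptions rather than the team's actual vision code:

```python
def find_ball(image, is_ball_color):
    """Scan an image (rows of pixels) for pixels matching the ball's
    color class; return the (x, y) centroid of the match, or None."""
    xs, ys = [], []
    for y, row in enumerate(image):
        for x, pixel in enumerate(row):
            if is_ball_color(pixel):
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```

On the robot, the detected image position would then be converted through the camera model into a bearing and distance estimate used by localization and behavior selection.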
Open Source Kinect Interface for Humanoid Robot Control
The ROS Nao-OpenNI package provides gesture-based control for humanoid robots using the Microsoft Kinect sensor. The video on the right shows the code being used to control an Aldebaran Nao.
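Gesture-based control of this kind generally maps 3D skeleton joint positions from the depth sensor to humanoid joint angles. The sketch below shows one way a shoulder pitch angle might be derived from tracked joints; the axis convention (x right, y up, z forward) and the function itself are assumptions for illustration, not the Nao-OpenNI package's actual API:

```python
import math

def shoulder_pitch(shoulder, elbow):
    """Derive a shoulder pitch angle (radians) from 3D skeleton joint
    positions, e.g. for forwarding as a humanoid joint command.
    Axes assumed: x right, y up, z forward."""
    dy = elbow[1] - shoulder[1]
    dz = elbow[2] - shoulder[2]
    # Pitch: rotation of the upper arm in the vertical (y-z) plane;
    # 0 with the arm pointing forward, pi/2 with it hanging down.
    return math.atan2(-dy, dz)
```

A full controller computes several such angles per arm (shoulder roll, elbow yaw and roll), smooths them over time, and publishes them to the robot's joint interface.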