Augmentative and Alternative Communication

A close-up picture of a computer screen showing AAC pictogram images describing different words such as Up, Down, Me, You.

Speech-generating augmentative and alternative communication (AAC) devices can be used to express thoughts, needs, wants, and ideas when someone cannot rely on their own speech to communicate. Examples of these devices include specialized keyboards, adapted controllers, and text-to-speech interfaces. We study conversational agency in AAC: how people who use AAC devices, known as augmented communicators, advance their goals in conversation under social and device constraints. We also study how augmented communicators use their devices with different types of conversation partners, to inform new designs that reduce user burden and support their expression of conversational agency.

For more information, please contact Stephanie.

Check out our CHI 2020 video presentation to learn more.

Relevant publications

Co-designing Socially Assistive Sidekicks for Motion-based AAC.
Stephanie Valencia, Michal Luria, Amy Pavel, Jeffrey P. Bigham, Henny Admoni. Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI). 2021. pdf


Robot Self-Assessment (MURI)

A Baxter humanoid robot uses its gripper on its arm to stack a Jenga block on top of the tower that a human is constructing.

Autonomous agents need to learn increasingly competent and complex behaviors. One way to learn these behaviors effectively is to include people in the learning process. We are therefore investigating human-in-the-loop strategies that are both more user-friendly and lead to more efficient learning. Please direct any questions to Pallavi Koppol.
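
As a concrete illustration of this idea, the sketch below chooses among query types by estimating how much each one would reduce the agent's uncertainty relative to the effort it demands from the person. The hypothesis set, likelihood models, and query costs are all invented placeholders, not the implementation from our publications:

```python
import numpy as np

# Illustrative sketch: the agent maintains a belief over candidate reward
# hypotheses and picks the query type (demonstration, preference, or
# correction) whose answers are expected to shrink that belief the most
# per unit of user effort. Likelihoods and costs are assumptions.

rng = np.random.default_rng(0)
n_hypotheses = 50
belief = np.full(n_hypotheses, 1.0 / n_hypotheses)   # uniform prior

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def expected_posterior_entropy(belief, likelihoods):
    """likelihoods[a, h] = P(answer a | hypothesis h); columns sum to 1."""
    evidence = likelihoods @ belief                   # P(answer a)
    total = 0.0
    for a, p_a in enumerate(evidence):
        if p_a > 0:
            posterior = likelihoods[a] * belief / p_a
            total += p_a * entropy(posterior)
    return total

def random_likelihoods(n_answers):
    # Each hypothesis induces a distribution over possible answers.
    return rng.dirichlet(np.ones(n_answers), size=n_hypotheses).T

query_types = {                                       # answers per query type
    "demonstration": random_likelihoods(8),           # rich but costly
    "preference":    random_likelihoods(2),           # a single binary bit
    "correction":    random_likelihoods(4),
}
query_cost = {"demonstration": 3.0, "preference": 1.0, "correction": 2.0}

# Pick the query with the best information gain per unit of user effort.
h0 = entropy(belief)
best = max(query_types, key=lambda q:
           (h0 - expected_posterior_entropy(belief, query_types[q])) / query_cost[q])
print("next query type:", best)
```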

Relevant publications

INQUIRE: INteractive Querying for User-aware Informative REasoning.
Tesca Fitzgerald, Pallavi Koppol, Patrick Callaghan, Russell Quinlan Jun Hei Wong, Reid Simmons, Oliver Kroemer, Henny Admoni. Conference on Robot Learning (CoRL). 2022. pdf

Reasoning about Counterfactuals to Improve Human Inverse Reinforcement Learning.
Michael S. Lee, Henny Admoni, Reid Simmons. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2022. pdf

Metrics for Robot Proficiency Self-Assessment and Communication of Proficiency in Human-Robot Teams.
Adam Norton, Henny Admoni, Jacob Crandall, Tesca Fitzgerald, Alvika Gautam, Michael Goodrich, Amy Saretsky, Matthias Scheutz, Reid Simmons, Aaron Steinfeld, Holly Yanco. ACM Transactions on Human-Robot Interaction (THRI), 11(3). 2022. pdf

Machine Teaching for Human Inverse Reinforcement Learning.
Michael S. Lee, Henny Admoni, Reid Simmons. Frontiers in Robotics and AI. 2021. pdf

Interaction Considerations in Learning from Humans.
Pallavi Koppol, Henny Admoni, Reid Simmons. Proceedings of the 30th International Joint Conference on Artificial Intelligence (IJCAI). 2021. pdf

Understanding the Relationship between Interactions and Outcomes in Human-in-the-Loop Machine Learning.
Yuchen Cui*, Pallavi Koppol*, Henny Admoni, Scott Niekum, Reid Simmons, Aaron Steinfeld, Tesca Fitzgerald. Proceedings of the 30th International Joint Conference on Artificial Intelligence (IJCAI). 2021. pdf

Iterative Interactive Reward Learning.
Pallavi Koppol, Henny Admoni, Reid Simmons. Participatory Approaches to Machine Learning Workshop at ICML. 2020. pdf


Assistive Manipulation Through Intent Recognition


An upper body mobility limitation can severely impact a person's quality of life, preventing everyday tasks such as picking up a cup or opening a door. The U.S. Census Bureau has indicated that more than 8.2% of the U.S. population, or 19.9 million Americans, experience upper body limitations. Assistive robots offer a way for people with severe mobility impairments to complete daily tasks. However, current assistive robots primarily operate through teleoperation, which requires significant cognitive and physical effort from the user. We explore how these assistive robots can be improved with artificial intelligence so that they take an active role in helping their users. Drawing from our understanding of human verbal and nonverbal behaviors (like speech and eye gaze) during robot teleoperation, we study how intelligent robots can predict human intent during a task and assist toward task completion. We aim to decrease operator fatigue and task duration when using assistive robots by employing human-sensitive shared autonomy.
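
As a rough sketch of the shared-autonomy idea in this setting, the code below maintains a Bayesian belief over candidate goals, updates it from joystick and gaze signals, and blends the user's command with an autonomous command as confidence grows. The 2D scenario, observation model, and blending rule are simplifying assumptions, not our deployed system:

```python
import numpy as np

# Minimal sketch of gaze- and control-informed goal prediction with
# shared autonomy: two candidate goals, a 2D end-effector, a Boltzmann
# observation model, and confidence-based blending (all assumptions).

goals = np.array([[1.0, 0.0], [0.0, 1.0]])   # candidate goal positions
belief = np.array([0.5, 0.5])                # prior over goals
ee_pos = np.array([0.0, 0.0])                # end-effector position

def toward(src, dst):
    v = np.asarray(dst) - np.asarray(src)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def update_belief(belief, ee_pos, user_cmd, gaze_point, beta=5.0):
    """Bayes update: commands and gaze aimed at a goal raise its probability."""
    scores = np.array([
        beta * (user_cmd @ toward(ee_pos, g)            # joystick alignment
                + toward(ee_pos, gaze_point) @ toward(ee_pos, g))  # gaze alignment
        for g in goals
    ])
    likelihood = np.exp(scores - np.max(scores))
    posterior = likelihood * belief
    return posterior / posterior.sum()

def blended_command(belief, ee_pos, user_cmd):
    """Arbitrate: assist more as goal confidence grows."""
    g = goals[np.argmax(belief)]
    alpha = belief.max()                     # confidence-based assistance level
    return alpha * toward(ee_pos, g) + (1 - alpha) * user_cmd

# One control step: the user nudges right while looking near goal 0.
user_cmd = np.array([0.9, 0.1])
gaze_point = np.array([0.8, 0.1])
belief = update_belief(belief, ee_pos, user_cmd, gaze_point)
print("belief:", belief, "-> command:", blended_command(belief, ee_pos, user_cmd))
```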

Reuben, Ben, and Maggie were the contacts for this project.

Relevant publications

Gaze Complements Control Input for Goal Prediction During Assisted Teleoperation.
Reuben M. Aronson, Henny Admoni. Robotics: Science and Systems. 2022. pdf supplement

Eye Gaze for Assistive Manipulation.
Reuben M. Aronson, Henny Admoni. HRI Pioneers workshop. 2020. pdf

Semantic Gaze Labeling for Human-Robot Shared Manipulation.
Reuben M. Aronson and Henny Admoni. Proceedings of the ACM Symposium on Eye Tracking Research and Applications (ETRA). 2019. pdf

Gaze for Error Detection During Human-Robot Shared Manipulation.
Reuben M. Aronson and Henny Admoni. Towards a Framework for Joint Action Workshop at RSS. 2018. pdf

Eye-Hand Behavior in Human-Robot Shared Manipulation.
Reuben M. Aronson, Thiago Santini, Thomas C. Kübler, Enkelejda Kasneci, Siddhartha Srinivasa, and Henny Admoni. Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI). 2018. pdf


Uncertainty Estimation and Resolution in Task Transfer


Adaptability is an essential skill in human cognition, enabling us to draw from extensive, life-long experience with various objects and tasks in order to address novel problems. To date, robots do not have this kind of adaptability; yet as our expectations of robots' interactive and assistive capacity grow, it will be increasingly important for them to adapt to unpredictable environments much as humans do.

We explore how different types of interaction enable a robot to address novel task variations. Prior work has shown how different types of transfer problems can be addressed via continued interaction between the teacher and the robot. Using a variety of interaction types allows a robot to obtain different task information and then address transfer problems of varying complexity, such as identifying object replacements or using tools creatively. Our current work involves assessing the robot's proficiency at a task: before attempting a novel task variation, the robot needs to assess what knowledge it is missing and which interaction type is most likely to provide it.
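
A minimal sketch of this knowledge assessment step might look like the following, where the knowledge categories, confidence scores, and the mapping from knowledge gaps to interaction types are hypothetical placeholders:

```python
# Illustrative sketch: before attempting a task variation, the robot
# scores its confidence in each piece of task knowledge and requests the
# interaction type most likely to fill the largest gap. All values and
# mappings below are invented for illustration.

task_knowledge = {
    "object_identity":  0.35,   # is this jar a valid replacement for the cup?
    "grasp_parameters": 0.80,
    "action_ordering":  0.95,
}

# Which interaction best resolves each kind of gap (an assumption):
best_interaction = {
    "object_identity":  "ask the teacher to label or point at a replacement object",
    "grasp_parameters": "request a corrective demonstration of the grasp",
    "action_ordering":  "ask a yes/no question about the next step",
}

gap = min(task_knowledge, key=task_knowledge.get)   # least-confident knowledge
if task_knowledge[gap] < 0.6:                       # proficiency threshold (assumed)
    print(f"low confidence in {gap}: {best_interaction[gap]}")
else:
    print("confident enough to attempt the task variation autonomously")
```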

Questions can be directed to Tesca.


Robot Self-Assessment (MURI)

A Baxter humanoid robot uses its gripper on its arm to stack a Jenga block on top of the tower that a human is constructing.

When a robot is uncertain about how it should complete a task, it should ask a human teacher for help. Doing so, however, requires the robot to locate the source of its uncertainty and to choose the most effective method of querying the teacher to resolve it. We are developing methods that address both problems by modeling the robot's expected and actual knowledge as it completes a task or interacts with a teacher. Please contact Tesca Fitzgerald for more information.
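
One simple way to operationalize the expected-versus-actual comparison, sketched below under assumed outcome distributions, is to flag the task step whose observed outcomes diverge most from the robot's predictions:

```python
import numpy as np

# Sketch of localizing the source of uncertainty by comparing what the
# robot expected to happen at each step with what it actually observed.
# The steps, outcome distributions, and KL criterion are illustrative.

def kl(p, q, eps=1e-9):
    p, q = np.asarray(p) + eps, np.asarray(q) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# P(outcome) the robot predicted vs. the empirical outcome frequencies,
# as [success, failure] pairs.
expected = {
    "locate_block": [0.90, 0.10],
    "grasp_block":  [0.80, 0.20],
    "place_block":  [0.85, 0.15],
}
observed = {
    "locate_block": [0.88, 0.12],
    "grasp_block":  [0.40, 0.60],   # grasping is going much worse than expected
    "place_block":  [0.80, 0.20],
}

surprise = {step: kl(observed[step], expected[step]) for step in expected}
query_step = max(surprise, key=surprise.get)
print(f"query the teacher about '{query_step}' (surprise={surprise[query_step]:.3f})")
```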


Recognizing and Reacting to Human Needs Determined by Social Signals

A video from a cooking show with pose and gaze points overlaid.

Being able to identify which humans need help, and when, will enable robots to spontaneously offer assistance and to triage how their help is best distributed. This kind of assessment requires an understanding of how humans naturally communicate their needs to others, as well as a model of individuals and their needs over time. To achieve and demonstrate these goals, this project built a waiter robot that anticipates customer needs and responds to them, whether it is actively hailed or the need is only implicit. This setting also showcases the challenge of detecting such signals while humans are engaged in human-human group interactions and are not solely focused on their robot collaborator. Successfully implementing this system can help improve restaurant efficiency and provide insight into how to model human thinking.
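
For intuition, a hand-tuned (and entirely hypothetical) version of the underlying decision might score each table's social signals against a threshold; the published case study uses learned activity recognition rather than fixed weights like these:

```python
# Hypothetical sketch of turning observed social signals into a
# "needs help" score for a table. Features, weights, and threshold are
# invented for illustration.

def needs_attention(signals, threshold=0.5):
    weights = {
        "gaze_at_staff":   0.45,  # explicit hailing cue
        "raised_hand":     0.60,
        "menu_closed":     0.25,  # implicit readiness-to-order cue
        "empty_glasses":   0.30,
        "in_conversation": -0.35, # engaged groups rarely want interruption
    }
    score = sum(weights[k] for k, active in signals.items() if active)
    return score >= threshold, score

table = {"gaze_at_staff": True, "raised_hand": False,
         "menu_closed": True, "empty_glasses": False,
         "in_conversation": True}
print(needs_attention(table))   # (False, 0.35): watch, but don't approach yet
```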

This project has been completed, but questions can be directed to Ada.

Relevant publications

Activity Recognition in Restaurants to Address Underlying Needs: A Case Study.
Ada V. Taylor, Roman Kaufman, Michael Huang, Henny Admoni. Proceedings of IEEE International Conference on Robot & Human Interactive Communication (RO-MAN). 2022. pdf

Eye-gaze for Intelligent Driving Assistance

A video showing a driver's POV in a simulator with their gaze overlaid.

Programmable light curtains, an active sensing technology, can obtain 3D information about the world more precisely, densely, and frequently than LiDAR devices. We study how driver eye gaze can inform policies for placing and shaping the light curtain in autonomous and assisted driving.
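
As a geometric intuition for gaze-informed curtain placement, the sketch below shapes a curtain's depth profile so that it is deepest where the driver is looking. The curtain parameterization and gaze model are illustrative assumptions, not the actual control stack:

```python
import numpy as np

# Geometric sketch: bias where the light curtain is placed toward the
# driver's gaze. We define a lateral curtain profile (a curve in the
# ground plane) whose depth peaks at the 3D gaze point; everything here
# is a simplifying assumption for illustration.

def curtain_profile(gaze_point_xz, width=20.0, n=41):
    """Return (lateral, depth) samples of a curtain whose depth follows gaze."""
    gx, gz = gaze_point_xz
    x = np.linspace(-width / 2, width / 2, n)
    # Curve the curtain so it is deepest where the driver is looking.
    z = gz - 0.02 * (x - gx) ** 2
    return np.stack([x, np.clip(z, 2.0, None)], axis=1)

# Driver is looking 15 m ahead and 3 m to the right.
profile = curtain_profile((3.0, 15.0))
print(profile[::10])   # a few (lateral, depth) samples for the curtain
```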

Please contact Abhijat or Gustavo for more information.

Relevant publications

Characterizing Drivers’ Peripheral Vision via the Functional Field of View for Intelligent Driving Assistance.
Abhijat Biswas, Henny Admoni. IEEE Intelligent Vehicles Symposium (IV). 2023. pdf

Mitigating Causal Confusion in Driving Agents via Gaze Supervision.
Abhijat Biswas, BA Pardhi, Caleb Chuck, Jarrett Holtz, Scott Niekum, Henny Admoni, and Alessandro Allievi. Workshop on Aligning Robot Representation with Humans @ Conference on Robot Learning (CoRL). 2022. pdf

DReyeVR: Democratizing Virtual Reality Driving Simulation for Behavioural & Interaction Research.
Gustavo Silvera*, Abhijat Biswas*, Henny Admoni. Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI). 2022. pdf


Evaluation Tools for Social Robot Navigation

A video showing a social navigation simulator with different navigation algorithms all trying to complete the same navigation scenario.

We are actively maintaining SocNavBench, a benchmark for evaluating social navigation algorithms against each other in a realistic, consistent, and scalable way. Check out the GitHub page!

Please contact Abhijat or Gustavo for more information.

Relevant publications

SocNavBench: A Grounded Simulation Testing Framework for Social Navigation.
Abhijat Biswas, Allan Wang, Gustavo Silvera, Aaron Steinfeld, Henny Admoni. ACM Transactions on Human-Robot Interaction (THRI). 2021. pdf


Robot Self-Assessment (MURI)

A Baxter humanoid robot uses its gripper on its arm to stack a Jenga block on top of the tower that a human is constructing.

Even after autonomous agents have learned complex behaviors, we likely won't rely on them until we can reliably predict their behavior in new situations. Thus, we are researching how agents can teach the nuances of their learned behaviors to humans through well-selected demonstrations. We first leverage inverse reinforcement learning and human learning strategies (e.g., scaffolding) to select demonstrations that are both informative and easily understood by humans. We then ask humans to predict agent behavior in unseen environments to test their understanding and to inform the next demonstrations to show, in a closed-loop teaching process. Please contact Michael Lee for more information.
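
The sketch below illustrates the closed-loop selection idea: model the human's belief as a set of candidate reward weights, treat each demonstration as a set of half-space constraints the true reward must satisfy, and repeatedly show the demonstration that prunes the most remaining candidates. The weight samples and demonstration constraints are invented for illustration:

```python
import numpy as np

# Sketch of closed-loop machine teaching: maintain a sampled model of the
# human's belief over reward weights and show the demonstration that rules
# out the most still-plausible candidates. A scaffolded variant would also
# cap per-demo difficulty, starting easy; omitted here for brevity.

rng = np.random.default_rng(1)
human_belief = rng.normal(size=(500, 3))              # candidate reward weights
human_belief /= np.linalg.norm(human_belief, axis=1, keepdims=True)

# Each demo implies constraints w @ c >= 0 (the shown action beat alternatives).
demos = {
    "demo_easy": np.array([[1.0, 0.0, 0.0]]),
    "demo_mid":  np.array([[0.7, 0.7, 0.0], [0.0, 1.0, -0.2]]),
    "demo_hard": np.array([[0.3, -0.5, 0.9], [0.6, 0.2, 0.7]]),
}

def consistent(belief, constraints):
    """Keep weight samples that satisfy every half-space constraint."""
    return belief[np.all(belief @ constraints.T >= 0, axis=1)]

# Closed loop: repeatedly show the demo that shrinks the belief the most.
shown = []
while demos and len(human_belief) > 25:
    best = min(demos, key=lambda d: len(consistent(human_belief, demos[d])))
    human_belief = consistent(human_belief, demos.pop(best))
    shown.append(best)
print("teaching sequence:", shown, "| remaining hypotheses:", len(human_belief))
```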

Relevant publications

INQUIRE: INteractive Querying for User-aware Informative REasoning.
Tesca Fitzgerald, Pallavi Koppol, Patrick Callaghan, Russell Quinlan Jun Hei Wong, Reid Simmons, Oliver Kroemer, Henny Admoni. Conference on Robot Learning (CoRL). 2022. pdf

Reasoning about Counterfactuals to Improve Human Inverse Reinforcement Learning.
Michael S. Lee, Henny Admoni, Reid Simmons. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2022. pdf

Metrics for Robot Proficiency Self-Assessment and Communication of Proficiency in Human-Robot Teams.
Adam Norton, Henny Admoni, Jacob Crandall, Tesca Fitzgerald, Alvika Gautam, Michael Goodrich, Amy Saretsky, Matthias Scheutz, Reid Simmons, Aaron Steinfeld, Holly Yanco. ACM Transactions on Human-Robot Interaction (THRI), 11(3). 2022. pdf

Machine Teaching for Human Inverse Reinforcement Learning.
Michael S. Lee, Henny Admoni, Reid Simmons. Frontiers in Robotics and AI. 2021. pdf

Interaction Considerations in Learning from Humans.
Pallavi Koppol, Henny Admoni, Reid Simmons. Proceedings of the 30th International Joint Conference on Artificial Intelligence (IJCAI). 2021. pdf

Understanding the Relationship between Interactions and Outcomes in Human-in-the-Loop Machine Learning.
Yuchen Cui*, Pallavi Koppol*, Henny Admoni, Scott Niekum, Reid Simmons, Aaron Steinfeld, Tesca Fitzgerald. Proceedings of the 30th International Joint Conference on Artificial Intelligence (IJCAI). 2021. pdf

Iterative Interactive Reward Learning.
Pallavi Koppol, Henny Admoni, Reid Simmons. Participatory Approaches to Machine Learning Workshop at ICML. 2020. pdf


Audience-Aware Legibility

A view of a legible path which responds to an observer's view, versus one that moves without considering the observer.

Robots navigating shared spaces often need to communicate their goals so that human observers can anticipate the robot's future actions. These observers are often scattered throughout the environment, and each may have only a partial view of the robot and its movements. A path that communicates nonverbally with multiple observers must be sufficiently understandable to all of them. We aim to create an algorithm for intent-expressive, legible motion that accounts for the perspectives of multiple observers with limited fields of view, balancing communication across the whole audience. Prior work in legible motion does not account for observers' limited fields of view, which can waste communication effort on motion the intended audience never sees; our observer-aware legibility improves on this. Our user studies have shown that audience-aware legibility will require accounting for even more nuanced tradeoffs to perform well across multiple observers with different constraints.
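
The sketch below scores a path's legibility for an audience of observers with limited view cones: a standard Boltzmann observer model accumulates evidence about the robot's goal only while the robot is visible to that observer. The scene geometry, cost model, and visibility test are illustrative assumptions rather than our algorithm:

```python
import numpy as np

# Sketch of audience-aware legibility scoring. Path costs are Euclidean
# lengths, the observer is Boltzmann-rational, and each observer only
# sees the robot inside a fixed view cone (all simplifying assumptions).

goals = np.array([[5.0, 5.0], [5.0, -5.0]])
true_goal = 0

def dist(a, b):
    return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))

def p_goal(start, q, path_cost_so_far):
    """Boltzmann observer: P(goal | path so far), uniform prior over goals."""
    scores = np.array([
        np.exp(-(path_cost_so_far + dist(q, g)) + dist(start, g)) for g in goals
    ])
    return scores / scores.sum()

def visible(q, obs_pos, obs_dir, half_angle=np.pi / 4):
    v = np.asarray(q) - np.asarray(obs_pos)
    n = np.linalg.norm(v)
    return n > 0 and (v / n) @ obs_dir >= np.cos(half_angle)

def audience_legibility(path, observers):
    """Mean P(true goal) over the timesteps each observer can actually see."""
    start, per_observer = path[0], []
    for obs_pos, obs_dir in observers:
        cost, seen = 0.0, []
        for prev, q in zip(path, path[1:]):
            cost += dist(prev, q)
            if visible(q, obs_pos, obs_dir):
                seen.append(p_goal(start, q, cost)[true_goal])
        per_observer.append(np.mean(seen) if seen else 0.5)  # unseen -> uninformed
    return float(np.mean(per_observer))

path = [(0, 0), (1, 1.5), (2, 2.5), (3.5, 3.5), (5, 5)]   # exaggerates toward goal 0
observers = [((6.0, 0.0), np.array([-1.0, 0.0])),
             ((0.0, 6.0), np.array([0.0, -1.0]))]
print("audience legibility:", audience_legibility(path, observers))
```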

Questions can be directed to Ada.

Relevant publications

Observer-Aware Legibility for Social Navigation.
Ada V. Taylor, Ellie Mamantov, Henny Admoni. Proceedings of IEEE International Conference on Robot & Human Interactive Communication (RO-MAN). 2022. pdf

Wait Wait, Nonverbally Tell Me: Legibility for Use in Restaurant Navigation.
Ada V. Taylor, Ellie Mamantov, and Henny Admoni. Workshop on Social Robot Navigation at RSS 2021. 2021. pdf

Now You See It: The Effect of Multiple Audience Perspectives on Path Legibility.
Ada V. Taylor, Henny Admoni. Workshop on AIxFood at IJCAI-PRICAI 2020. 2021.