Augmentative and Alternative Communication

A close-up picture of a computer screen showing AAC pictogram images describing different words such as Up, Down, Me, You.

Speech-generating augmentative and alternative communication (AAC) devices allow people to express thoughts, needs, wants, and ideas when they cannot rely on their own speech to communicate. Examples of these devices include specialized keyboards, adapted controllers, and text-to-speech interfaces. We study conversational agency in AAC: how people who use AAC devices, known as augmented communicators, advance their goals in conversation under social and device constraints. We also study how augmented communicators interact with different types of conversation partners, using these findings to inform new designs that reduce user burden and support the expression of conversational agency.

For more information, please contact Stephanie.

Check out our CHI 2020 video presentation to learn more.

Relevant publications

Co-designing Socially Assistive Sidekicks for Motion-based AAC.
Stephanie Valencia, Michal Luria, Amy Pavel, Jeffrey P. Bigham, Henny Admoni. Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI). 2021. pdf


Robot Self-Assessment (MURI)

A Baxter humanoid robot uses its gripper on its arm to stack a Jenga block on top of the tower that a human is constructing.

Autonomous agents need to learn increasingly competent and complex behaviors. One effective way to learn these behaviors is to include people in the learning process. We are therefore investigating human-in-the-loop strategies that both reduce the burden on human teachers and lead to efficient learning. Please direct any questions to Pallavi Koppol.
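As a concrete illustration of the kind of strategy we study, the sketch below selects the query type (e.g., preference vs. binary feedback) whose answer is expected to most reduce uncertainty over reward hypotheses. This is a minimal, self-contained example with hypothetical likelihood models, not code from INQUIRE or any of the systems cited below.

```python
import numpy as np

# Illustrative sketch: pick the query type whose expected answer most
# reduces the robot's uncertainty over reward hypotheses. All names
# and numbers here are assumptions for illustration.

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def expected_info_gain(belief, likelihoods):
    """belief: P(hypothesis), shape (H,).
    likelihoods: P(answer | hypothesis), shape (A, H)."""
    p_answer = likelihoods @ belief                # P(answer), shape (A,)
    gain = entropy(belief)
    for a, p_a in enumerate(p_answer):
        if p_a == 0:
            continue
        posterior = likelihoods[a] * belief / p_a  # Bayes update
        gain -= p_a * entropy(posterior)
    return gain

def choose_query(belief, query_models):
    """query_models: dict mapping query type -> likelihood matrix."""
    return max(query_models,
               key=lambda q: expected_info_gain(belief, query_models[q]))

# Toy example: three reward hypotheses, two query types.
belief = np.array([0.5, 0.3, 0.2])
query_models = {
    "preference": np.array([[0.9, 0.2, 0.5],
                            [0.1, 0.8, 0.5]]),
    "binary_feedback": np.array([[0.6, 0.5, 0.5],
                                 [0.4, 0.5, 0.5]]),
}
print(choose_query(belief, query_models))  # -> "preference"
```

In this toy setup the preference query is more discriminative between hypotheses, so it yields the larger expected information gain and is asked first.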

Relevant publications

INQUIRE: INteractive Querying for User-aware Informative REasoning.
Tesca Fitzgerald, Pallavi Koppol, Patrick Callaghan, Russell Quinlan Jun Hei Wong, Reid Simmons, Oliver Kroemer, Henny Admoni. Conference on Robot Learning (CoRL). 2022. pdf

Reasoning about Counterfactuals to Improve Human Inverse Reinforcement Learning.
Michael S. Lee, Henny Admoni, Reid Simmons. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2022. pdf

Metrics for Robot Proficiency Self-Assessment and Communication of Proficiency in Human-Robot Teams.
Adam Norton, Henny Admoni, Jacob Crandall, Tesca Fitzgerald, Alvika Gautam, Michael Goodrich, Amy Saretsky, Matthias Scheutz, Reid Simmons, Aaron Steinfeld, Holly Yanco. ACM Transactions on Human-Robot Interaction (THRI), 11(3). 2022. pdf

Machine Teaching for Human Inverse Reinforcement Learning.
Michael S. Lee, Henny Admoni, Reid Simmons. Frontiers in Robotics and AI. 2021. pdf

Interaction Considerations in Learning from Humans.
Pallavi Koppol, Henny Admoni, Reid Simmons. Proceedings of the 30th International Joint Conference on Artificial Intelligence (IJCAI). 2021. pdf

Understanding the Relationship between Interactions and Outcomes in Human-in-the-Loop Machine Learning.
Yuchen Cui*, Pallavi Koppol*, Henny Admoni, Scott Niekum, Reid Simmons, Aaron Steinfeld, Tesca Fitzgerald. Proceedings of the 30th International Joint Conference on Artificial Intelligence (IJCAI). 2021. pdf

Iterative Interactive Reward Learning.
Pallavi Koppol, Henny Admoni, Reid Simmons. Participatory Approaches to Machine Learning Workshop at ICML. 2020. pdf


Assistive Manipulation Through Intent Recognition


An upper body mobility limitation can severely impact a person's quality of life, preventing everyday tasks such as picking up a cup or opening a door. According to the U.S. Census Bureau, more than 8.2% of the U.S. population, or 19.9 million Americans, live with upper body limitations. Assistive robots offer a way for people with severe mobility impairments to complete daily tasks. However, current assistive robots primarily operate through teleoperation, which demands significant cognitive and physical effort from the user. We explore how artificial intelligence can enable these robots to take an active role in helping their users. Drawing on our understanding of human verbal and nonverbal behaviors (such as speech and eye gaze) during robot teleoperation, we study how intelligent robots can predict human intent during a task and assist toward its completion. By employing human-sensitive shared autonomy, we aim to decrease operator fatigue and task duration when using assistive robots.
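To illustrate one common shared-autonomy formulation (a sketch under assumed names, not our deployed system): the robot maintains a belief over candidate goals, updates it from the user's control input, and blends its own assistance with the user's command in proportion to its confidence in the most likely goal.

```python
import numpy as np

# Minimal shared-autonomy sketch: goal inference from control input,
# then confidence-weighted blending. Illustrative only; gaze or other
# signals could enter the observation model the same way.

def update_goal_belief(belief, ee_pos, user_cmd, goals, beta=5.0):
    """Boltzmann-style observation model: commands pointing toward a
    goal make that goal more likely."""
    cmd_dir = user_cmd / (np.linalg.norm(user_cmd) + 1e-8)
    scores = []
    for g in goals:
        direction = g - ee_pos
        direction /= np.linalg.norm(direction) + 1e-8
        scores.append(np.exp(beta * float(np.dot(direction, cmd_dir))))
    posterior = belief * np.array(scores)
    return posterior / posterior.sum()

def blended_command(belief, ee_pos, user_cmd, goals, gain=1.0):
    """Arbitrate between the user's command and the robot's policy
    toward the most likely goal, weighted by confidence."""
    g = goals[int(np.argmax(belief))]
    robot_cmd = gain * (g - ee_pos)
    alpha = float(np.max(belief))       # confidence-based arbitration
    return alpha * robot_cmd + (1 - alpha) * user_cmd

# Toy usage: two candidate goals, a command roughly toward the first.
goals = [np.array([0.5, 0.0, 0.2]), np.array([0.3, 0.4, 0.2])]
belief = np.ones(2) / 2
ee_pos = np.zeros(3)
user_cmd = np.array([0.4, 0.05, 0.15])
belief = update_goal_belief(belief, ee_pos, user_cmd, goals)
print(belief, blended_command(belief, ee_pos, user_cmd, goals))
```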

Reuben, Ben, and Maggie were the contacts on this project.

Relevant publications

Gaze Complements Control Input for Goal Prediction During Assisted Teleoperation.
Reuben M. Aronson, Henny Admoni. Robotics: Science and Systems. 2022. pdf supplement

Eye Gaze for Assistive Manipulation.
Reuben M. Aronson, Henny Admoni. HRI Pioneers workshop. 2020. pdf

Semantic Gaze Labeling for Human-Robot Shared Manipulation.
Reuben M. Aronson, Henny Admoni. Proceedings of the ACM Symposium on Eye Tracking Research and Applications (ETRA). 2019. pdf

Gaze for Error Detection During Human-Robot Shared Manipulation.
Reuben M. Aronson, Henny Admoni. Towards a Framework for Joint Action Workshop at RSS. 2018. pdf

Eye-Hand Behavior in Human-Robot Shared Manipulation.
Reuben M. Aronson, Thiago Santini, Thomas C. Kübler, Enkelejda Kasneci, Siddhartha Srinivasa, Henny Admoni. Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI). 2018. pdf


Uncertainty Estimation and Resolution in Task Transfer


Adaptability is an essential skill in human cognition, enabling us to draw on extensive, lifelong experience with various objects and tasks to address novel problems. Robots do not yet have this kind of adaptability, and as our expectations of robots' interactive and assistive capacity grow, it will be increasingly important for them to adapt to unpredictable environments much as humans do.

We explore how different types of interaction enable a robot to address novel task variations. Prior work has shown how different types of transfer problems can be addressed through continued interaction between the teacher and the robot. Using a variety of interaction types allows a robot to obtain different task information and thus address transfer problems of varying complexity, such as identifying object replacements and creative tool use. Our current work involves assessing the robot's proficiency at a task: before attempting a novel task variation, the robot must determine what knowledge it is missing and which interaction type is most likely to provide it.
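The sketch below illustrates the second half of this idea with hypothetical knowledge components and interaction types: each low-confidence component of task knowledge is routed to the interaction most likely to supply it.

```python
# Illustrative sketch (hypothetical names, not our system): route each
# low-confidence piece of task knowledge to an interaction type that
# could supply it.

# Assumed mapping from missing knowledge to a suitable interaction.
INTERACTION_FOR = {
    "object_replacement": "ask the teacher to point out a substitute object",
    "action_parameters": "request a partial demonstration",
    "goal_state": "ask a yes/no question about the desired outcome",
}

def plan_interactions(confidence, threshold=0.7):
    """confidence: dict mapping knowledge component -> [0, 1] score."""
    return [INTERACTION_FOR[k] for k, c in confidence.items()
            if c < threshold and k in INTERACTION_FOR]

print(plan_interactions({"object_replacement": 0.4,
                         "action_parameters": 0.9,
                         "goal_state": 0.6}))
```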

Questions can be directed to Tesca.


Robot Self-Assessment (MURI)

A Baxter humanoid robot uses its gripper on its arm to stack a Jenga block on top of the tower that a human is constructing.

When a robot is uncertain about how to complete a task, it should ask a human teacher for help. Doing so, however, requires the robot to identify the source of its uncertainty and determine the most effective way to query the teacher to resolve it. We are developing methods that address both problems by modeling the robot's expected and actual knowledge while it completes a task or interacts with a teacher. Please contact Tesca Fitzgerald for more information.
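As a minimal sketch of this kind of self-assessment (assumed names and thresholds, not our actual models): track expected versus observed success for each skill with a Beta posterior, and flag a skill for teacher queries when the estimate is uncertain or falls short of expectation.

```python
import math

# Illustrative self-assessment sketch: compare expected and observed
# success per skill; query the teacher when the posterior is uncertain
# or underperforms the expectation. All numbers are assumptions.

class SkillModel:
    def __init__(self, expected_success):
        self.expected = expected_success  # designer/planner expectation
        self.successes = 1                # Beta(1, 1) uniform prior
        self.failures = 1

    def observe(self, success):
        if success:
            self.successes += 1
        else:
            self.failures += 1

    def estimate(self):
        return self.successes / (self.successes + self.failures)

    def variance(self):
        n = self.successes + self.failures
        return (self.successes * self.failures) / (n * n * (n + 1))

    def should_query_teacher(self, gap=0.2, max_std=0.15):
        uncertain = math.sqrt(self.variance()) > max_std
        underperforming = self.estimate() < self.expected - gap
        return uncertain or underperforming

grasp = SkillModel(expected_success=0.9)
for outcome in [True, False, False, True, False]:
    grasp.observe(outcome)
print(grasp.estimate(), grasp.should_query_teacher())  # ~0.43 True
```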


Recognizing and Reacting to Human Needs Determined by Social Signals

A video from a cooking show with pose and gaze points overlaid.

Being able to identify which humans need help, and when, will enable robots to spontaneously offer assistance and to triage how their help is best distributed. This kind of assessment requires an understanding of how humans naturally communicate their needs to others, as well as a model of individuals and their needs over time. To achieve and demonstrate these goals, this project built a waiter robot that can anticipate customer needs and respond to them both when actively hailed and when help is implicitly needed. This setting also showcases the challenge of detecting these signals while people are engaged in human-human group interactions and are not solely focused on their robot collaborator. Successfully implementing such a system can improve restaurant efficiency and provide insight into how to model human cognition.
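For illustration, a toy version of the sensing side might aggregate per-frame pose and gaze detections over a sliding window and apply a weighted threshold. The features, weights, and names below are assumptions for this sketch, not the model from the paper.

```python
import numpy as np

# Illustrative sketch (not the RO-MAN system): decide whether a table
# needs attention from simple pose/gaze features over a time window.

def window_features(frames):
    """frames: list of dicts with per-frame boolean detections for
    'gaze_at_robot', 'hand_raised', and 'menu_open'."""
    n = len(frames)
    return np.array([
        sum(f["gaze_at_robot"] for f in frames) / n,
        sum(f["hand_raised"] for f in frames) / n,
        sum(f["menu_open"] for f in frames) / n,
    ])

def needs_help(frames, weights=np.array([1.0, 2.0, 1.5]), threshold=1.0):
    """A hailing gesture counts more than a glance; sustained menu
    browsing suggests the table is ready to order."""
    return float(weights @ window_features(frames)) > threshold

# Toy usage: ten frames of steady gaze with an open menu.
frames = [{"gaze_at_robot": True, "hand_raised": False,
           "menu_open": True}] * 10
print(needs_help(frames))  # 1.0*1 + 2.0*0 + 1.5*1 = 2.5 > 1.0 -> True
```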

This project has been completed, but questions can be directed to Ada.

Relevant publications

Activity Recognition in Restaurants to Address Underlying Needs: A Case Study.
Ada V. Taylor, Roman Kaufman, Michael Huang, Henny Admoni. Proceedings of IEEE International Conference on Robot & Human Interactive Communication (RO-MAN). 2022. pdf