Assistive Manipulation Through Intent Recognition


An upper body mobility limitation can severely impact a person's quality of life. Such limitations can prevent people from performing everyday tasks such as picking up a cup or opening a door. The U.S. Census Bureau reports that more than 8.2% of the U.S. population, or 19.9 million Americans, live with upper body limitations. Assistive robots offer a way for people with severe mobility impairments to complete daily tasks. However, current assistive robots primarily operate through teleoperation, which demands significant cognitive and physical effort from the user. We explore how these assistive robots can be improved with artificial intelligence so that they take an active role in helping their users. Drawing on our understanding of human verbal and nonverbal behaviors (such as speech and eye gaze) during robot teleoperation, we study how intelligent robots can predict human intent during a task and assist toward task completion. By employing human-sensitive shared autonomy, we aim to develop technology that decreases operator fatigue and task duration when using assistive robots.
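
As a concrete illustration of the goal-prediction idea, the sketch below runs a toy Bayesian belief update that fuses two evidence sources: the user's control input and their gaze. It is only a sketch under simplified assumptions, not the model from the publications below; the likelihood terms, the beta_ctrl and beta_gaze weights, and the function name update_goal_belief are all hypothetical.

    import numpy as np

    def update_goal_belief(belief, hand_pos, control_input, gaze_point, goals,
                           beta_ctrl=5.0, beta_gaze=2.0):
        """One toy Bayesian update of a belief over candidate goals.

        Evidence 1: control inputs that point toward a goal raise its probability.
        Evidence 2: gaze fixations near a goal raise its probability.
        """
        log_belief = np.log(belief)
        for i, goal in enumerate(goals):
            to_goal = goal - hand_pos
            to_goal = to_goal / (np.linalg.norm(to_goal) + 1e-9)
            u = control_input / (np.linalg.norm(control_input) + 1e-9)
            # Control evidence: alignment between input direction and goal direction.
            log_belief[i] += beta_ctrl * float(np.dot(u, to_goal))
            # Gaze evidence: distance between the fixation point and the goal.
            log_belief[i] -= beta_gaze * float(np.linalg.norm(gaze_point - goal))
        belief = np.exp(log_belief - log_belief.max())  # subtract max for stability
        return belief / belief.sum()

Run at every timestep, an update like this concentrates probability on the goal that both the joystick direction and the gaze fixations support, and a shared-autonomy controller can blend its assistance in proportion to that belief.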

Ben and Maggie are the contacts on this project.

Relevant publications

Gaze Complements Control Input for Goal Prediction During Assisted Teleoperation.
Reuben M. Aronson and Henny Admoni. Robotics: Science and Systems (RSS). 2022. pdf supplement

HARMONIC: A Multimodal Dataset of Assistive Human–Robot Collaboration.
Benjamin A. Newman*, Reuben M. Aronson*, Siddhartha S. Srinivasa, Kris Kitani, and Henny Admoni. The International Journal of Robotics Research (IJRR). 2021. pdf

Inferring Goals with Gaze during Teleoperated Manipulation.
Reuben M. Aronson, Nadia AlMutlak, and Henny Admoni. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2021. pdf

Eye Gaze for Assistive Manipulation.
Reuben M. Aronson and Henny Admoni. HRI Pioneers Workshop. 2020. pdf

Semantic Gaze Labeling for Human-Robot Shared Manipulation.
Reuben M. Aronson and Henny Admoni. Proceedings of the ACM Symposium on Eye Tracking Research and Applications (ETRA). 2019. pdf

Gaze for Error Detection During Human-Robot Shared Manipulation.
Reuben M. Aronson and Henny Admoni. Towards a Framework for Joint Action Workshop at RSS. 2018. pdf

Eye-Hand Behavior in Human-Robot Shared Manipulation.
Reuben M. Aronson, Thiago Santini, Thomas C. Kübler, Enkelejda Kasneci, Siddhartha Srinivasa, and Henny Admoni. Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI). 2018. pdf


Mutual Adaptation in Human-Robot Collaboration

Effective team collaboration involves many factors, including understanding capabilities, coordination, and communication. In collaborative tasks, people have different goals as well as different preferred strategies for accomplishing them. As a robot adapts to its human partner, the human is simultaneously adapting to the robot. Mutual adaptation occurs when both partners can infer each other's preferences and adapt their own behavior as necessary. How can a robot reason about how its actions will affect a human partner? How can it strategically select actions to elicit specific behavior from that partner? We aim to provide insight into team coordination and to extend existing frameworks for human-robot collaboration by exploring the effects of communication, collaboration, and mutual coordination on team fluency and performance.
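
One simplified way to frame these questions is as inference over a small library of partner strategies followed by a best response. The sketch below is a toy rendering of that framing, not the method from the papers below; the strategy library, the payoff matrix, and the function names infer_strategy and best_response are assumptions.

    import numpy as np

    def infer_strategy(observations, strategies):
        """Posterior over a discrete library of partner strategies.

        strategies: list of dicts mapping state -> {action: probability}.
        observations: list of (state, action) pairs seen so far.
        """
        log_post = np.zeros(len(strategies))
        for k, strategy in enumerate(strategies):
            for state, action in observations:
                log_post[k] += np.log(strategy[state].get(action, 1e-6))
        post = np.exp(log_post - log_post.max())
        return post / post.sum()

    def best_response(post, payoff):
        """Robot action maximizing expected team payoff under the posterior.

        payoff[k, a]: team payoff of robot action a against partner strategy k.
        """
        return int(np.argmax(post @ payoff))

    # Example: after watching the partner go "left" twice in state "s0",
    # the robot picks the action that best complements the inferred strategy.
    strategies = [{"s0": {"left": 0.9, "right": 0.1}},
                  {"s0": {"left": 0.2, "right": 0.8}}]
    post = infer_strategy([("s0", "left"), ("s0", "left")], strategies)
    robot_action = best_response(post, np.array([[1.0, 0.0], [0.0, 1.0]]))

Selecting actions to elicit specific partner behavior can then be viewed as planning over how this posterior, and the partner's own adaptation, will respond to the robot's choices.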

Questions can be directed to Michelle.

Relevant publications

Coordination with Humans via Strategy Matching.
Michelle Zhao, Reid Simmons, and Henny Admoni. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2022. pdf

The Role of Adaptation in Collective Human-AI Teaming.
Michelle Zhao, Fade Eadeh, Thuy-Ngoc Nguyen, Pranav Gupta, Henny Admoni, Cleotilde Gonzalez, and Anita Williams Woolley. Topics in Cognitive Science. 2022. pdf

Teaching agents to understand teamwork: Evaluating and predicting collective intelligence as a latent variable via Hidden Markov Models.
Michelle Zhao, Fade Eadeh, Thuy-Ngoc Nguyen, Pranav Gupta, Henny Admoni, Cleotilde Gonzalez, and Anita Williams Woolley. Computers in Human Behavior. 2022. pdf

Adapting Language Complexity for AI-Based Assistance.
Michelle Zhao, Reid Simmons, and Henny Admoni. Workshop on Lifelong Learning and Personalization in Long-Term Human-Robot Interaction at HRI. 2021. pdf


Learning from Various Forms of Human-Provided Feedback

Humans can convey useful information to one another through a host of seemingly disparate modalities. For example, if I want to teach someone to set a breakfast table, I could demonstrate how to set the table; I could show the person two table settings and indicate which is my preferred choice (i.e., indicate a preference); I could critique a person's attempted table setting by telling them it was "good" or "bad"; or I could go a step further and correct a person's attempt to show them where they erred. Each of these interactions is different in form, yet each conveys similar information that is useful for accomplishing my teaching goal. When robots join us in daily living, we'll expect them to interpret and learn from our various forms of communication as we do: a feat that is effortless for us but still challenging for them.

This line of research explores learning from different interaction types in the service of two primary questions: How might we ease the challenge of interpretation for the robot? And how might we ease the teaching burden placed upon the human?
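
As one concrete instance, the preference interaction described above is often modeled with a Bradley-Terry likelihood over a learned reward function. The sketch below shows a generic one-step update under that model; the linear reward features and all names, including preference_update, are illustrative assumptions, not code from this project.

    import numpy as np

    def preference_update(w, feats_a, feats_b, preferred_a, lr=0.1):
        """One gradient step on a Bradley-Terry preference likelihood.

        The reward is modeled as r(x) = w . phi(x). If the teacher prefers
        table setting A over B, we raise P(A > B) = sigmoid(r(A) - r(B)).
        """
        diff = feats_a - feats_b
        p_a = 1.0 / (1.0 + np.exp(-np.dot(w, diff)))  # current P(A preferred)
        label = 1.0 if preferred_a else 0.0
        # Gradient of the log-likelihood of the observed preference.
        return w + lr * (label - p_a) * diff

Demonstrations, critiques, and corrections can be folded into the same picture with their own likelihood models, which is one common way to formalize the observation that these differently shaped interactions carry similar information about the teacher's goal.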

Questions can be directed to Pat Callaghan.