The Sense of Agency in Assistive Robotics

Sense of agency (SoA) is a concept from cognitive science describing the feeling of control over one's actions and, through them, one's environment. SoA is an important factor in the experiences of assistive robot users, particularly those with disabilities, and can determine whether a technology is used or abandoned. Despite this, and despite the prevalence of SoA experiences in human-machine interaction, the assistive robotics literature rarely addresses the user's SoA explicitly. To create assistive systems that users will actually adopt, we must understand how autonomous assistance affects the user's SoA. In this work, we make progress toward understanding the subjective experience of SoA under different forms of autonomous assistance from a robot.

Maggie is the contact on this project.

See project on Assistive Manipulation Through Intent Recognition below for some related prior works.


Proactive Robot Learners that Ask for Help

While today's robot learning algorithms increasingly enable people to teach robots through diverse forms of feedback (e.g., demonstrations, language), they place the full burden on the human to understand exactly what the robot doesn't know and to provide the "right" data. We are working toward an alternative: robots should be proactive participants that bear some of that burden by recognizing when they don't know and asking for targeted help. Our goal is to give the robot a self-assessment of its own uncertainty, calibrated against the human feedback it receives online, so that it can strategically ask for help via natural language.
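To make the idea concrete, here is a minimal, hypothetical sketch of uncertainty-triggered help-asking. It is not our method: the entropy threshold, the calibration rule, and all constants are illustrative placeholders, but the structure (self-assess uncertainty, ask when it is high, recalibrate from feedback) mirrors the approach described above.

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a discrete action distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

class HelpSeekingPolicy:
    """Ask for help when the robot's own predictive uncertainty is high.

    The threshold is adjusted online: when human feedback reveals that a
    confident prediction was actually wrong, the robot lowers its bar for
    asking; when confidence was justified, it raises the bar slightly.
    All constants here are illustrative, not tuned values.
    """

    def __init__(self, threshold=0.5, step=0.05):
        self.threshold = threshold  # entropy above this triggers a query
        self.step = step            # calibration step size

    def should_ask(self, action_probs):
        return entropy(action_probs) > self.threshold

    def calibrate(self, action_probs, was_correct):
        # Only confident (non-querying) predictions carry calibration signal.
        if not self.should_ask(action_probs):
            self.threshold += self.step if was_correct else -self.step

policy = HelpSeekingPolicy(threshold=0.5)
print(policy.should_ask([0.25, 0.25, 0.25, 0.25]))  # high entropy -> True
print(policy.should_ask([0.97, 0.01, 0.01, 0.01]))  # low entropy  -> False
```

A uniform action distribution has entropy log(4) ≈ 1.39 and triggers a query, while a peaked one (≈0.17) does not; an online learner would replace the fixed step rule with a calibration procedure fit to the feedback actually received.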

Questions can be directed to Michelle.

Relevant publications

Coordination with Humans via Strategy Matching.
Michelle Zhao, Reid Simmons, Henny Admoni. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2022. pdf

The Role of Adaptation in Collective Human-AI Teaming.
Michelle Zhao, Fade Eadeh, Thuy-Ngoc Nguyen, Pranav Gupta, Henny Admoni, Cleotilde Gonzalez, Anita Williams Woolley. Topics in Cognitive Science. 2022. pdf

Teaching agents to understand teamwork: Evaluating and predicting collective intelligence as a latent variable via Hidden Markov Models.
Michelle Zhao, Fade Eadeh, Thuy-Ngoc Nguyen, Pranav Gupta, Henny Admoni, Cleotilde Gonzalez, Anita Williams Woolley. Computers in Human Behavior. 2022. pdf

Adapting Language Complexity for AI-Based Assistance.
Michelle Zhao, Reid Simmons, Henny Admoni. Workshop on Lifelong Learning and Personalization in Long-Term Human-Robot Interaction at HRI 2021. 2021. pdf


Using Theory of Mind to Improve How Robots Learn from Human Teachers

Learning from Demonstration (LfD) enables robots to learn new knowledge and skills from human teachers, but real-time LfD methods place substantial demands on those teachers. Multiple factors contribute, but especially problematic are the misunderstandings the teacher and learner hold about each other's (1) internal models and (2) communication. The goal of my research is to improve the efficacy of human teachers by enabling a robot learner to communicate in ways that correct these misunderstandings. To do so, I am exploring how a learner can use (second-order) Theory of Mind to model its teacher's misunderstandings and facilitate communicative teaching sessions.

Questions can be directed to Pat Callaghan.


Modeling Drivers’ Situational Awareness from Eye Gaze

Intelligent driving assistance can alert drivers to objects in their environment; however, such systems need a model of drivers' situational awareness (which aspects of the scene they are already aware of) to avoid unnecessary alerts. Eye gaze is the best external signal of a driver's awareness that does not require the driver to continually report which objects they have seen. However, a driver's gaze falling on an object does not guarantee that they have consciously registered it. We are therefore interested in modeling driver situational awareness from eye gaze over time.
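The distinction between "gaze fell on an object" and "driver is aware of it" can be illustrated with a toy model: awareness evidence accumulates with dwell time and decays once gaze moves away. This is a hypothetical sketch, not the project's model; the gain, decay rate, and saturating nonlinearity are all illustrative choices.

```python
import math

class GazeAwarenessModel:
    """Toy model of driver situational awareness from gaze over time.

    A single fixation does not imply conscious awareness, so awareness
    evidence builds with dwell duration and decays when gaze leaves the
    object. Constants are illustrative placeholders, not fitted values.
    """

    def __init__(self, gain=2.0, decay=0.3):
        self.gain = gain      # rate at which dwell time builds evidence
        self.decay = decay    # per-second exponential decay of evidence
        self.evidence = {}    # object id -> accumulated dwell evidence

    def update(self, fixated_object, dt):
        # Decay evidence for every tracked object, then credit the
        # currently fixated object with the new dwell interval.
        for obj in self.evidence:
            self.evidence[obj] *= math.exp(-self.decay * dt)
        if fixated_object is not None:
            self.evidence[fixated_object] = (
                self.evidence.get(fixated_object, 0.0) + self.gain * dt)

    def awareness(self, obj):
        # Squash accumulated evidence into [0, 1) so a brief glance
        # yields low awareness and a sustained dwell saturates near 1.
        return 1.0 - math.exp(-self.evidence.get(obj, 0.0))

model = GazeAwarenessModel()
model.update("pedestrian", dt=1.0)   # a full second of dwell
glance = GazeAwarenessModel()
glance.update("road_sign", dt=0.1)   # a 100 ms glance
```

Under these placeholder constants, a one-second dwell yields awareness ≈ 0.86 while a 100 ms glance yields only ≈ 0.18, capturing the intuition that an alert should still fire for briefly-glanced objects.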

Questions can be directed to Shreeya and Pranay.

Relevant publications

Characterizing Drivers’ Peripheral Vision via the Functional Field of View for Intelligent Driving Assistance.
Abhijat Biswas, Henny Admoni. IEEE Intelligent Vehicles Symposium (IV). 2023. pdf

Mitigating Causal Confusion in Driving Agents via Gaze Supervision.
Abhijat Biswas, BA Pardhi, Caleb Chuck, Jarrett Holtz, Scott Niekum, Henny Admoni, and Alessandro Allievi. Workshop on Aligning Robot Representation with Humans @ Conference on Robot Learning (CoRL). 2022. pdf

DReyeVR: Democratizing Virtual Reality Driving Simulation for Behavioural & Interaction Research.
Gustavo Silvera*, Abhijat Biswas*, Henny Admoni. Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI). 2022. pdf


Fostering Intelligent Social Support Networks

Social support is the perception or experience that one is loved and cared for by others, esteemed and valued, and part of a network of mutual assistance and obligation. Social support networks are linked to positive health outcomes and overall quality of life for older adults. Retirement, relocation, the loss of friends, and declines in cognitive and motor function can make it challenging for older adults to participate in social activities and form the meaningful connections that sustain robust support networks. AI-based technologies offer promising opportunities to reduce these barriers to social engagement for older adults and their support networks. As part of the AI-CARING project, I take an interdisciplinary, collaborative approach to the question: "How can we design AI assistance that fosters intelligent social support networks?"

Questions can be directed to Pragathi.


Teaching Robot Policies to Humans using Erroneous Examples

Human-robot collaboration works best when robot policies, i.e., the robot's behaviors in different situations, are made transparent to human users. Demonstration-based explanations of robot policies have been a focus of human-robot collaboration research, but no single teaching method has proven effective across domains, difficulty levels, learners, and other variables. We are interested in applying paradigms from traditional classrooms to teaching humans about robot policies, specifically erroneous examples, in which learners are shown incorrect responses to problems and correct them in order to recognize common pitfalls. We aim to advance the methods by which robots educate humans about their policies, facilitating trust and ease of use.

Questions can be directed to Rithika.

Relevant publications

INQUIRE: INteractive Querying for User-aware Informative REasoning.
Tesca Fitzgerald, Pallavi Koppol, Patrick Callaghan, Russell Quinlan Jun Hei Wong, Reid Simmons, Oliver Kroemer, Henny Admoni. Conference on Robot Learning (CoRL). 2022. pdf

Reasoning about Counterfactuals to Improve Human Inverse Reinforcement Learning.
Michael S. Lee, Henny Admoni, Reid Simmons. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2022. pdf

Metrics for Robot Proficiency Self-Assessment and Communication of Proficiency in Human-Robot Teams.
Adam Norton, Henny Admoni, Jacob Crandall, Tesca Fitzgerald, Alvika Gautam, Michael Goodrich, Amy Saretsky, Matthias Scheutz, Reid Simmons, Aaron Steinfeld, Holly Yanco. ACM Transactions on Human-Robot Interaction (THRI), 11(3). 2022. pdf

Machine Teaching for Human Inverse Reinforcement Learning.
Michael S. Lee, Henny Admoni, Reid Simmons. Frontiers in Robotics and AI. 2021. pdf

Interaction Considerations in Learning from Humans.
Pallavi Koppol, Henny Admoni, Reid Simmons. Proceedings of the 30th International Joint Conference on Artificial Intelligence (IJCAI). 2021. pdf

Understanding the Relationship between Interactions and Outcomes in Human-in-the-Loop Machine Learning.
Yuchen Cui *, Pallavi Koppol *, Henny Admoni, Scott Niekum, Reid Simmons, Aaron Steinfeld, Tesca Fitzgerald. Proceedings of the 30th International Joint Conference on Artificial Intelligence (IJCAI). 2021. pdf

Iterative Interactive Reward Learning.
Pallavi Koppol, Henny Admoni, Reid Simmons. Participatory Approaches to Machine Learning Workshop at ICML. 2020. pdf


Understanding Wound Care Robotic Design Needs

As the global population ages, both the physical and financial burdens of chronic wounds are expected to increase. Compounding this issue is the growing national nursing shortage, which directly affects the quality of care for older adults. While assistive robots have shown promise in various healthcare applications, their potential in wound care remains largely unexplored. A critical gap exists in understanding wound care from a human-centered robotics perspective, which is essential for developing effective assistive technologies in this domain.

Questions can be directed to Zulekha, Annika and Ellen.


(Archived Project) Assistive Manipulation Through Intent Recognition


An upper-body mobility limitation can severely impact a person's quality of life, preventing everyday tasks such as picking up a cup or opening a door. The U.S. Census Bureau has indicated that more than 8.2% of the U.S. population, or 19.9 million Americans, have upper-body limitations. Assistive robots offer a way for people with severe mobility impairments to complete daily tasks. However, current assistive robots primarily operate through teleoperation, which demands significant cognitive and physical effort from the user. We explore how these assistive robots can be improved with artificial intelligence to take an active role in helping their users. Drawing on our understanding of human verbal and nonverbal behaviors (such as speech and eye gaze) during robot teleoperation, we study how intelligent robots can predict human intent during a task and assist toward task completion. By employing human-sensitive shared autonomy, we aim to decrease operator fatigue and task duration when using assistive robots.
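The intent-prediction-plus-assistance loop described above can be sketched with a standard ingredient from the shared-autonomy literature: a Bayesian belief over candidate goals, updated from how well the user's control input aligns with each goal, followed by confidence-weighted blending of user and robot commands. This is an illustrative sketch under simplifying assumptions (2D commands, known goal directions, a made-up rationality constant), not the project's actual system.

```python
import math

def predict_goal(belief, user_dir, goal_dirs, beta=3.0):
    """One Bayesian update of a belief over candidate goals.

    belief:    dict goal -> prior probability
    user_dir:  2D unit vector of the user's joystick command
    goal_dirs: dict goal -> 2D unit vector pointing toward that goal
    beta:      rationality constant (illustrative, not a fitted value)

    Inputs better aligned with a goal's direction are treated as more
    likely under that goal (an exponential weighting on alignment).
    """
    posterior = {}
    for g, prior in belief.items():
        align = user_dir[0] * goal_dirs[g][0] + user_dir[1] * goal_dirs[g][1]
        posterior[g] = prior * math.exp(beta * align)
    z = sum(posterior.values())
    return {g: p / z for g, p in posterior.items()}

def blend(user_cmd, assist_cmd, confidence):
    """Shared autonomy: arbitrate user and robot commands by confidence.

    With low confidence the user's command dominates; as the goal
    prediction becomes certain, the robot contributes more assistance.
    """
    return tuple(confidence * a + (1 - confidence) * u
                 for u, a in zip(user_cmd, assist_cmd))

belief = predict_goal(
    {"cup": 0.5, "door": 0.5},            # uniform prior over two goals
    (1.0, 0.0),                           # user pushes straight toward cup
    {"cup": (1.0, 0.0), "door": (0.0, 1.0)})
```

After a single input aimed at the cup, the belief concentrates heavily on that goal; a fuller system would also fuse nonverbal signals such as eye gaze into the same belief update.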

Reuben, Ben and Maggie were the contacts on this project.

Relevant publications

Gaze Complements Control Input for Goal Prediction During Assisted Teleoperation.
Reuben M. Aronson, Henny Admoni. Robotics: Science and Systems. 2022. pdf supplement

Eye Gaze for Assistive Manipulation.
Reuben M. Aronson, Henny Admoni. HRI Pioneers workshop. 2020. pdf

Semantic Gaze Labeling for Human-Robot Shared Manipulation.
Reuben M. Aronson and Henny Admoni. Proceedings of the ACM Symposium on Eye Tracking Research and Applications (ETRA). 2019. pdf

Gaze for Error Detection During Human-Robot Shared Manipulation.
Reuben M. Aronson and Henny Admoni. Towards a Framework for Joint Action Workshop at RSS. 2018. pdf

Eye-Hand Behavior in Human-Robot Shared Manipulation.
Reuben M. Aronson, Thiago Santini, Thomas C. Kübler, Enkelejda Kasneci, Siddhartha Srinivasa, and Henny Admoni. Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI). 2018. pdf