Eye-gaze for Intelligent Driving Assistance

A video showing a driver's POV in a simulator with their gaze overlaid.

Using Programmable Light Curtains, an active sensor, we can obtain 3D information about the world more precisely, densely, and frequently than with LiDAR. We study how driver eye gaze can inform policies for placing and shaping the light curtain for autonomous and assisted driving.
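As a rough illustration of what a gaze-informed placement policy could look like, here is a minimal sketch that maps the driver's horizontal gaze angle to a per-column sensing depth for a light curtain. The function name, the gaze representation, and every parameter are assumptions made for illustration only; this is not the policy studied in this project.

```python
import numpy as np

def curtain_profile_from_gaze(gaze_yaw_deg, fov_deg=90.0, n_columns=64,
                              near=5.0, far=30.0, sigma_deg=10.0):
    """Hypothetical policy: pick a sensing depth for each angular column of a
    programmable light curtain based on where the driver is looking. Columns
    near the gaze direction are sensed at a far range, while peripheral
    columns fall back to a nearer safety range. All names and defaults are
    illustrative, not taken from the published system.
    """
    angles = np.linspace(-fov_deg / 2, fov_deg / 2, n_columns)
    # Gaussian weighting centered on the driver's gaze direction.
    weight = np.exp(-0.5 * ((angles - gaze_yaw_deg) / sigma_deg) ** 2)
    depths = near + weight * (far - near)
    return angles, depths

# Example: the driver is looking 15 degrees to the right of straight ahead.
angles, depths = curtain_profile_from_gaze(gaze_yaw_deg=15.0)
```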

Please contact Abhijat or Gustavo for more information.

Relevant publications

Characterizing Drivers’ Peripheral Vision via the Functional Field of View for Intelligent Driving Assistance.
Abhijat Biswas, Henny Admoni. IEEE Intelligent Vehicles Symposium (IV). 2023. pdf

Mitigating Causal Confusion in Driving Agents via Gaze Supervision.
Abhijat Biswas, BA Pardhi, Caleb Chuck, Jarrett Holtz, Scott Niekum, Henny Admoni, and Alessandro Allievi. Workshop on Aligning Robot Representation with Humans at the Conference on Robot Learning (CoRL). 2022. pdf

DReyeVR: Democratizing Virtual Reality Driving Simulation for Behavioural & Interaction Research.
Gustavo Silvera*, Abhijat Biswas*, Henny Admoni. Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI). 2022. pdf


Evaluation Tools for Social Robot Navigation

A video showing a social navigation simulator with different navigation algorithms all trying to complete the same navigation scenario.

We are actively maintaining SocNavBench, a benchmark for evaluating social navigation algorithms against each other in a realistic, consistent, and scalable way. Check out the GitHub page!

Please contact Abhijat or Gustavo for more information.

Relevant publications

SocNavBench: A Grounded Simulation Testing Framework for Social Navigation.
Abhijat Biswas, Allan Wang, Gustavo Silvera, Aaron Steinfeld, Henny Admoni. ACM Transactions on Human-Robot Interaction (THRI). 2021. pdf


Assistive Manipulation Through Intent Recognition


An upper body mobility limitation can severely impact a person's quality of life. Such limitations can prevent people from performing everyday tasks such as picking up a cup or opening a door. The U.S. Census Bureau has indicated that more than 8.2% of the U.S. population, or 19.9 million Americans, suffer from upper body limitations. Assistive robots offer a way for people with severe mobility impairment to complete daily tasks. However, current assistive robots primarily operate through teleoperation, which requires significant cognitive and physical effort from the user. We explore how these assistive robots can be improved with artificial intelligence to take an active role in helping their users. Drawing from our understanding of human verbal and nonverbal behaviors (like speech and eye gaze) during robot teleoperation, we study how intelligent robots can predict human intent during a task and assist toward task completion. We aim to develop technology to decrease operator fatigue and task duration when using assistive robots by employing human-sensitive shared autonomy.
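One common way to frame intent recognition, and roughly the spirit of the gaze-based goal inference in the publications below, is to maintain a Bayesian belief over candidate goals that is updated as gaze samples arrive. The sketch below is a minimal illustration under that framing; the function names and the simplified likelihood are assumptions, not the models from these papers.

```python
import numpy as np

def update_goal_belief(belief, gaze_dir, goal_dirs, kappa=5.0):
    """One Bayesian update of a belief over candidate goals from a single gaze
    sample. Assumes gaze is more likely to point toward the intended goal
    (a von Mises-style likelihood on the angle between the gaze ray and the
    direction to each goal). Illustrative only.
    """
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    likelihoods = []
    for g in goal_dirs:
        g = g / np.linalg.norm(g)
        likelihoods.append(np.exp(kappa * float(np.dot(gaze_dir, g))))
    posterior = belief * np.array(likelihoods)
    return posterior / posterior.sum()

# Two candidate goals, one to the left and one to the right of the user.
goal_dirs = [np.array([-1.0, 0.0, 1.0]), np.array([1.0, 0.0, 1.0])]
belief = np.array([0.5, 0.5])
belief = update_goal_belief(belief, np.array([0.9, 0.0, 1.0]), goal_dirs)
# Belief mass shifts toward the right-hand goal, which the gaze points at.
```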

Ben and Maggie are the contacts on this project.

Relevant publications

Gaze Complements Control Input for Goal Prediction During Assisted Teleoperation.
Reuben M. Aronson, Henny Admoni. Robotics: Science and Systems (RSS). 2022. pdf supplement

HARMONIC: A Multimodal Dataset of Assistive Human–Robot Collaboration.
Benjamin A. Newman*, Reuben M. Aronson*, Siddhartha S. Srinivasa, Kris Kitani, Henny Admoni. The International Journal of Robotics Research (IJRR). 2021. pdf

Inferring Goals with Gaze during Teleoperated Manipulation.
Reuben M. Aronson, Nadia AlMutlak, and Henny Admoni. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2021. pdf

Eye Gaze for Assistive Manipulation.
Reuben M. Aronson, Henny Admoni. HRI Pioneers workshop. 2020. pdf

Semantic Gaze Labeling for Human-Robot Shared Manipulation.
Reuben M. Aronson and Henny Admoni. Proceedings of the ACM Symposium on Eye Tracking Research and Applications (ETRA). 2019. pdf

Gaze for Error Detection During Human-Robot Shared Manipulation.
Reuben M. Aronson and Henny Admoni. Towards a Framework for Joint Action Workshop at RSS. 2018. pdf

Eye-Hand Behavior in Human-Robot Shared Manipulation.
Reuben M. Aronson, Thiago Santini, Thomas C. Kübler, Enkelejda Kasneci, Siddhartha Srinivasa, and Henny Admoni. Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI). 2018. pdf


Robot Self-Assessment (MURI)

A Baxter humanoid robot uses the gripper on its arm to stack a Jenga block on top of the tower that a human is constructing.

Even after autonomous agents have learned complex behaviors, we are unlikely to rely on them until we can reliably predict their behavior in new situations. We are therefore researching how agents can teach the nuances of their learned behaviors to humans through well-selected demonstrations. We first leverage inverse reinforcement learning and human learning strategies (e.g., scaffolding) to select demonstrations that are both informative and easily understood by humans. We then ask people to predict the agent's behavior in unseen environments, which tests their understanding and informs which demonstrations to show next in a closed-loop teaching process.

Please contact Michael Lee for more information.
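A minimal sketch of the demonstration-selection idea, assuming a simple linear-reward setting: the teacher keeps a finite set of reward hypotheses the learner might hold and shows the demonstration that, by virtue of being optimal, rules out as many incorrect hypotheses as possible. The data structures and names below are hypothetical stand-ins for the machine-teaching machinery described in the publications.

```python
import numpy as np

def select_demo(candidates, hypotheses, true_w):
    """Choose the demonstration whose optimality constraint eliminates the most
    incorrect reward hypotheses.

    candidates: list of (demo_features, [alternative_features, ...]) pairs,
                one per environment the teacher could demonstrate in.
    hypotheses: candidate reward weight vectors the learner might entertain.
    true_w:     the teacher's true reward weights.
    """
    def explains(w, demo, alts):
        # w "explains" a demo if the demo scores at least as well as every alternative.
        return all(np.dot(w, demo) >= np.dot(w, alt) for alt in alts)

    best, fewest_left = None, float("inf")
    for demo, alts in candidates:
        if not explains(true_w, demo, alts):
            continue  # the teacher never shows a demo that is suboptimal under the true reward
        left = sum(explains(w, demo, alts) for w in hypotheses)
        if left < fewest_left:
            best, fewest_left = (demo, alts), left
    return best
```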

Relevant publications

INQUIRE: INteractive Querying for User-aware Informative REasoning.
Tesca Fitzgerald, Pallavi Koppol, Patrick Callaghan, Russell Quinlan Jun Hei Wong, Reid Simmons, Oliver Kroemer, Henny Admoni. Conference on Robot Learning (CoRL). 2022. pdf

Reasoning about Counterfactuals to Improve Human Inverse Reinforcement Learning.
Michael S. Lee, Henny Admoni, Reid Simmons. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2022. pdf

Metrics for Robot Proficiency Self-Assessment and Communication of Proficiency in Human-Robot Teams.
Adam Norton, Henny Admoni, Jacob Crandall, Tesca Fitzgerald, Alvika Gautam, Michael Goodrich, Amy Saretsky, Matthias Scheutz, Reid Simmons, Aaron Steinfeld, Holly Yanco. ACM Transactions on Human-Robot Interaction (THRI), 11(3). 2022. pdf

Machine Teaching for Human Inverse Reinforcement Learning.
Michael S. Lee, Henny Admoni, Reid Simmons. Frontiers in Robotics and AI. 2021. pdf

Interaction Considerations in Learning from Humans.
Pallavi Koppol, Henny Admoni, Reid Simmons. Proceedings of the 30th International Joint Conference on Artificial Intelligence (IJCAI). 2021. pdf

Understanding the Relationship between Interactions and Outcomes in Human-in-the-Loop Machine Learning.
Yuchen Cui*, Pallavi Koppol*, Henny Admoni, Scott Niekum, Reid Simmons, Aaron Steinfeld, Tesca Fitzgerald. Proceedings of the 30th International Joint Conference on Artificial Intelligence (IJCAI). 2021. pdf

Iterative Interactive Reward Learning.
Pallavi Koppol, Henny Admoni, Reid Simmons. Participatory Approaches to Machine Learning Workshop at ICML. 2020. pdf


Audience-Aware Legibility

A view of a legible path which responds to an observer's view, versus one that moves without considering the observer.

Robots navigating shared spaces often need to communicate their goals so that nearby humans can anticipate the robot's future actions. These observers are typically scattered throughout the environment, and each has only a partial view of the robot and its movements. A path that communicates non-verbally with multiple observers must be sufficiently understandable to all of them. We aim to create an algorithm for intent-expressive, legible motion that accounts for the perspectives of multiple observers with limited fields of view and balances communication across them. Prior work in legible motion does not consider observers' limited fields of view, so communication effort can be wasted on motion the intended audience never sees; our observer-aware legibility addresses this. Our user studies show that audience-aware legibility will require accounting for even more nuanced tradeoffs to perform well across multiple observers with different constraints.
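For intuition, the sketch below scores a candidate path against multiple observers: each observer's inference about the robot's goal only counts at the timesteps when the robot is visible to that observer, and the per-observer scores are then averaged. The straight-line cost model and the circular visibility region are simplifications for illustration, not the formulation used in the publications below.

```python
import numpy as np

def goal_posterior(position, start, goals, beta=1.0):
    """Simple legibility-style inference: goals that the motion so far keeps
    cheap to continue toward receive higher probability (straight-line costs)."""
    scores = np.array([
        np.exp(-beta * (np.linalg.norm(position - g)
                        + np.linalg.norm(start - position)
                        - np.linalg.norm(start - g)))
        for g in goals
    ])
    return scores / scores.sum()

def audience_legibility(path, start, goals, true_goal_idx, observers, fov_radius):
    """Average the true-goal probability over the timesteps at which each
    observer can see the robot, then average across observers. A circular
    visibility region stands in for a real, angular field of view."""
    per_observer = []
    for obs in observers:
        visible = [p for p in path if np.linalg.norm(p - obs) <= fov_radius]
        if not visible:
            per_observer.append(0.0)  # motion this observer never sees conveys nothing
            continue
        probs = [goal_posterior(p, start, goals)[true_goal_idx] for p in visible]
        per_observer.append(float(np.mean(probs)))
    return float(np.mean(per_observer))

# Toy scenario: a straight path toward goal 0, two observers with limited visibility.
start = np.array([0.0, 0.0])
goals = [np.array([5.0, 2.0]), np.array([5.0, -2.0])]
path = [np.array([x, 0.4 * x]) for x in np.linspace(0.5, 5.0, 10)]
observers = [np.array([2.0, 3.0]), np.array([6.0, -3.0])]
score = audience_legibility(path, start, goals, 0, observers, fov_radius=4.0)
```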

Questions can be directed to Ada.

Relevant publications

Observer-Aware Legibility for Social Navigation.
Ada V. Taylor, Ellie Mamantov, Henny Admoni. Proceedings of IEEE International Conference on Robot & Human Interactive Communication (RO-MAN). 2022. pdf

Wait Wait, Nonverbally Tell Me: Legibility for Use in Restaurant Navigation.
Ada V. Taylor, Ellie Mamantov, and Henny Admoni. Workshop on Social Robot Navigation at RSS 2021. 2021. pdf

Now You See It: The Effect of Multiple Audience Perspectives on Path Legibility.
Ada V. Taylor, Henny Admoni. Workshop on AIxFood at IJCAI-PRICAI 2020. 2021.


Mutual Adaptation in Human-Robot Collaboration

Effective team collaboration involves many factors, including understanding capabilities, coordination, and communication. In collaborative tasks, people have different goals as well as different preferred strategies for accomplishing them. While robots adapt to human partners, humans are simultaneously adapting to the robot. Mutual adaptation occurs when both partners can infer each other's preferences and adapt their own behavior as necessary. How can a robot reason about how its actions will affect a human partner? How can actions be strategically selected to elicit specific behavior from a human partner? We aim to provide insight into team coordination and extend existing frameworks for human-robot collaboration by exploring the effects of communication, collaboration, and mutual coordination on team fluency and performance.

Questions can be directed to Michelle.

Relevant publications

Coordination with Humans via Strategy Matching.
Michelle Zhao, Reid Simmons, Henny Admoni. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2022. pdf

The Role of Adaptation in Collective Human-AI Teaming.
Michelle Zhao, Fade Eadeh, Thuy-Ngoc Nguyen, Pranav Gupta, Henny Admoni, Cleotilde Gonzalez, Anita Williams Woolley. Topics in Cognitive Science. 2022. pdf

Teaching agents to understand teamwork: Evaluating and predicting collective intelligence as a latent variable via Hidden Markov Models.
Michelle Zhao, Fade Eadeh, Thuy-Ngoc Nguyen, Pranav Gupta, Henny Admoni, Cleotilde Gonzalez, Anita Williams Woolley. Computers in Human Behavior. 2022. pdf

Adapting Language Complexity for AI-Based Assistance.
Michelle Zhao, Reid Simmons, Henny Admoni. Workshop on Lifelong Learning and Personalization in Long-Term Human-Robot Interaction at HRI 2021. 2021. pdf


Learning from Various Forms of Human-Provided Feedback

Humans can convey useful information to one another through a host of seemingly disparate modalities. For example, if I want to teach someone to set a breakfast table, I could demonstrate how to set the table; I could show the person two table settings and indicate which I prefer (i.e., express a preference); I could critique the person's attempted table setting by telling them it was “good” or “bad”; or I could go a step further and correct their attempt to show them where they erred. Each of these interactions differs in form, yet each conveys similar information that is useful for accomplishing my teaching goal. When robots join us in daily living, we will expect them to interpret and learn from our various forms of communication as we do: a feat that is effortless for us, but still challenging for them.

This line of research explores learning from different interaction types in the service of two primary questions: How might we ease the challenge of interpretation for the robot? And how might we ease the teaching burden placed upon the human?
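One way to make this concrete is to reduce every feedback type to a common currency, such as pairwise comparisons between behaviors, and then update a single reward model from all of them. The sketch below illustrates that idea under a simple linear-reward assumption; the encodings and names are hypothetical and do not describe our actual approach.

```python
import numpy as np

def to_comparisons(feedback):
    """Map heterogeneous feedback into pairs of feature vectors
    (preferred, unpreferred). The encodings are purely illustrative:
    a demonstration beats a sampled alternative, a chosen option beats a
    rejected one, a correction beats the original attempt, and a critique
    orders the behavior against a baseline."""
    pairs = []
    for item in feedback:
        kind = item["type"]
        if kind == "demonstration":
            pairs.append((item["demo"], item["alternative"]))
        elif kind == "preference":
            pairs.append((item["chosen"], item["rejected"]))
        elif kind == "correction":
            pairs.append((item["corrected"], item["original"]))
        elif kind == "critique":
            pair = (item["behavior"], item["baseline"])
            pairs.append(pair if item["label"] == "good" else pair[::-1])
    return pairs

def update_reward(weights, pairs, lr=0.1):
    """One pass of a Bradley-Terry style preference update on a linear reward
    model, shared across all feedback types."""
    for better, worse in pairs:
        better, worse = np.asarray(better, float), np.asarray(worse, float)
        p = 1.0 / (1.0 + np.exp(-(weights @ better - weights @ worse)))
        weights = weights + lr * (1.0 - p) * (better - worse)
    return weights

# Example: a single preference between two behaviors described by three features.
w = np.zeros(3)
fb = [{"type": "preference", "chosen": [1.0, 0.0, 0.0], "rejected": [0.0, 1.0, 0.0]}]
w = update_reward(w, to_comparisons(fb))
```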

Questions can be directed to Pat Callaghan.