Bradley Hayes

ORCID: 0000-0002-0723-1085
Research Areas
  • Robot Manipulation and Learning
  • Social Robot Interaction and HRI
  • Human-Automation Interaction and Safety
  • AI-based Problem Solving and Planning
  • Reinforcement Learning in Robotics
  • Robotic Path Planning Algorithms
  • Robotics and Automated Systems
  • Teleoperation and Haptic Systems
  • Explainable Artificial Intelligence (XAI)
  • AI in Service Interactions
  • Ethics and Social Impacts of AI
  • Anomaly Detection Techniques and Applications
  • Gaze Tracking and Assistive Technology
  • Multi-Agent Systems and Negotiation
  • Context-Aware Activity Recognition Systems
  • Face Recognition and Perception
  • Software Engineering Research
  • Psychology of Moral and Emotional Judgment
  • Data Stream Mining Techniques
  • Adversarial Robustness in Machine Learning
  • Assembly Line Balancing Optimization
  • Scheduling and Optimization Algorithms
  • Death Anxiety and Social Exclusion
  • Balance, Gait, and Falls Prevention
  • Fault Detection and Control Systems

University of Colorado Boulder
2017-2024

University of Colorado System
2020-2024

Christchurch Hospital
2022-2023

Canterbury District Health Board
2022-2023

Yale University
2013-2020

Massachusetts Institute of Technology
2016-2018

Shared expectations and mutual understanding are critical facets of teamwork. Achieving these in human-robot collaborative contexts can be especially challenging, as humans and robots are unlikely to share a common language to convey intentions, plans, or justifications. Even in cases where human co-workers can inspect a robot's control code, particularly when statistical methods are used to encode policies, there is no guarantee that meaningful insights into its behavior will be derived or that a user will be able to efficiently isolate the...

10.1145/2909824.3020233 article EN 2017-03-01

Collaboration between humans and robots requires solutions to an array of challenging problems, including multi-agent planning, state estimation, and goal inference. There already exist feasible solutions for many of these challenges, but they depend upon having rich task models. In this work we detail a novel type of Hierarchical Task Network we call the Clique/Chain HTN (CC-HTN), alongside an algorithm for autonomously constructing them from topological properties derived from graphical task representations. As the presented method...

10.1109/icra.2016.7487760 article EN 2016-05-01
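
The CC-HTN entry above describes building task hierarchies from topological motifs of a graphical task representation. As a rough illustration only (the paper's actual algorithm and data structures are not reproduced here), the sketch below greedily collapses strictly sequential "chain" motifs of a task graph into composite nodes using networkx; clique motifs (fully interchangeable steps) would be collapsed in the same way. All function names are hypothetical.

```python
# Illustrative sketch only: collapse "chain" motifs of a task precedence graph
# into composite nodes, yielding one level of task hierarchy per collapse.
import networkx as nx

def find_chain(g: nx.DiGraph):
    """Return a maximal run of nodes that each have exactly one predecessor and
    one successor (a strictly sequential sub-task), or None if none exists."""
    for n in g.nodes:
        if g.in_degree(n) == 1 and g.out_degree(n) == 1:
            chain = [n]
            succ = next(iter(g.successors(n)))
            # extend forward while the chain property holds (guard against cycles)
            while succ not in chain and g.in_degree(succ) == 1 and g.out_degree(succ) == 1:
                chain.append(succ)
                succ = next(iter(g.successors(succ)))
            if len(chain) > 1:
                return chain
    return None

def collapse(g: nx.DiGraph, nodes, label):
    """Replace `nodes` with a single composite node, preserving external edges."""
    g.add_node(label)
    for n in nodes:
        for p in list(g.predecessors(n)):
            if p not in nodes:
                g.add_edge(p, label)
        for s in list(g.successors(n)):
            if s not in nodes:
                g.add_edge(label, s)
    g.remove_nodes_from(nodes)

def build_hierarchy(g: nx.DiGraph):
    """Greedily collapse chains; each collapse introduces one abstract task node."""
    hierarchy = []
    while True:
        chain = find_chain(g)
        if not chain:
            break
        name = "seq(" + ",".join(map(str, chain)) + ")"
        hierarchy.append((name, chain))
        collapse(g, chain, name)
    return hierarchy
```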

We conducted a study to investigate trust in and dependence upon robotic decision support among nurses and doctors on a labor and delivery floor. There is evidence that suggestions provided by embodied agents can engender inappropriate degrees of reliance in humans. This concern represents a critical barrier that must be addressed before fielding intelligent hospital service robots that take initiative to coordinate patient care. In our experiment with nurses and physicians, we evaluated the subjects' levels of trust in and dependence on high- and low-quality...

10.1177/0278364918778344 article EN The International Journal of Robotics Research 2018-06-22

For robots to effectively collaborate with humans, it is critical to establish a shared mental model amongst teammates. In the case of incongruous models, catastrophic failures may occur unless mitigating steps are taken. To identify and remedy these potential issues, we propose a novel mechanism for enabling an autonomous system to detect model disparity between itself and a human collaborator, infer the source of the disagreement within the model, evaluate the potential consequences of this error, and, finally, provide human-interpretable...

10.1109/hri.2019.8673104 article EN ACM/IEEE International Conference on Human-Robot Interaction (HRI) 2019-03-01

Nonverbal behaviors increase task efficiency and improve collaboration between people and robots. In this paper, we introduce a model for generating nonverbal behavior and investigate whether the usefulness of nonverbal behaviors changes based on task difficulty. First, we detail a robot behavior model that accounts for top-down and bottom-up features of the scene when deciding how to perform deictic references (looking or pointing). Then, we analyze how the robot's behavior affects people's performance on a memorization task under differing difficulty levels. We manipulate difficulty in two ways:...

10.1109/hri.2016.7451733 article EN 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI) 2016-03-01

Effective robot collaborators that work with humans require an understanding of the underlying constraint network of any joint task to be performed. Discovering this network allows an agent to more effectively plan around co-worker actions or unexpected changes in its environment. To maximize the practicality of collaborative robots in real-world scenarios, human co-workers should not be assumed to have an abundance of either time, patience, or prior insight into the structure of a task when relied upon to provide the training required to impart proficiency and...

10.1109/iros.2014.6943191 article EN IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2014-09-01
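
The abstract above concerns discovering a joint task's underlying constraint network from limited training. Below is a minimal, hypothetical illustration of one way precedence constraints can be inferred from a few demonstrations (not the paper's method): declare "a before b" only when that ordering holds in every demonstration containing both actions.

```python
# Toy precedence-constraint inference from demonstrated action sequences.
from itertools import combinations

def infer_precedence(demonstrations):
    """demonstrations: list of action-name sequences.
    Returns a set of (earlier, later) ordering constraints."""
    actions = {a for demo in demonstrations for a in demo}
    constraints = set()
    for a, b in combinations(sorted(actions), 2):
        orderings = set()
        for demo in demonstrations:
            if a in demo and b in demo:
                orderings.add(demo.index(a) < demo.index(b))
        if orderings == {True}:        # a always came first
            constraints.add((a, b))
        elif orderings == {False}:     # b always came first
            constraints.add((b, a))
    return constraints

demos = [
    ["mount_board", "solder", "inspect"],
    ["mount_board", "inspect", "solder"],
]
# mount_board must precede both others; solder/inspect order is unconstrained
print(infer_precedence(demos))
```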

Gathering the most informative data from humans without overloading them remains an active research area in AI, and is closely coupled with the problems of determining how and when information should be communicated to others [12]. Current decision support systems (DSS) are still overly simple and static, and cannot adapt to the changing environments in which we expect to deploy modern systems [3], [4], [9], [11]. They are intrinsically limited in their ability to explain rationale versus merely listing future behaviors, limiting a human's...

10.1109/hri.2019.8673198 article EN ACM/IEEE International Conference on Human-Robot Interaction (HRI) 2019-03-01

In this paper, we present Rapid Activity Prediction Through Object-oriented Regression (RAPTOR), a scalable method for performing rapid, real-time activity recognition and prediction that achieves state-of-the-art classification accuracy on both a generic human activity dataset and two domain-specific collaborative robotics manufacturing datasets. Our approach is designed to be human-interpretable: it is able to provide explanations of its reasoning such that non-experts can better understand and improve the models. We incorporate...

10.1109/icra.2017.7989778 article EN 2017-05-01

In this work, we present an algorithm for improving collaborator performance on sequential manipulation tasks. Our agent-decoupled, optimization-based, task and motion planning approach merges considerations derived from both the symbolic and geometric domains. This results in the generation of supportive behaviors enabling a teammate to reduce cognitive and kinematic burdens during task completion. We describe our approach alongside representative use cases, with an evaluation based on solving complex circuit building...

10.1109/iros.2015.7354288 article EN IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2015-09-01

Nonverbal behaviors increase task efficiency and improve collaboration between people and robots. In this paper, we introduce a model for generating nonverbal behavior and investigate whether the usefulness of nonverbal behaviors changes based on task difficulty. First, we detail a robot behavior model that accounts for top-down and bottom-up features of the scene when deciding how to perform deictic references (looking or pointing). Then, we analyze how the robot's behavior affects people's performance on a memorization task under differing difficulty levels. We manipulate difficulty in two ways:...

10.5555/2906831.2906842 article EN Human-Robot Interaction 2016-03-07

Understanding human intentions is critical for safe and effective human-robot collaboration. While state-of-the-art methods for goal prediction utilize learned models to account for the uncertainty in human motion data, that data is inherently stochastic and high variance, hindering those models' utility for interactions requiring coordination, including safety-critical or close-proximity tasks. Our key insight is that robot teammates can deliberately configure shared workspaces prior to interaction in order to reduce the variance in human motion,...

10.1145/3610977.3635003 preprint EN 2024-03-10
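
The preprint above hinges on deliberately configuring the shared workspace before the interaction so that human motion becomes easier to predict. The toy sketch below captures that intuition with an assumed proxy objective, spreading candidate goal locations apart so reaching motions toward them are less ambiguous; the paper's actual formulation is not reproduced, and all names are illustrative.

```python
# Toy workspace-configuration search: place goal objects in the slots that
# maximize the minimum pairwise distance between goals (a crude proxy for
# making goal-directed reaching motions easier to disambiguate).
from itertools import combinations, permutations
import math

def min_pairwise_distance(layout):
    return min(math.dist(p, q) for p, q in combinations(layout, 2))

def best_layout(goal_objects, candidate_slots):
    """Assign each goal object to a distinct slot so goals are maximally spread out."""
    best, best_score = None, -1.0
    for slots in permutations(candidate_slots, len(goal_objects)):
        score = min_pairwise_distance(slots)
        if score > best_score:
            best, best_score = dict(zip(goal_objects, slots)), score
    return best, best_score

slots = [(0.0, 0.0), (0.1, 0.0), (0.5, 0.4), (0.0, 0.6)]   # candidate (x, y) placements
layout, score = best_layout(["bolt", "bracket", "driver"], slots)
print(layout, round(score, 3))
```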

Studies in HRI have shown that people follow and understand robot gaze. However, only a few studies to date have examined the time-course of a meaningful robot gaze, and none have directly investigated what type of gaze is best for eliciting the perception of attention. This paper investigates two types of gaze behaviors - short, frequent glances and long, less frequent stares - to find which behavior is better at conveying a robot's visual attention. We describe the development of a programmable research platform from MyKeepon toys, and use these robots to examine the effects of group...

10.1109/hri.2013.6483614 article EN ACM/IEEE International Conference on Human-Robot Interaction (HRI) 2013-03-01

Learning from Demonstration (LfD) enables novice users to teach robots new skills. However, many LfD methods do not facilitate skill maintenance and adaptation. Changes in task requirements or the environment often reveal a lack of resiliency and adaptability in the learned model. To overcome these limitations, we introduce ARC-LfD: an Augmented Reality (AR) interface for constrained Learning from Demonstration that allows users to maintain, update, and adapt learned skills. This is accomplished through in-situ visualizations of learned skills and constraint-based editing...

10.1109/icra48506.2021.9561844 article EN 2021-05-30

Robots that engage in social behaviors benefit greatly from possessing tools that allow them to manipulate the course of an interaction. Using a non-anthropomorphic robot and a simple counting game, we examine the effects that empathy-generating dialogue has on participant performance across three conditions. In the self-directed condition, the robot petitions the participant to reduce his or her performance so that it can avoid punishment. In the externally-directed condition, the robot petitions on behalf of its programmer. The control condition does not involve any petition for empathy. We find that participants show higher...

10.1109/roman.2014.6926262 article EN 2014-08-01

Learning from Demonstration (LfD) has enabled robots to rapidly gain new skills and capabilities by leveraging examples provided by novice human operators. While effective, this training mechanism presents the potential for sub-optimal demonstrations to negatively impact performance due to unintentional operator error. In this work we introduce Concept Constrained Learning from Demonstration (CC-LfD), a novel algorithm for robust skill learning and skill repair that incorporates annotations of conceptually-grounded constraints (in...

10.1109/iros.2018.8594133 article EN IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2018-10-01
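
CC-LfD, per the abstract above, folds conceptually grounded constraint annotations into demonstration-based skill learning to repair flawed demonstrations. The fragment below is a minimal sketch of that idea under assumed constraint names and a made-up keyframe representation, not the paper's implementation; it simply filters out keyframes that violate the active constraints before they would reach the usual LfD pipeline.

```python
# Minimal, hypothetical constraint-filtering step for keyframe demonstrations.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Keyframe:
    position: tuple          # end-effector (x, y, z); illustrative representation
    upright: bool            # e.g., whether a held cup stays upright

# constraint predicates: annotation name -> check that returns True when satisfied
CONSTRAINTS: Dict[str, Callable[[Keyframe], bool]] = {
    "above_table": lambda kf: kf.position[2] > 0.02,
    "keep_upright": lambda kf: kf.upright,
}

def repair_demonstration(keyframes: List[Keyframe], active: List[str]) -> List[Keyframe]:
    """Keep only keyframes that satisfy every active constraint annotation;
    the retained keyframes would then feed the usual LfD learning step."""
    checks = [CONSTRAINTS[name] for name in active]
    return [kf for kf in keyframes if all(check(kf) for check in checks)]

demo = [
    Keyframe((0.3, 0.1, 0.20), True),
    Keyframe((0.4, 0.1, 0.01), True),    # dips below the table: violates above_table
    Keyframe((0.5, 0.1, 0.15), False),   # tilts the cup: violates keep_upright
    Keyframe((0.6, 0.1, 0.18), True),
]
print(len(repair_demonstration(demo, ["above_table", "keep_upright"])))  # 2
```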

Augmenting a robot with the capacity to understand the activities of the people it collaborates with, in order to then label and segment those activities, allows it to generate an efficient and safe plan for performing its own actions. In this work, we introduce an online activity segmentation algorithm that can detect activity segments by processing a partial trajectory. We model transitions through activities as a hidden Markov model, which runs online by implementing a particle-filtering approach to infer the maximum a posteriori estimate of the activity sequence. This process is...

10.1109/icra.2019.8794054 article EN International Conference on Robotics and Automation (ICRA) 2019-05-01
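
The segmentation paper above models activity transitions as a hidden Markov model and tracks the maximum a posteriori activity sequence online with a particle filter. The sketch below shows that mechanism end to end with toy transition and emission models; these are placeholders, not the learned models from the paper.

```python
# Toy online activity filtering: HMM over activities, tracked with particles.
import random

ACTIVITIES = ["reach", "grasp", "retract"]
TRANSITION = {                       # P(next activity | current activity)
    "reach":   {"reach": 0.8, "grasp": 0.2, "retract": 0.0},
    "grasp":   {"reach": 0.0, "grasp": 0.7, "retract": 0.3},
    "retract": {"reach": 0.1, "grasp": 0.0, "retract": 0.9},
}

def emission_likelihood(activity, observation):
    """Toy emission model: how well an observed hand speed matches the activity."""
    expected = {"reach": 0.6, "grasp": 0.1, "retract": 0.5}[activity]
    return max(1e-6, 1.0 - abs(observation - expected))

def step(particles, observation):
    """Propagate each particle through the HMM, weight by the observation, resample."""
    moved = [random.choices(ACTIVITIES,
                            weights=[TRANSITION[p][a] for a in ACTIVITIES])[0]
             for p in particles]
    weights = [emission_likelihood(p, observation) for p in moved]
    return random.choices(moved, weights=weights, k=len(particles))

particles = ["reach"] * 200
for speed in [0.62, 0.55, 0.12, 0.09, 0.48]:              # partial hand-speed trajectory
    particles = step(particles, speed)
    estimate = max(set(particles), key=particles.count)   # per-step MAP activity label
    print(estimate)
```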

Justification is an important facet of policy explanation, a process for describing the behavior of an autonomous system. In human-robot collaboration, an agent can attempt to justify distinct decisions by offering explanations as to why those decisions are right or reasonable, leveraging a snapshot of its internal reasoning to do so. Without sufficient insight into a robot's decision-making process, it becomes challenging for users to trust or comply with those decisions, especially when they are viewed as confusing or contrary to the user's expectations...

10.15607/rss.2023.xix.002 article EN 2023-07-10

Studies in HRI have shown that people follow and understand robot gaze. However, only a few studies to date have examined the time-course of a meaningful robot gaze, and none have directly investigated what type of gaze is best for eliciting the perception of attention. This paper investigates two types of gaze behaviors - short, frequent glances and long, less frequent stares - to find which behavior is better at conveying a robot's visual attention. We describe the development of a programmable research platform from MyKeepon toys, and use these robots to examine the effects...

10.5555/2447556.2447685 article EN Human-Robot Interaction 2013-03-03

No abstract available.

10.1145/2685328.2685335 article FR AI Matters 2014-12-19

Goal-based navigation in public places is critical for independent mobility and for breaking barriers that exist for blind or visually impaired (BVI) people in a sight-centric society. Through this work we present a proof-of-concept system that autonomously leverages goal-based navigation assistance and perception to identify socially preferred seats and safely guide its user towards them in unknown indoor environments. The robotic system includes a camera, an IMU, vibrational motors, and a white cane, powered via a backpack-mounted laptop...

10.1109/iros47612.2022.9981219 article EN IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2022-10-23

Parkinson's disease (PD) is one of the most common neurodegenerative disorders of the central nervous system, and it predominantly affects patients' motor functions, movement, and stability. Monitoring movement in patients with PD is crucial for inferring state fluctuations throughout daily life activities, and it aids progression analysis and the assessment of how patients respond to medications over time. In this preliminary study, we examine the possibility of using smart glasses equipped with Inertial Measurement Unit (IMU) sensors...

10.23919/mipro57284.2023.10159926 article EN 2023-05-22

Task assignment and scheduling algorithms are powerful tools for autonomously coordinating large teams of robotic or AI agents. However, the decisions these systems make often rely on components designed by domain experts, which can be difficult for non-technical end-users to understand or modify to their own ends. In this paper we propose a preliminary design for a flexible natural language interface for a task assignment and scheduling system. The goal of our approach is both to grant users more control over the system's decision process, as well...

10.1609/aaaiss.v2i1.27665 article EN Proceedings of the AAAI Symposium Series 2024-01-22

There is potential for humans and autonomous robots to perform tasks collaboratively as teammates, achieving greater performance than either could on their own. Productive teamwork, however, requires a great deal of coordination, with human and robot agents maintaining well-aligned mental models regarding the shared task and each agent's role within it. Achieving this requires effective communication, especially as plans change due to shifts in the environment or in agents' knowledge. Our work leverages augmented reality...

10.1145/3610978.3638364 article EN 2024-03-11