Aaquib Tabrez

ORCID: 0000-0002-4622-2894
Research Areas
  • Human-Automation Interaction and Safety
  • Explainable Artificial Intelligence (XAI)
  • Reinforcement Learning in Robotics
  • AI in Service Interactions
  • Social Robot Interaction and HRI
  • Robot Manipulation and Learning
  • AI-based Problem Solving and Planning
  • Topic Modeling
  • Ethics and Social Impacts of AI
  • Fault Detection and Control Systems
  • Autonomous Vehicle Technology and Safety
  • Complex Systems and Decision Making
  • Adversarial Robustness in Machine Learning
  • Cognitive Science and Mapping
  • Semantic Web and Ontologies
  • Safety Warnings and Signage
  • Robotics and Automated Systems

University of Colorado Boulder
2019-2024

University of Colorado System
2020-2024

For robots to effectively collaborate with humans, it is critical to establish a shared mental model amongst teammates. In the case of incongruous models, catastrophic failures may occur unless mitigating steps are taken. To identify and remedy these potential issues, we propose a novel mechanism for enabling an autonomous system to detect disparity between itself and a human collaborator, infer the source of disagreement within the model, evaluate the consequences of this error, and, finally, provide human-interpretable...

10.1109/hri.2019.8673104 article EN 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI) 2019-03-01
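
As a loose illustration of detecting mental-model disparity, the sketch below compares a robot's Q-values with an estimate of the human's and ranks disagreements by how costly they would be under the robot's model. This is my own minimal Python framing, not the paper's implementation; all names and numbers are invented.

# Hypothetical sketch (not the paper's mechanism): flag states where the
# robot's policy and the policy induced by an estimate of the human's mental
# model disagree, and score the consequence of each disagreement.
import numpy as np

def greedy_policy(q):
    """Greedy action per state for a Q-table of shape (n_states, n_actions)."""
    return q.argmax(axis=1)

def find_disparities(q_robot, q_human_estimate):
    """Return (state, robot_action, human_action, value_loss) for every state
    where the two models disagree, sorted by estimated cost."""
    pi_r = greedy_policy(q_robot)
    pi_h = greedy_policy(q_human_estimate)
    disparities = []
    for s in np.flatnonzero(pi_r != pi_h):
        # Consequence of the human acting on their (flawed) model, judged by
        # the robot's own value estimates.
        loss = q_robot[s, pi_r[s]] - q_robot[s, pi_h[s]]
        disparities.append((int(s), int(pi_r[s]), int(pi_h[s]), float(loss)))
    return sorted(disparities, key=lambda d: -d[3])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    q_robot = rng.normal(size=(6, 3))
    q_human = q_robot + rng.normal(scale=0.8, size=(6, 3))  # noisy human model
    for s, a_r, a_h, loss in find_disparities(q_robot, q_human):
        print(f"state {s}: robot prefers {a_r}, human would pick {a_h}, "
              f"estimated cost {loss:.2f}")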

Gathering the most informative data from humans without overloading them remains an active research area in AI, and is closely coupled with the problems of determining how and when information should be communicated to others [12]. Current decision support systems (DSS) are still overly simple and static, and cannot adapt to the changing environments in which we expect to deploy modern systems [3], [4], [9], [11]. They are intrinsically limited in their ability to explain rationale versus merely listing future behaviors, limiting a human's...

10.1109/hri.2019.8673198 article EN 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI) 2019-03-01
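
One way to read the "how and when to communicate" problem is as a value-of-information threshold. The sketch below is a hypothetical illustration of that framing, not the decision support system described in the cited work; the update descriptions and numbers are invented.

# Illustrative sketch: push information to the operator only when the expected
# improvement in their decision outweighs the cost of interrupting them.
from dataclasses import dataclass

@dataclass
class Update:
    description: str
    expected_value_with_info: float     # predicted outcome if the operator sees it
    expected_value_without_info: float  # predicted outcome if they do not

def should_communicate(update: Update, interruption_cost: float) -> bool:
    value_of_information = (update.expected_value_with_info
                            - update.expected_value_without_info)
    return value_of_information > interruption_cost

if __name__ == "__main__":
    updates = [
        Update("minor battery drain", 0.52, 0.50),
        Update("teammate blocked by obstacle", 0.90, 0.40),
    ]
    for u in updates:
        verdict = "notify" if should_communicate(u, interruption_cost=0.1) else "suppress"
        print(f"{u.description}: {verdict}")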

Justification is an important facet of policy explanation, a process for describing the behavior of an autonomous system. In human-robot collaboration, an agent can attempt to distinctly justify its decisions by offering explanations as to why those decisions are right or reasonable, leveraging a snapshot of its internal reasoning to do so. Without sufficient insight into a robot's decision-making process, it becomes challenging for users to trust or comply with its decisions, especially when they are viewed as confusing or contrary to the user's expectations...

10.15607/rss.2023.xix.002 article EN 2023-07-10
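
To make the notion of justification concrete, here is a hypothetical contrastive-justification sketch that answers "why this action rather than the one I expected?" from the agent's own value estimates. It is illustrative only and not the approach of the cited paper; the actions and values are made up.

# Minimal template-based justification from a snapshot of the agent's values.
def justify(action_values: dict, chosen: str, expected_by_user: str) -> str:
    v_chosen = action_values[chosen]
    v_expected = action_values[expected_by_user]
    if v_chosen >= v_expected:
        return (f"I chose '{chosen}' instead of '{expected_by_user}' because I "
                f"estimate its value at {v_chosen:.2f} versus {v_expected:.2f}.")
    return (f"You may be right: '{expected_by_user}' scores {v_expected:.2f} "
            f"against {v_chosen:.2f} for '{chosen}'; I will re-plan.")

if __name__ == "__main__":
    values = {"take detour": 0.82, "continue straight": 0.35}
    print(justify(values, chosen="take detour", expected_by_user="continue straight"))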

Policy explanation, a process for describing the behavior of an autonomous system, plays a crucial role in effectively conveying an agent's decision-making rationale to human collaborators and is essential for safe real-world deployments. It becomes even more critical for effective human-robot teaming, where good communication allows teams to adapt and improvise successfully during uncertain situations by enabling value alignment within teams. This thesis proposal focuses on improving human-machine teaming...

10.1609/aaai.v38i21.30412 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2024-03-24

The car-to-driver handover is a critically important component of safe autonomous vehicle operation when the vehicle is unable to safely proceed on its own. Current implementations of this handover in automobiles take the form of a generic alarm indicating an imminent transfer of control back to the human driver. However, certain levels of autonomy may allow the driver to engage in other, non-driving related tasks prior to the handover, leading to substantial difficulty in quickly regaining situational awareness. This delay in re-orientation could potentially...

10.48550/arxiv.2005.04439 preprint EN other-oa arXiv (Cornell University) 2020-01-01
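
As a rough illustration of adapting the handover warning to driver engagement, rather than firing a single generic alarm, the sketch below scales the alert lead time with an assumed engagement measure. The constants and the engagement signal are invented, and this is not the system described in the cited preprint.

# Hypothetical adaptive lead time for a car-to-driver handover warning.
BASE_LEAD_TIME_S = 4.0        # minimum warning time for an attentive driver
EXTRA_TIME_PER_LEVEL_S = 6.0  # added time for a fully task-engaged driver

def handover_lead_time(engagement_level: float) -> float:
    """engagement_level in [0, 1]: 0 = eyes on road, 1 = absorbed in another task."""
    engagement_level = min(max(engagement_level, 0.0), 1.0)
    return BASE_LEAD_TIME_S + EXTRA_TIME_PER_LEVEL_S * engagement_level

def should_warn_now(time_to_handover_s: float, engagement_level: float) -> bool:
    return time_to_handover_s <= handover_lead_time(engagement_level)

if __name__ == "__main__":
    for engagement in (0.0, 0.5, 1.0):
        lead = handover_lead_time(engagement)
        print(f"engagement {engagement:.1f}: warn {lead:.1f}s before handover")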

Synchronizing expectations and knowledge about the state of the world is an essential capability for effective collaboration. For robots to effectively collaborate with humans and other autonomous agents, it is critical that they be able to generate intelligible explanations to reconcile differences between their understanding and that of their collaborators. In this work we present Single-shot Policy Explanation for Augmenting Rewards (SPEAR), a novel sequential optimization algorithm that uses semantic explanations derived from combinations...

10.48550/arxiv.2101.01860 preprint EN other-oa arXiv (Cornell University) 2021-01-01
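
The following sketch loosely illustrates reward augmentation for policy alignment in the spirit of SPEAR: greedily selecting candidate advice statements, each tied to a reward adjustment, until the policy induced by the human's believed rewards matches the target. It is a simplified stand-in, not the SPEAR algorithm itself; the statements and tables are fabricated.

# Illustrative greedy selection of reward-augmenting statements.
import numpy as np

def greedy(q):
    return q.argmax(axis=1)

def augment_until_aligned(q_target, q_human, candidates):
    """candidates: list of (label, delta) where delta has q_human's shape."""
    q = q_human.copy()
    chosen = []
    for _ in range(len(candidates)):
        if np.array_equal(greedy(q), greedy(q_target)):
            break
        # Pick the statement that resolves the most remaining disagreements.
        def gain(c):
            return int((greedy(q + c[1]) == greedy(q_target)).sum())
        best = max(candidates, key=gain)
        q = q + best[1]
        chosen.append(best[0])
        candidates = [c for c in candidates if c[0] != best[0]]
    return chosen, q

if __name__ == "__main__":
    q_target = np.array([[1.0, 0.0], [0.0, 1.0]])
    q_human = np.array([[0.2, 0.5], [0.0, 1.0]])  # disagrees in state 0
    candidates = [
        ("'the left shelf is unstable'", np.array([[1.0, 0.0], [0.0, 0.0]])),
        ("'the corridor is clear'", np.array([[0.0, 0.0], [0.5, 0.0]])),
    ]
    advice, _ = augment_until_aligned(q_target, q_human, candidates)
    print("communicate:", advice)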

Today it seems even more evident that social robots will have an integral role to play in real-world scenarios and will need to participate in the full richness of human society. Central to success as socially intelligent agents is ensuring effective interactions between humans and robots. In order to achieve this goal, researchers and engineers from both industry and academia must come together to share ideas, trials, failures, and successes. This workshop aims at creating a bridge for such a community to tackle current and future challenges...

10.1145/3434074.3444874 preprint EN 2021-03-07

Developments in human-robot teaming have given rise to significant interest in training methods that enable collaborative agents to safely and successfully execute tasks alongside human teammates. While effective, many existing methods are brittle to changes in the environment and do not account for the preferences of collaborators. This ineffectiveness is typically due to the complexity of deployment environments and unique personal preferences. These complications lead to behavior that can cause task failure or user discomfort. In this work, we...

10.1109/iros51168.2021.9636375 article EN 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2021-09-27
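
As one hypothetical way to account for collaborator preferences during training, the sketch below folds stated preferences into the reward as soft penalties, so the learned behavior trades off task success against user comfort. The feature names, thresholds, and weights are invented and do not come from the paper.

# Preference-aware reward shaping: subtract a penalty per violated preference.
from typing import Callable, Dict

def preference_aware_reward(task_reward: float,
                            features: Dict[str, float],
                            preferences: Dict[str, Callable[[float], float]],
                            weight: float = 1.0) -> float:
    """Each preference maps a behavior feature to a penalty (0 if satisfied)."""
    penalty = sum(pref(features.get(name, 0.0)) for name, pref in preferences.items())
    return task_reward - weight * penalty

if __name__ == "__main__":
    prefs = {
        "distance_to_human_m": lambda d: max(0.0, 0.8 - d),  # keep ~0.8 m away
        "end_effector_speed":  lambda v: max(0.0, v - 0.5),  # move slowly nearby
    }
    r = preference_aware_reward(task_reward=1.0,
                                features={"distance_to_human_m": 0.4,
                                          "end_effector_speed": 0.7},
                                preferences=prefs)
    print(f"shaped reward: {r:.2f}")  # 1.0 - (0.4 + 0.2) = 0.40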