Andrea Bajcsy

ORCID: 0000-0001-7969-9376
Research Areas
  • Reinforcement Learning in Robotics
  • Autonomous Vehicle Technology and Safety
  • Robot Manipulation and Learning
  • Robotic Path Planning Algorithms
  • Anomaly Detection Techniques and Applications
  • Gaussian Processes and Bayesian Inference
  • Social Robot Interaction and HRI
  • Adversarial Robustness in Machine Learning
  • Human-Automation Interaction and Safety
  • Ethics and Social Impacts of AI
  • Tactile and Sensory Interactions
  • Traffic and Road Safety
  • Robotics and Sensor-Based Localization
  • Explainable Artificial Intelligence (XAI)
  • Video Surveillance and Tracking Methods
  • Guidance and Control Systems
  • Formal Methods in Verification
  • Adaptive Dynamic Programming Control
  • Human Pose and Action Recognition
  • AI-based Problem Solving and Planning
  • Interactive and Immersive Displays
  • Software Reliability and Analysis Research
  • Safety Systems Engineering in Autonomy
  • Teleoperation and Haptic Systems
  • Target Tracking and Data Fusion in Sensor Networks

Carnegie Mellon University
2024

University of California, Berkeley
2018-2024

University of Maryland, College Park
2016-2017

In order to safely operate around humans, robots can employ predictive models of human motion. Unfortunately, these models cannot capture the full complexity of human behavior and necessarily introduce simplifying assumptions. As a result, predictions may degrade whenever the observed human behavior departs from the assumed structure, which can have negative implications for safety. In this paper, we observe that how rational human actions appear under a particular model can be viewed as an indicator of that model's ability to describe the human's current behavior. By...

10.15607/rss.2018.xiv.069 article EN 2018-06-26
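The idea of treating apparent rationality as an indicator of model confidence can be sketched as a Bayesian update over a rationality coefficient. This is a minimal illustration under simplified assumptions (a discrete action set, a hand-picked set of candidate confidence levels, and hypothetical action scores), not the paper's implementation:

```python
import numpy as np

# Sketch: infer "model confidence" beta from observed human actions under a
# Boltzmann-rational model. Action likelihood: P(a | beta) ∝ exp(beta * Q(a)),
# where Q scores how good action a looks under the robot's model of the human.
# (All numbers below are hypothetical, for illustration only.)

def boltzmann_likelihood(q_values, beta):
    """P(action | beta) over a discrete action set."""
    logits = beta * np.asarray(q_values)
    logits -= logits.max()                 # numerical stability
    p = np.exp(logits)
    return p / p.sum()

def update_confidence(prior, betas, q_values, observed_action):
    """One Bayesian update of the belief over beta given an observed action."""
    likelihoods = np.array(
        [boltzmann_likelihood(q_values, b)[observed_action] for b in betas]
    )
    posterior = prior * likelihoods
    return posterior / posterior.sum()

# Candidate confidence levels: beta near 0 means the model explains the human
# poorly (actions look random); large beta means it explains them well.
betas = np.array([0.1, 1.0, 10.0])
belief = np.ones(3) / 3.0

# Suppose the model scores three candidate human actions like this, and the
# human repeatedly picks the model's best action (index 2):
q = [0.1, 0.5, 1.0]
for _ in range(5):
    belief = update_confidence(belief, betas, q, observed_action=2)

print(belief)   # mass shifts toward high beta: the model is trusted
```

If the human instead kept choosing the lowest-scoring action, the belief would shift toward small beta, and a planner could respond by inflating its predictions accordingly.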

One of the most difficult challenges in robot motion planning is to account for the behavior of other moving agents, such as humans. Commonly, practitioners employ predictive models to reason about where other agents are going to move. Though there has been much recent work on building such models, no model is ever perfect: an agent can always move unexpectedly, in a way that is not predicted or is not assigned sufficient probability. In such cases, the robot may plan trajectories that appear safe but, in fact, lead to collision. Rather than trust a model's...

10.1177/0278364919859436 article EN The International Journal of Robotics Research 2019-06-24

Real-world autonomous vehicles often operate in a priori unknown environments. Since most of these systems are safety-critical, it is important to ensure that they operate safely even when faced with environmental uncertainty. Current safety analysis tools enable autonomous systems to reason about safety given full information about the state of the environment a priori. However, these tools do not scale well to scenarios where the environment is being sensed in real time, such as during navigation tasks. In this work, we propose a novel, real-time safety analysis method based on Hamilton-Jacobi...

10.1109/cdc40024.2019.9030133 article EN 2019-12-01
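The kind of safety analysis described above can be illustrated with a toy grid-based backward reachability computation. This is a discrete-time sketch in the spirit of Hamilton-Jacobi safety analysis, with a made-up 1D system and hand-picked grid parameters, not the paper's real-time method:

```python
import numpy as np

# Sketch: grid-based backward reachability for a 1D double integrator,
#   p' = v, v' = u, |u| <= U_MAX, with failure set p <= 0 (a wall).
# Iterate: a state is unsafe if it is in the failure set, or if EVERY
# control leads to an unsafe state. The complement is the safe set.
# (System, grid, and constants are illustrative assumptions.)

DT, U_MAX = 0.1, 1.0
ps = np.linspace(0.0, 2.0, 101)    # position grid
vs = np.linspace(-2.0, 2.0, 101)   # velocity grid
dp, dv = ps[1] - ps[0], vs[1] - vs[0]

def snap(p, v):
    """Snap a continuous state to the nearest grid indices (clamped)."""
    i = np.clip(np.round((p - ps[0]) / dp).astype(int), 0, len(ps) - 1)
    j = np.clip(np.round((v - vs[0]) / dv).astype(int), 0, len(vs) - 1)
    return i, j

P, Vg = np.meshgrid(ps, vs, indexing="ij")
unsafe = P <= 0.0                       # failure set: at the wall

for _ in range(200):                    # backward iteration to a fixed point
    doomed = np.ones_like(unsafe)
    for u in (-U_MAX, 0.0, U_MAX):      # sampled control set
        i, j = snap(P + Vg * DT, Vg + u * DT)
        doomed &= unsafe[i, j]          # this control also ends up unsafe
    new_unsafe = unsafe | doomed
    if (new_unsafe == unsafe).all():
        break
    unsafe = new_unsafe

def is_unsafe(p, v):
    i, j = snap(p, v)
    return bool(unsafe[i, j])

# Analytically, braking from speed |v| needs distance v^2 / (2 * U_MAX):
print(is_unsafe(0.1, -1.0))   # needs 0.5 m to stop, only 0.1 m left
print(is_unsafe(1.0, -0.5))   # needs 0.125 m, has 1.0 m
```

The computed unsafe set approximates the analytic condition p < v²/(2·U_MAX) for v < 0: states moving toward the wall too fast to brake. Real HJ tools solve a continuous-time PDE on such grids; the boolean fixed-point iteration here is a deliberately simplified stand-in.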

We focus on learning robot objective functions from human guidance: specifically, physical corrections provided by the person while the robot is acting. Objective functions are typically parametrized in terms of features, which capture aspects of the task that might be important. When the person intervenes to correct the robot's behavior, the robot should update its understanding of which features matter, how much, and in what way. Unfortunately, real users do not provide optimal corrections that isolate exactly what the robot was doing wrong. Thus, when receiving a correction, it...

10.1145/3171221.3171267 article EN 2018-02-26
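The feature-weight update behind this kind of learning can be sketched as follows. This is an illustration of the general idea (attributing a physical correction to the feature that changed most, rather than naively updating every weight), with toy features and trajectories; it is not the paper's exact algorithm:

```python
import numpy as np

# Sketch: the objective is a weighted sum of task features. When the human
# physically deforms the robot's trajectory, the feature-count differences
# indicate which aspects of the task to care about more.
# (Features, trajectories, and the step size alpha are hypothetical.)

def feature_counts(traj, features):
    """Total feature counts Phi(traj) = sum over states of f(x)."""
    return np.array([sum(f(x) for x in traj) for f in features])

def update_weights(theta, traj_old, traj_corrected, features, alpha=0.5,
                   one_at_a_time=True):
    delta = (feature_counts(traj_corrected, features)
             - feature_counts(traj_old, features))
    if one_at_a_time:
        # Attribute the correction to the single feature that changed most,
        # instead of spuriously updating every weight at once.
        mask = np.zeros_like(delta)
        mask[int(np.argmax(np.abs(delta)))] = 1.0
        delta = delta * mask
    return theta + alpha * delta

# Toy 1D example: features are "height above table" and "speed".
features = [lambda x: x[0], lambda x: abs(x[1])]
traj_old       = [(0.5, 1.0), (0.5, 1.0)]   # robot's plan: high and fast
traj_corrected = [(0.1, 1.0), (0.1, 1.0)]   # human pushed it lower

theta = np.zeros(2)
theta = update_weights(theta, traj_old, traj_corrected, features)
print(theta)   # only the height weight moves; speed is untouched
```

The `one_at_a_time` flag captures the contrast the abstract draws: a non-optimal human correction inevitably perturbs several features, so updating only the dominant one avoids unlearning weights the person never meant to change.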

Robust motion planning is a well-studied problem in the robotics literature, yet current algorithms struggle to operate scalably and safely in the presence of other moving agents, such as humans. This paper introduces a novel framework for robot navigation that accounts for high-order system dynamics and maintains safety in the presence of external disturbances and other robots. Our approach precomputes a tracking error margin for each robot, generates confidence-aware human predictions, and coordinates multiple robots with a sequential priority...

10.1109/icra.2019.8794457 article EN 2019 International Conference on Robotics and Automation (ICRA) 2019-05-01

An outstanding challenge with safety methods for human-robot interaction is reducing their conservatism while maintaining robustness to variations in human behavior. In this work, we propose that robots use confidence-aware game-theoretic models of human behavior when assessing the safety of a human-robot interaction. By treating the influence between the human and robot, as well as the human's rationality, as unobserved latent states, we succinctly infer the degree to which the human is following the robot's model. We leverage this inference to restrict the set of feasible controls during...

10.1109/icra46639.2022.9812048 article EN 2022 International Conference on Robotics and Automation (ICRA) 2022-05-23

Human input has enabled autonomous systems to improve their capabilities and achieve complex behaviors that are otherwise challenging to generate automatically. Recent work focuses on how robots can use such input, such as demonstrations or corrections, to learn intended objectives. These techniques assume that the human's desired objective already exists within the robot's hypothesis space. In reality, this assumption is often inaccurate: there will always be situations where the person might care...

10.1109/tro.2020.2971415 article EN IEEE Transactions on Robotics 2020-02-25

Contingency planning, wherein an agent generates a set of possible plans conditioned on the outcome of an uncertain event, is an increasingly popular way for robots to act under uncertainty. In this work, we take a game-theoretic perspective on contingency planning, tailored to multi-agent scenarios in which the robot's actions impact the decisions of other agents and vice versa. The resulting contingency game allows the robot...

10.1109/lra.2024.3354548 article EN IEEE Robotics and Automation Letters 2024-01-16

Hamilton-Jacobi (HJ) reachability is a rigorous mathematical framework that enables robots to simultaneously detect unsafe states and generate actions that prevent future failures. While in theory HJ reachability can synthesize safe controllers for nonlinear systems with nonconvex constraints, in practice it has been limited to hand-engineered collision-avoidance constraints modeled via low-dimensional state-space representations and first-principles dynamics. In this work, our goal is to generalize safety to robot failures that are hard...

10.48550/arxiv.2502.00935 preprint EN arXiv (Cornell University) 2025-02-02

Real-world autonomous systems often employ probabilistic predictive models of human behavior during planning to reason about their future motion. Since accurately modeling human behavior a priori is challenging, such models are often parameterized, enabling the robot to adapt its predictions based on observations by maintaining a distribution over the model parameters. Although this enables data and priors to improve the human model, the observation model is difficult to specify and may be incorrect, leading to erroneous state estimates that can degrade the safety of the robot's motion plan. In...

10.1109/icra40945.2020.9197257 article EN 2020-05-01

Designing human motion predictors which preserve safety while maintaining robot efficiency is an increasingly important challenge for robots operating in close physical proximity to people. One approach is to use robust control methods that safeguard against every possible future human state, leading to safe but often overly conservative plans. Alternatively, intent-driven predictors explicitly model how humans make decisions given their intent, enabling efficient plans. However, when the intent is misspecified, the robot might confidently plan unsafe...

10.1109/lra.2020.3028049 article EN IEEE Robotics and Automation Letters 2020-09-30

When a robot performs a task next to a human, physical interaction is inevitable: the human might push, pull, twist, or guide the robot. The state of the art treats these interactions as disturbances that the robot should reject or avoid. At best, robots respond safely while the human interacts; but after the human lets go, they simply return to their original behavior. We recognize that physical human–robot interaction (pHRI) is often intentional: the human intervenes on purpose because the robot is not doing the task correctly. In this article, we argue that when pHRI is intentional it is also informative: the robot can...

10.1177/02783649211050958 article EN The International Journal of Robotics Research 2021-10-25

Humans have internal models of robots (like their physical capabilities), the world (like what will happen next), and tasks (like a preferred goal). However, these internal models are not always perfect: for example, it is easy to underestimate a robot's inertia. Nevertheless, internal models change and improve over time as humans gather more experience. Interestingly, robot actions influence this experience and, therefore, how people's internal models change. In this work we take a step towards enabling robots to understand the internal models people have, and to leverage them to better assist people,...

10.1145/3568162.3578629 article EN 2023-03-09

When humans interact with robots, influence is inevitable. Consider an autonomous car driving near a human: the speed and steering of the autonomous car will affect how the human drives. Prior works have developed frameworks that enable robots to influence humans towards desired behaviors. But while these approaches are effective in the short term (i.e., the first few human-robot interactions), here we explore long-term influence (i.e., repeated interactions between the same human and robot). Our central insight is that humans are dynamic: people adapt to robots, and behaviors which are influential now may...

10.1109/icra48891.2023.10160321 article EN 2023-05-29

Predictive human models often need to adapt their parameters online from data. This raises previously ignored safety-related questions for robots relying on these models, such as what the model can learn online and how quickly it can learn it. For instance, when will the robot have a confident estimate of a nearby human's goal? Or, what parameter initializations guarantee that the robot can learn the human's preferences in a finite number of observations? To answer such analysis questions, our key idea is to model the robot's learning algorithm as a dynamical system where the state...

10.1109/icra48506.2021.9561652 article EN 2021-05-30
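Treating the learner itself as a dynamical system can be sketched concretely: the "state" is the robot's belief over a human's goal, the "dynamics" are Bayesian updates, and an analysis question becomes a property of the belief trajectory. This is a toy illustration with hypothetical goals, a noisily-rational observation model, and a scripted human; it is not the paper's analysis machinery:

```python
import numpy as np

# Sketch: belief b_k over candidate goals evolves under Bayesian updates.
# Analysis question from the abstract: after how many observations does the
# robot hold a confident goal estimate? (All quantities are illustrative.)

GOALS = np.array([[0.0, 1.0], [1.0, 0.0]])   # two candidate goals

def likelihood(pos, action, goal, beta=4.0):
    """Noisily-rational human: actions toward the goal are more likely."""
    to_goal = goal - pos
    to_goal = to_goal / (np.linalg.norm(to_goal) + 1e-9)
    return np.exp(beta * float(np.dot(action, to_goal)))

def belief_step(belief, pos, action):
    """The 'dynamics' of the learner: one Bayesian update of the belief."""
    lik = np.array([likelihood(pos, action, g) for g in GOALS])
    b = belief * lik
    return b / b.sum()

def steps_to_confidence(belief, threshold=0.95, max_steps=50):
    """Simulate a human walking toward goal 0 and count updates needed."""
    pos = np.array([0.0, 0.0])
    for k in range(1, max_steps + 1):
        action = np.array([0.0, 0.1])        # each step heads for goal 0
        belief = belief_step(belief, pos, action)
        pos = pos + action
        if belief.max() >= threshold:
            return k, belief
    return max_steps, belief

k, b = steps_to_confidence(np.array([0.5, 0.5]))
print(k, b)   # observations needed before the belief is confident
```

Because the belief update is just a map from one belief state to the next, questions like "which initializations reach confidence within N observations" become reachability questions over this system, which is the framing the abstract describes.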

Learning robot objective functions from human input has become increasingly important, but state-of-the-art techniques assume that the human's desired objective lies within the robot's hypothesis space. When this is not true, even methods that keep track of uncertainty over the objective fail, because they reason about which hypothesis might be correct, and not about whether any of the hypotheses are correct. We focus specifically on learning from physical corrections during task execution, where not having a rich enough hypothesis space leads to the robot updating its objective in ways...

10.48550/arxiv.1810.05157 preprint EN arXiv (Cornell University) 2018-01-01
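The misspecification test this abstract hints at can be sketched as asking how well the best available hypothesis explains an observed correction. This is a simplified illustration with a logistic stand-in for the correction likelihood and made-up feature vectors, not the paper's formulation:

```python
import numpy as np

# Sketch: under a Boltzmann-style model, a correction is likely under
# weights theta if it improves the theta-weighted features. If NO hypothesis
# makes the observed correction likely, the robot should doubt its hypothesis
# space rather than force an update. (All quantities are hypothetical.)

def correction_likelihood(delta_features, theta, beta=1.0):
    """P(correction | theta): logistic in the change of theta-reward,
    so 0.5 means the hypothesis is indifferent to the correction."""
    score = beta * float(np.dot(theta, delta_features))
    return 1.0 / (1.0 + np.exp(-score))

# Two hypotheses: the objective cares about feature 0, or about feature 1.
hypotheses = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]

# The human's correction only changed feature 2, which no hypothesis models:
delta = np.array([0.0, 0.0, -0.8])
conf = max(correction_likelihood(delta, th) for th in hypotheses)
print(conf)   # 0.5: even the best hypothesis cannot explain the correction
```

A low maximum likelihood across all hypotheses signals that the correction lies outside the hypothesis space, so the robot can scale down (or skip) its objective update instead of learning the wrong thing.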