Patricia K. Kahr

ORCID: 0000-0002-1368-8698
Research Areas
  • Explainable Artificial Intelligence (XAI)
  • Ethics and Social Impacts of AI
  • Decision-Making and Behavioral Economics
  • Human-Automation Interaction and Safety
  • Online Learning and Analytics
  • Intelligent Tutoring Systems and Adaptive Learning

Eindhoven University of Technology
2023-2025

Ethical considerations, including transparency, play an important role when using artificial intelligence (AI) in education. Explainable AI has been coined as a solution to provide more insight into the inner workings of algorithms. However, carefully designed user studies on how to design explanations for AI in education are still limited. The current study aimed to identify the effect of an automated essay scoring system on students' trust and motivation. We conducted a needs-elicitation study with students in combination with guidelines...

10.18608/jla.2023.7801 article EN Journal of Learning Analytics 2023-03-09

Humans increasingly interact with AI systems, and successful interactions rely on individuals trusting such systems (when appropriate). Considering that trust is fragile and often cannot be restored quickly, we focus on how trust develops over time in a human-AI-interaction scenario. In a 2x2 between-subject experiment, we test how model accuracy (high vs. low) and type of explanation (human-like vs. not) affect trust over time. We study a complex decision-making task in which participants estimate jail time for 20 criminal law cases with AI advice. Results show...

10.1145/3581641.3584058 article EN 2023-03-27

Complementing human decision-making with AI advice offers substantial advantages. However, humans do not always trust AI appropriately and are overly sensitive to incidental errors, even in cases of overall good performance. Today's research has yet to uncover the underlying aspects of trust decline and recovery over time in repeated human-AI interactions. Our work investigates the consequences of error on (self-reported) trust and participants' reliance on AI advice. Results from our experiment, where 208 participants evaluated 14...

10.1145/3640543.3645167 article EN cc-by 2024-03-18

People are increasingly interacting with AI systems, but successful interactions depend on people trusting these systems only when appropriate. Since neither blindly gaining trust in AI advice nor fully restoring lost trust after mistakes is warranted, we seek to better understand the development of trust and reliance in sequential human-AI interaction scenarios. In a 2x2 between-subject simulated experiment, we tested how model accuracy (high vs. low) and explanation type (human-like vs. abstract) affect trust and reliance over repeated interactions....

10.1145/3686164 article EN ACM Transactions on Interactive Intelligent Systems 2024-08-02

Ethical considerations, including transparency, play an important role when using artificial intelligence (AI) in education. Explainable AI has been coined as a solution to provide more insight into the inner workings of algorithms. However, carefully designed user studies on how to design explanations for AI in education are still limited. The current study aimed to identify the effect of an automated essay scoring system on students' trust and motivation. We conducted a needs-elicitation study with students in combination with guidelines...

10.31234/osf.io/tgpf4 preprint EN 2023-01-23

Humans increasingly interact with AI systems, and successful interactions rely on individuals trusting such systems (when appropriate). Considering that trust is fragile and often cannot be restored quickly, we focus on how trust develops over time in a human-AI-interaction scenario. In a 2x2 between-subject experiment, we test how model accuracy (high vs. low) and type of explanation (human-like vs. not) affect trust over time. We study a complex decision-making task in which participants estimate jail time for 20 criminal law cases with AI advice. Results...

10.31234/osf.io/9zr3u preprint EN 2023-02-28

Complementing human decision-making with AI advice offers substantial advantages. However, humans do not always trust AI appropriately and are overly sensitive to incidental errors, even in cases of overall good performance. Today's research has yet to uncover the underlying aspects of trust decline and recovery over time in repeated human-AI interactions. Our work investigates the consequences of error on (self-reported) trust and participants' reliance on AI advice. Results from our experiment, where 208 participants evaluated 14...

10.31219/osf.io/5awrs preprint EN 2024-01-26