Jakob Schoeffer

ORCID: 0000-0003-3705-7126
About
Research Areas
  • Ethics and Social Impacts of AI
  • Explainable Artificial Intelligence (XAI)
  • Decision-Making and Behavioral Economics
  • Psychology of Moral and Emotional Judgment
  • Adversarial Robustness in Machine Learning
  • Game Theory and Applications
  • Electoral Systems and Political Participation
  • Opinion Dynamics and Social Influence
  • Constraint Satisfaction and Optimization
  • Innovation, Sustainability, Human-Machine Systems
  • Mental Health Research Topics
  • Intelligent Tutoring Systems and Adaptive Learning
  • Recommender Systems and Techniques
  • Manufacturing Process and Optimization
  • Artificial Intelligence in Healthcare and Education
  • Privacy, Security, and Data Protection
  • Industrial Vision Systems and Defect Detection
  • AI-based Problem Solving and Planning
  • Auction Theory and Applications
  • Speech and Dialogue Systems
  • Privacy-Preserving Technologies in Data
  • Advanced Statistical Process Monitoring

University of Groningen
2025

Kandilli Observatory and Earthquake Research Institute
2025

The University of Texas at Austin
2022-2024

Karlsruhe Institute of Technology
2021-2023

In this work, we study the effects of feature-based explanations on the distributive fairness of AI-assisted decisions, specifically focusing on the task of predicting occupations from short textual bios. We also investigate how any effects are mediated by humans' fairness perceptions and their reliance on AI recommendations. Our findings show that explanations influence fairness perceptions, which, in turn, relate to humans' tendency to adhere to AI recommendations. However, we see that such explanations do not enable humans to discern correct from incorrect recommendations. Instead, they may affect reliance irrespective of correctness...

10.1145/3613904.3642621 article EN other-oa 2024-05-11

Automated decision systems (ADS) are increasingly used for consequential decision-making. These systems often rely on sophisticated yet opaque machine learning models, which do not allow for an understanding of how a given decision was arrived at. In this work, we conduct a human subject study to assess people's perceptions of informational fairness (i.e., whether people think they receive adequate information on and explanation of the process and its outcomes) and the trustworthiness of an underlying ADS when provided with varying types of information about...

10.1145/3531146.3533218 article EN 2022 ACM Conference on Fairness, Accountability, and Transparency 2022-06-20

In this critical survey, we analyze typical claims on the relationship between explainable AI (XAI) and fairness to disentangle the multidimensional relationship between these two concepts. Based on a systematic literature review and subsequent qualitative content analysis, we identify seven archetypal claims from 175 scientific articles on the alleged fairness benefits of XAI. We present crucial caveats with respect to these claims and provide an entry point for future discussions around the potentials and limitations of XAI for specific fairness desiderata. Importantly, we notice that claims are...

10.1145/3630106.3658990 article EN 2024 ACM Conference on Fairness, Accountability, and Transparency 2024-06-03

In AI-assisted decision-making, explanations are often touted for their alleged potential to enable humans to mitigate algorithmic unfairness. In this work, we study the effects of feature-based explanations—which highlight the features that were used for an AI prediction—on human reliance on AI recommendations and the implications for the (un)fairness of decisions. We specifically focus on the task of predicting occupations from textual biographies via a series of randomized online experiments (n=1207). We find that explanations do not enable humans to distinguish correct...

10.2139/ssrn.5029746 preprint EN 2025-01-01

In AI-assisted decision-making, a central promise of having a human in the loop is that they should be able to complement the AI system by overriding its wrong recommendations. In practice, however, we often see that humans cannot assess the correctness of AI recommendations and, as a result, adhere to wrong or override correct advice. Different ways of relying on AI recommendations have immediate, yet distinct, implications for decision quality. Unfortunately, reliance and decision quality are inappropriately conflated in the current literature...

10.31219/osf.io/cekm9_v2 preprint EN 2025-02-06

It is often argued that one goal of explaining automated decision systems (ADS) is to facilitate positive perceptions (e.g., of fairness or trustworthiness) of users towards such systems. This viewpoint, however, makes the implicit assumption that a given ADS is fair and trustworthy to begin with. If the ADS issues unfair outcomes, then one might expect that explanations regarding the system's workings will reveal its shortcomings and, hence, lead to a decrease in positive perceptions. Consequently, we suggest that it is more meaningful to evaluate...

10.1145/3462204.3481742 preprint EN 2021-10-22

Automated decision systems (ADS) have become ubiquitous in many high-stakes domains. Those typically involve sophisticated yet opaque artificial intelligence (AI) techniques that seldom allow for full comprehension of their inner workings, particularly for affected individuals. As a result, ADS are prone to deficient oversight and calibration, which can lead to undesirable (e.g., unfair) outcomes. In this work, we conduct an online study with 200 participants to examine people's perceptions of fairness...

10.24251/hicss.2022.134 article EN Proceedings of the Annual Hawaii International Conference on System Sciences 2022-01-01

Automated decision systems are increasingly used for consequential decision-making -- for a variety of reasons. These systems often rely on sophisticated yet opaque models, which do not (or hardly) allow for understanding how or why a given decision was arrived at. This is not only problematic from a legal perspective, but non-transparent systems are also prone to yield undesirable (e.g., unfair) outcomes because their sanity is difficult to assess and calibrate in the first place. In this work, we conduct a study to evaluate different attempts at explaining...

10.48550/arxiv.2103.04757 preprint EN cc-by arXiv (Cornell University) 2021-01-01

In AI-assisted decision-making, a central promise of having a human in the loop is that they should be able to complement the AI system by overriding its wrong recommendations. In practice, however, we often see that humans cannot assess the correctness of AI recommendations and, as a result, adhere to wrong or override correct advice. Different ways of relying on AI recommendations have immediate, yet distinct, implications for decision quality. Unfortunately, reliance and decision quality are inappropriately conflated in the current literature...

10.31219/osf.io/cekm9 preprint EN 2024-08-25

In the wake of increasing political extremism, online platforms have been criticized for contributing to polarization. One line of criticism has focused on echo chambers and the recommended content served to users by these platforms. In this work, we introduce the fair exposure problem: given limited intervention power of the platform, the goal is to enforce balance in the spread of content (e.g., news articles) among two groups of users through constraints similar to those imposed by the Fairness Doctrine in the United States in the past. Groups are characterized...

10.1609/aaai.v37i10.26404 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2023-06-26
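To give a concrete flavor of a balance constraint of this kind, here is a toy sketch. It is not the paper's actual formulation: the article names, per-group reach numbers, and the brute-force search are all invented for illustration.

```python
from itertools import combinations

# Hypothetical reach numbers: if promoted, each article reaches this many
# users in group A and group B respectively (values invented for illustration).
ARTICLES = {
    "a1": (30, 5),
    "a2": (10, 25),
    "a3": (20, 20),
    "a4": (5, 35),
}

def fair_boost(articles, k, tol):
    """Pick k articles to promote, maximizing total reach while keeping
    exposure of the two groups balanced: |reach_A - reach_B| <= tol."""
    best, best_reach = None, -1
    for subset in combinations(articles, k):
        reach_a = sum(articles[s][0] for s in subset)
        reach_b = sum(articles[s][1] for s in subset)
        if abs(reach_a - reach_b) <= tol and reach_a + reach_b > best_reach:
            best, best_reach = subset, reach_a + reach_b
    return best, best_reach

print(fair_boost(ARTICLES, 2, 10))  # the balanced pair with maximal total reach
```

Without the balance constraint, the selection would simply maximize reach; the constraint trades some reach for parity between the two groups, which is the core tension the fair exposure problem formalizes.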

In this critical survey, we analyze typical claims on the relationship between explainable AI (XAI) and fairness to disentangle the multidimensional relationship between these two concepts. Based on a systematic literature review and subsequent qualitative content analysis, we identify seven archetypal claims from 175 scientific articles on the alleged fairness benefits of XAI. We present crucial caveats with respect to these claims and provide an entry point for future discussions around the potentials and limitations of XAI for specific fairness desiderata. Importantly, we notice that claims are...

10.48550/arxiv.2310.13007 preprint EN cc-by-nc-nd arXiv (Cornell University) 2023-01-01

Algorithmic decision systems are increasingly used in areas such as hiring, school admission, or loan approval. Typically, these systems rely on labeled data for training a classification model. However, in many scenarios, ground-truth labels are unavailable, and instead we only have access to imperfect labels as the result of (potentially biased) human-made decisions. Despite being imperfect, historical decisions often contain some useful information on the unobserved true labels. In this paper, we focus on scenarios where...

10.1145/3460112.3471950 preprint EN 2021-06-28

Computer-generated imagery of car models has become an indispensable part of car manufacturers' advertisement concepts. They are, for instance, used in car configurators to offer customers the possibility to configure their car online according to personal preferences. However, human-led quality assurance faces the challenge of keeping up with high-volume visual inspections due to the car models' increasing complexity. Even though the application of machine learning to many visual inspection tasks has demonstrated great success, its need for large labeled data...

10.24251/hicss.2022.153 article EN Proceedings of the Annual Hawaii International Conference on System Sciences 2022-01-01

It is known that recommendations of AI-based systems can be incorrect or unfair. Hence, it is often proposed that a human be the final decision-maker. Prior work has argued that explanations are an essential pathway to help human decision-makers enhance decision quality and mitigate bias, i.e., facilitate human-AI complementarity. For these benefits to materialize, explanations should enable humans to appropriately rely on the AI and override the algorithmic recommendation when necessary to increase the distributive fairness of decisions. The...

10.48550/arxiv.2204.13156 preprint EN cc-by-nc-nd arXiv (Cornell University) 2022-01-01

In AI-assisted decision-making, a central promise of putting a human in the loop is that they should be able to complement the AI system by adhering to its correct and overriding its mistaken recommendations. In practice, however, we often see that humans tend to over- or under-rely on AI recommendations, meaning that they either adhere to wrong or override correct recommendations. Such reliance behavior is detrimental to decision-making accuracy. In this work, we articulate and analyze the interdependence between reliance behavior and accuracy, which has been largely neglected in prior work. We also...

10.48550/arxiv.2304.08804 preprint EN cc-by-nc-nd arXiv (Cornell University) 2023-01-01

Automated decision systems (ADS) are increasingly used for consequential decision-making. These systems often rely on sophisticated yet opaque machine learning models, which do not allow for an understanding of how a given decision was arrived at. This is not only problematic from a legal perspective, but non-transparent systems are also prone to yield unfair outcomes because their sanity is challenging to assess and calibrate in the first place -- which is particularly worrisome for human decision-subjects. Based on this observation and building upon existing...

10.1145/3491101.3503811 article EN CHI Conference on Human Factors in Computing Systems Extended Abstracts 2022-04-27

In AI-assisted decision-making, a central promise of having a human in the loop is that they should be able to complement the AI system by overriding its wrong recommendations. In practice, however, we often see that humans cannot assess the correctness of AI recommendations and, as a result, adhere to wrong or override correct advice. Different ways of relying on AI recommendations have immediate, yet distinct, implications for decision quality. Unfortunately, reliance and decision quality are inappropriately conflated in the current literature...

10.31219/osf.io/cekm9_v1 preprint EN 2024-08-25

A critical factor in the success of many decision support systems is the accurate modeling of user preferences. Psychology research has demonstrated that users often develop their preferences during the elicitation process, highlighting the pivotal role of system-user interaction in developing personalized systems. This paper introduces a novel approach, combining Large Language Models (LLMs) with Constraint Programming to facilitate interactive decision support. We study this hybrid framework through the lens of meeting...

10.1145/3685053 article EN ACM Transactions on Interactive Intelligent Systems 2024-08-01
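A minimal sketch of the hybrid idea follows. This is illustrative only, not the paper's implementation: the slot grid, the example constraints, and the assumption that an LLM would translate user utterances into such predicates are all invented here.

```python
# Candidate meeting slots: (day, start hour); a made-up toy calendar.
SLOTS = [(day, hour) for day in ("Mon", "Tue", "Wed") for hour in range(9, 17)]

# Constraints as predicates over a slot. In an LLM+CP pipeline, these
# would be derived from natural-language preferences ("not on Monday",
# "late morning"); here they are hard-coded for illustration.
CONSTRAINTS = [
    lambda s: s[0] != "Mon",
    lambda s: 10 <= s[1] <= 11,
]

def feasible(slots, constraints):
    """Brute-force constraint satisfaction: keep slots meeting all constraints."""
    return [s for s in slots if all(c(s) for c in constraints)]

print(feasible(SLOTS, CONSTRAINTS))
```

The interactive aspect the paper highlights would enter as a loop: each new user utterance adds or revises a predicate, and the solver re-filters the candidate slots so the user can react to the remaining options.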

Automated decision systems (ADS) have become ubiquitous in many high-stakes domains. Those typically involve sophisticated yet opaque artificial intelligence (AI) techniques that seldom allow for full comprehension of their inner workings, particularly for affected individuals. As a result, ADS are prone to deficient oversight and calibration, which can lead to undesirable (e.g., unfair) outcomes. In this work, we conduct an online study with 200 participants to examine people's perceptions of fairness...

10.48550/arxiv.2109.05792 preprint EN cc-by-nc-nd arXiv (Cornell University) 2021-01-01

In this work, we study the effects of feature-based explanations on the distributive fairness of AI-assisted decisions, specifically focusing on the task of predicting occupations from short textual bios. We also investigate how any effects are mediated by humans' fairness perceptions and their reliance on AI recommendations. Our findings show that explanations influence fairness perceptions, which, in turn, relate to humans' tendency to adhere to AI recommendations. However, we see that such explanations do not enable humans to discern correct from incorrect recommendations. Instead, they may affect reliance irrespective of correctness...

10.48550/arxiv.2209.11812 preprint EN cc-by-nc-nd arXiv (Cornell University) 2022-01-01

A critical factor in the success of decision support systems is the accurate modeling of user preferences. Psychology research has demonstrated that users often develop their preferences during the elicitation process, highlighting the pivotal role of system-user interaction in developing personalized systems. This paper introduces a novel approach, combining Large Language Models (LLMs) with Constraint Programming to facilitate interactive decision support. We study this hybrid framework through the lens of meeting scheduling,...

10.48550/arxiv.2312.06908 preprint EN other-oa arXiv (Cornell University) 2023-01-01

In the wake of increasing political extremism, online platforms have been criticized for contributing to polarization. One line of criticism has focused on echo chambers and the recommended content served to users by these platforms. In this work, we introduce the fair exposure problem: given limited intervention power of the platform, the goal is to enforce balance in the spread of content (e.g., news articles) among two groups of users through constraints similar to those imposed by the Fairness Doctrine in the United States in the past. Groups are characterized...

10.48550/arxiv.2202.09727 preprint EN cc-by-nc-nd arXiv (Cornell University) 2022-01-01