- Ethics and Social Impacts of AI
- Explainable Artificial Intelligence (XAI)
- Decision-Making and Behavioral Economics
- Psychology of Moral and Emotional Judgment
- Adversarial Robustness in Machine Learning
- Game Theory and Applications
- Electoral Systems and Political Participation
- Opinion Dynamics and Social Influence
- Constraint Satisfaction and Optimization
- Innovation, Sustainability, Human-Machine Systems
- Mental Health Research Topics
- Intelligent Tutoring Systems and Adaptive Learning
- Recommender Systems and Techniques
- Manufacturing Process and Optimization
- Artificial Intelligence in Healthcare and Education
- Privacy, Security, and Data Protection
- Industrial Vision Systems and Defect Detection
- AI-based Problem Solving and Planning
- Auction Theory and Applications
- Speech and Dialogue Systems
- Privacy-Preserving Technologies in Data
- Advanced Statistical Process Monitoring
University of Groningen
2025
Kandilli Observatory and Earthquake Research Institute
2025
The University of Texas at Austin
2022-2024
Karlsruhe Institute of Technology
2021-2023
In this work, we study the effects of feature-based explanations on the distributive fairness of AI-assisted decisions, specifically focusing on the task of predicting occupations from short textual bios. We also investigate how any effects are mediated by humans' fairness perceptions and their reliance on AI recommendations. Our findings show that explanations influence fairness perceptions, which, in turn, relate to humans' tendency to adhere to AI recommendations. However, we see that such explanations do not enable humans to discern correct from incorrect recommendations. Instead, they may affect reliance irrespective of correctness...
Automated decision systems (ADS) are increasingly used for consequential decision-making. These systems often rely on sophisticated yet opaque machine learning models, which do not allow for understanding how a given decision was arrived at. In this work, we conduct a human subject study to assess people's perceptions of informational fairness (i.e., whether people think they are given adequate information on and explanation of the decision process and its outcomes) and the trustworthiness of an underlying ADS when provided with varying types of information about...
In this critical survey, we analyze typical claims on the relationship between explainable AI (XAI) and fairness to disentangle the multidimensional relationship between these two concepts. Based on a systematic literature review and subsequent qualitative content analysis, we identify seven archetypal claims from 175 scientific articles on the alleged fairness benefits of XAI. We present crucial caveats with respect to these claims and provide an entry point for future discussions around the potentials and limitations of XAI for specific fairness desiderata. Importantly, we notice that claims are...
In AI-assisted decision-making, explanations are often touted for their alleged potential to enable humans to mitigate algorithmic unfairness. In this work, we study the effects of feature-based explanations—which highlight the features that were used for an AI prediction—on human reliance on AI recommendations and the implications for the (un)fairness of decisions. We specifically focus on the task of predicting occupations from textual biographies via a series of randomized online experiments (n=1207). We find that explanations do not enable humans to distinguish correct...
In AI-assisted decision-making, a central promise of having a human-in-the-loop is that they should be able to complement the AI system by overriding its wrong recommendations. In practice, however, we often see that humans cannot assess the correctness of AI recommendations and, as a result, adhere to wrong or override correct advice. Different ways of relying on AI have immediate, yet distinct, implications for decision quality. Unfortunately, reliance and decision quality are inappropriately conflated in the current literature...
It is often argued that one goal of explaining automated decision systems (ADS) is to facilitate positive perceptions (e.g., of fairness or trustworthiness) of users towards such systems. This viewpoint, however, makes the implicit assumption that a given ADS is fair and trustworthy to begin with. If the ADS issues unfair outcomes, then one might expect that explanations regarding the system's workings will reveal its shortcomings and, hence, lead to a decrease in positive perceptions. Consequently, we suggest that it is more meaningful to evaluate...
Automated decision systems (ADS) have become ubiquitous in many high-stakes domains. Those typically involve sophisticated yet opaque artificial intelligence (AI) techniques that seldom allow for full comprehension of their inner workings, particularly for affected individuals. As a result, ADS are prone to deficient oversight and calibration, which can lead to undesirable (e.g., unfair) outcomes. In this work, we conduct an online study with 200 participants to examine people's perceptions of fairness...
Automated decision systems are increasingly used for consequential decision-making -- for a variety of reasons. These systems often rely on sophisticated yet opaque models, which do not (or hardly) allow for understanding how or why a given decision was arrived at. This is not only problematic from a legal perspective; non-transparent systems are also prone to yield undesirable (e.g., unfair) outcomes because their sanity is difficult to assess and calibrate in the first place. In this work, we conduct a study to evaluate different attempts at explaining...
In the wake of increasing political extremism, online platforms have been criticized for contributing to polarization. One line of criticism has focused on echo chambers and the recommended content served to users by these platforms. In this work, we introduce the fair exposure problem: given limited intervention power of the platform, the goal is to enforce balance in the spread of content (e.g., news articles) among two groups of users through constraints similar to those imposed by the Fairness Doctrine in the United States in the past. Groups are characterized...
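The fair-exposure setting sketched in this abstract can be illustrated as a small constrained selection problem. The following is a toy sketch under our own assumptions (per-group reach values, a subset size k, and a gap tolerance are all hypothetical), not the paper's actual formulation:

```python
# Toy sketch of a fair-exposure-style selection (illustrative only; the
# paper's model, objective, and constraints may differ). Each candidate
# article has an estimated reach within two user groups; we pick k articles
# maximizing total reach while keeping the exposure gap between groups
# within a tolerance.

from itertools import combinations

def fair_exposure_selection(articles, k, tolerance):
    """articles: list of (reach_group_a, reach_group_b) tuples (assumed given).
    Returns the best size-k subset whose per-group exposure gap <= tolerance."""
    best, best_total = None, -1
    for subset in combinations(range(len(articles)), k):
        a = sum(articles[i][0] for i in subset)
        b = sum(articles[i][1] for i in subset)
        if abs(a - b) <= tolerance and a + b > best_total:
            best, best_total = subset, a + b
    return best, best_total

articles = [(50, 5), (10, 40), (30, 25), (8, 45), (60, 10)]
subset, total = fair_exposure_selection(articles, k=2, tolerance=15)
```

Brute force over subsets is only feasible for tiny instances; a realistic formulation would use integer programming or a dedicated constraint solver.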
Algorithmic decision systems are increasingly used in areas such as hiring, school admission, or loan approval. Typically, these systems rely on labeled data for training a classification model. However, in many scenarios, ground-truth labels are unavailable, and instead we only have access to imperfect labels as the result of (potentially biased) human-made decisions. Despite being imperfect, historical decisions often contain some useful information on the unobserved true labels. In this paper, we focus on scenarios where...
Computer-generated imagery of car models has become an indispensable part of car manufacturers' advertisement concepts. They are, for instance, used in configurators to offer customers the possibility to configure their car online according to personal preferences. However, human-led quality assurance faces the challenge of keeping up with high-volume visual inspections due to the models' increasing complexity. Even though the application of machine learning to many inspection tasks has demonstrated great success, its need for large labeled data...
It is known that recommendations of AI-based systems can be incorrect or unfair. Hence, it is often proposed that a human be the final decision-maker. Prior work has argued that explanations are an essential pathway to help human decision-makers enhance decision quality and mitigate bias, i.e., facilitate human-AI complementarity. For these benefits to materialize, explanations should enable humans to appropriately rely on AI and override the algorithmic recommendation when necessary to increase the distributive fairness of decisions. The...
In AI-assisted decision-making, a central promise of putting a human in the loop is that they should be able to complement the AI system by adhering to its correct and overriding its mistaken recommendations. In practice, however, we often see that humans tend to over- or under-rely on AI recommendations, meaning they either adhere to wrong or override correct advice. Such reliance behavior is detrimental to decision-making accuracy. In this work, we articulate and analyze the interdependence between reliance behavior and accuracy, which has been largely neglected in prior work. We also...
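The interdependence between reliance behavior and accuracy that this abstract describes can be made concrete with a simple expected-accuracy decomposition. This is our own illustrative formulation, not necessarily the paper's, and it assumes a binary task where overriding a wrong recommendation yields the correct answer:

```python
# Illustrative decomposition of AI-assisted decision accuracy as a function
# of reliance behavior (simplifying assumption: on a binary task, overriding
# an incorrect recommendation yields the correct decision).

def expected_accuracy(p_ai_correct, p_adhere_correct, p_adhere_incorrect):
    """p_ai_correct: probability the AI recommendation is correct.
    p_adhere_correct / p_adhere_incorrect: probability the human adheres
    given a correct / incorrect recommendation."""
    return (p_ai_correct * p_adhere_correct
            + (1 - p_ai_correct) * (1 - p_adhere_incorrect))

# Appropriate reliance: adhere to correct advice, override incorrect advice.
acc_appropriate = expected_accuracy(0.8, 1.0, 0.0)   # ~1.0
# Blind over-reliance: always adhere, so accuracy equals AI accuracy (~0.8).
acc_overreliance = expected_accuracy(0.8, 1.0, 1.0)
```

The two extremes show why reliance and quality must be kept apart: the same AI accuracy (0.8) yields very different decision accuracy depending on how the human relies on it.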
Automated decision systems (ADS) are increasingly used for consequential decision-making. These systems often rely on sophisticated yet opaque machine learning models, which do not allow for understanding how a given decision was arrived at. This is not only problematic from a legal perspective; non-transparent systems are also prone to yield unfair outcomes because their sanity is challenging to assess and calibrate in the first place -- which is particularly worrisome for human decision-subjects. Based on this observation and building upon existing...
A critical factor in the success of many decision support systems is the accurate modeling of user preferences. Psychology research has demonstrated that users often develop their preferences during the elicitation process, highlighting the pivotal role of system-user interaction in developing personalized systems. This paper introduces a novel approach, combining Large Language Models (LLMs) with Constraint Programming to facilitate interactive decision support. We study this hybrid framework through the lens of meeting...
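The constraint-programming half of such a hybrid framework can be sketched with a minimal backtracking scheduler. This is a toy example under our own assumptions (meeting names, participants, and slots are hypothetical), and it elides the LLM-based preference-elicitation component the abstract describes:

```python
# Minimal constraint-satisfaction sketch for meeting scheduling (toy example;
# the LLM-driven preference elicitation described above is elided). Assign
# each meeting a time slot so that meetings sharing a participant never
# occupy the same slot.

def schedule(meetings, slots, assignment=None):
    """meetings: dict name -> set of participants. slots: list of slot labels.
    Backtracking search returning a dict name -> slot, or None if infeasible."""
    assignment = assignment or {}
    unassigned = [m for m in meetings if m not in assignment]
    if not unassigned:
        return assignment
    m = unassigned[0]
    for slot in slots:
        # Constraint: no participant is double-booked within this slot.
        if all(meetings[m].isdisjoint(meetings[other])
               for other, s in assignment.items() if s == slot):
            result = schedule(meetings, slots, {**assignment, m: slot})
            if result is not None:
                return result
    return None  # backtrack: no feasible slot for meeting m

meetings = {"standup": {"ana", "bo"}, "review": {"bo", "cem"}, "1on1": {"ana", "cem"}}
plan = schedule(meetings, slots=["9am", "10am", "11am"])
```

In a production system the same constraints would typically be handed to a CP solver, with an LLM translating users' natural-language preferences into additional constraints.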