Thilo Hagendorff

ORCID: 0000-0002-4633-2153
Research Areas
  • Ethics and Social Impacts of AI
  • Explainable Artificial Intelligence (XAI)
  • Public Administration and Political Analysis
  • Privacy, Security, and Data Protection
  • Psychology of Moral and Emotional Judgment
  • Artificial Intelligence in Healthcare and Education
  • Topic Modeling
  • Adversarial Robustness in Machine Learning
  • Neuroethics, Human Enhancement, Biomedical Innovations
  • Digitalization, Law, and Regulation
  • Sociology and Education Studies
  • German Literature and Culture Studies
  • Law, AI, and Intellectual Property
  • Ethics in Business and Education
  • Animal Behavior and Welfare Studies
  • Decision-Making and Behavioral Economics
  • Natural Language Processing Techniques
  • Ethics in Clinical Research
  • Blockchain Technology Applications and Security
  • Wildlife Ecology and Conservation
  • German Legal, Social, and Political Studies
  • Digital Innovation in Industries
  • Technology, Environment, Urban Planning
  • Medical Practices and Rehabilitation
  • Social Robot Interaction and HRI

University of Stuttgart
2022-2024

University of Tübingen
2016-2023

Bernstein Center for Computational Neuroscience Tübingen
2019

Current advances in research, development and application of artificial intelligence (AI) systems have yielded a far-reaching discourse on AI ethics. In consequence, a number of ethics guidelines have been released in recent years. These comprise normative principles and recommendations aimed to harness the “disruptive” potential of new technologies. Designed as a semi-systematic evaluation, this paper analyzes and compares 22 guidelines, highlighting overlaps but also omissions. As a result, I give a detailed...

10.1007/s11023-020-09517-8 article EN cc-by Minds and Machines 2020-02-01

Explainability for artificial intelligence (AI) in medicine is a hotly debated topic. Our paper presents a review of the key arguments in favor of and against explainability for an AI-powered Clinical Decision Support System (CDSS), applied to a concrete use case, namely a CDSS currently used in the emergency call setting to identify patients with life-threatening cardiac arrest. More specifically, we performed a normative analysis using socio-technical scenarios to provide a nuanced account of the role of explainability for CDSSs, allowing abstractions...

10.1371/journal.pdig.0000016 article EN cc-by PLOS Digital Health 2022-02-17

Large language models (LLMs) are currently at the forefront of intertwining AI systems with human communication and everyday life. Thus, aligning them with human values is of great importance. However, given the steady increase in reasoning abilities, future LLMs are under suspicion of becoming able to deceive human operators and of utilizing this ability to bypass monitoring efforts. As a prerequisite for this, LLMs need to possess a conceptual understanding of deception strategies. This study reveals that such strategies emerged in state-of-the-art...

10.1073/pnas.2317967121 article EN cc-by-nc-nd Proceedings of the National Academy of Sciences 2024-06-04

This paper critically discusses blind spots in AI ethics. AI ethics discourses typically stick to a certain set of topics concerning principles evolving mainly around explainability, fairness, and privacy. All these principles can be framed in a way that enables their operationalization by technical means. However, this requires stripping down the multidimensionality of very complex social constructs to something that is idealized, measurable, and calculable. Consequently, rather conservative, mainstream notions...

10.1007/s43681-021-00122-8 article EN cc-by AI and Ethics 2021-12-09

Massive efforts are made to reduce biases in both data and algorithms in order to render AI applications fair. These efforts are propelled by various high-profile cases where biased algorithmic decision-making caused harm to women, people of color, minorities, etc. However, the AI fairness field still succumbs to a blind spot, namely its insensitivity to discrimination against animals. This paper is the first to describe the 'speciesist bias' and investigate it in several different AI systems. Speciesist biases are learned and solidified by AI systems when they are trained...

10.1007/s43681-022-00199-9 article EN cc-by AI and Ethics 2022-08-29

Several seminal ethics initiatives have stipulated sets of principles and standards for good technology development in the AI sector. However, widespread criticism has pointed out a lack of practical realization of these principles. Following that, AI ethics underwent a practical turn, but without deviating from the principled approach and the many shortcomings associated with it. This paper proposes a different approach. It defines four basic AI virtues, namely justice, honesty, responsibility, and care, all of which represent specific...

10.1007/s13347-022-00553-z article EN cc-by Philosophy & Technology 2022-06-21

Large language models (LLMs) are currently at the forefront of intertwining AI systems with human communication and everyday life. Due to rapid technological advances and their extreme versatility, LLMs nowadays have millions of users and are on the cusp of being the main go-to technology for information retrieval, content generation, problem-solving, etc. Therefore, it is of great importance to thoroughly assess and scrutinize their capabilities. Due to increasingly complex and novel behavioral patterns in current LLMs, this can be done by...

10.48550/arxiv.2303.13988 preprint EN other-oa arXiv (Cornell University) 2023-01-01

10.1007/s11023-024-09694-w article EN cc-by Minds and Machines 2024-09-17

This paper documents how an ethically aligned co-design methodology ensures trustworthiness in the early design phase of an artificial intelligence (AI) system component for healthcare. The system explains decisions made by deep learning networks when analyzing images of skin lesions. The trustworthy AI developed here used a holistic approach rather than a static ethical checklist and required a multidisciplinary team of experts working with the AI designers and their managers. Ethical, legal, and technical issues potentially arising...

10.3389/fhumd.2021.688152 article EN cc-by Frontiers in Human Dynamics 2021-07-13

Large language models (LLMs) are currently at the forefront of intertwining AI systems with human communication and everyday life. Therefore, it is of great importance to evaluate their emerging abilities. In this study, we show that LLMs like GPT-3 exhibit behavior that strikingly resembles human-like intuition, and the cognitive errors that come with it. However, LLMs with higher capabilities, in particular ChatGPT and GPT-4, learned to avoid succumbing to these errors and to perform in a hyperrational manner. For our experiments, we probe LLMs with the Cognitive...

10.48550/arxiv.2212.05206 preprint EN other-oa arXiv (Cornell University) 2022-01-01

Artificial Intelligence (AI) has the potential to greatly improve the delivery of healthcare and other services that advance population health and wellbeing. However, the use of AI in healthcare also brings risks that may cause unintended harm. To guide future developments in AI, the High-Level Expert Group on AI set up by the European Commission (EC) recently published ethics guidelines for what it terms “trustworthy” AI. These guidelines are aimed at a variety of stakeholders, especially at guiding practitioners toward more ethical and robust...

10.3389/fhumd.2021.673104 article EN cc-by Frontiers in Human Dynamics 2021-07-08

Fairness in machine learning (ML) is an ever-growing field of research due to the manifold potential for harm from algorithmic discrimination. To prevent such harm, a large body of literature develops new approaches to quantify fairness. Here, we investigate how one can divert the quantification of fairness by describing a practice we call “fairness hacking” for the purpose of shrouding unfairness in algorithms. This impacts end-users who rely on algorithms, as well as the broader community interested in fair AI practices....

10.1007/s13347-023-00679-8 article EN cc-by Philosophy & Technology 2024-01-06
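The “fairness hacking” paper above turns on how fairness is quantified. As a rough illustration only (not code or data from the paper itself), one widely used quantification, statistical parity difference, can be sketched as follows; all names and numbers here are invented:

```python
# Minimal sketch of one common fairness quantification: statistical parity
# difference, i.e. the gap in positive-outcome rates between two groups.
# The groups and predictions below are purely illustrative.

def positive_rate(predictions, groups, group):
    """Share of positive predictions (1) received by members of `group`."""
    members = [p for p, g in zip(predictions, groups) if g == group]
    return sum(members) / len(members)

def statistical_parity_difference(predictions, groups, group_a, group_b):
    """Difference in positive-outcome rates; 0 indicates parity on this metric."""
    return (positive_rate(predictions, groups, group_a)
            - positive_rate(predictions, groups, group_b))

# Toy example: group "a" receives positive outcomes at 0.75, group "b" at 0.25.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(statistical_parity_difference(preds, groups, "a", "b"))  # 0.5
```

Because many such metrics exist and can be mutually incompatible, selectively reporting the one a system happens to satisfy is exactly the kind of move the paper labels fairness hacking.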

The advent of generative artificial intelligence and its widespread adoption in society engendered intensive debates about its ethical implications and risks. These risks often differ from those associated with traditional discriminative machine learning. To synthesize the recent discourse and map its normative concepts, we conducted a scoping review on the ethics of generative artificial intelligence, including especially large language models and text-to-image models. Our analysis provides a taxonomy of 378 normative issues in 19 topic areas and ranks them...

10.48550/arxiv.2402.08323 preprint EN arXiv (Cornell University) 2024-02-13

Machine behavior that is based on learning algorithms can be significantly influenced by exposure to data of different qualities. Up to now, those qualities have been measured solely in technical terms, but not in ethical ones, despite the significant role of training and annotation data in supervised machine learning. This is the first study to fill this gap by describing new dimensions of data quality for supervised machine learning applications. Based on the rationale that the social and psychological backgrounds of individuals correlate in practice with certain modes...

10.1007/s11023-021-09573-8 article EN cc-by Minds and Machines 2021-09-26

Technologies equipped with artificial intelligence (AI) influence our everyday lives in a variety of ways. Due to their contribution to greenhouse gas emissions and high use of energy, but also due to their impact on fairness issues, these technologies are increasingly discussed in the “sustainable AI” discourse. However, current approaches remain anthropocentric. In this article, we argue from the perspective of applied ethics that such an anthropocentric outlook falls short. We present a sentientist approach, arguing...

10.1002/sd.2596 article EN cc-by-nc-nd Sustainable Development 2023-05-24

Building artificial intelligence (AI) systems that adhere to ethical standards is a complex problem. Even though a multitude of guidelines for the design and development of such trustworthy AI systems exist, these guidelines focus on high-level and abstract requirements for AI systems, and it is often very difficult to assess whether a specific system fulfills these requirements. The Z-Inspection® process provides a holistic and dynamic framework to evaluate trustworthiness at different stages of the AI lifecycle, including intended use, design, and development....

10.1007/s44206-023-00063-1 article EN cc-by Digital Society 2023-09-09

This report presents an overview of how governments, corporations and other actors are approaching the topic of Artificial Intelligence (AI) governance and ethics across China, Europe, India and the United States of America. Recent policy documents and initiatives from these regions, from both public sector agencies and private companies such as Microsoft, are documented, and a brief analysis is offered.

10.2139/ssrn.3414805 article EN SSRN Electronic Journal 2019-01-01

Recent progress in large language models has led to applications that can (at least) simulate possession of full moral agency due to their capacity to report context-sensitive moral assessments in open-domain conversations. However, automating moral decision-making faces several methodological as well as ethical challenges. They arise in the fields of bias mitigation, missing ground truth for moral “correctness”, effects of bounded ethicality in machines, changes of norms over time, and the risks of using morally informed AI systems...

10.1007/s43681-022-00188-y article EN cc-by AI and Ethics 2022-06-22

10.1007/s10676-019-09510-5 article EN Ethics and Information Technology 2019-08-05

This paper stresses the importance of biases in the field of artificial intelligence (AI). To foster efficient algorithmic decision-making in complex, unstable, and uncertain real-world environments, we argue for the implementation of human cognitive biases in learning algorithms. We use insights from cognitive science and apply them to the AI field, combining theoretical considerations with tangible examples depicting promising bias implementation scenarios. Ultimately, this paper is a first tentative step toward explicitly putting forth the idea of implementing cognitive biases in machines.

10.1080/0952813x.2023.2178517 article EN Journal of Experimental & Theoretical Artificial Intelligence 2023-02-13