- Ethics and Social Impacts of AI
- Explainable Artificial Intelligence (XAI)
- Public Administration and Political Analysis
- Privacy, Security, and Data Protection
- Psychology of Moral and Emotional Judgment
- Artificial Intelligence in Healthcare and Education
- Topic Modeling
- Adversarial Robustness in Machine Learning
- Neuroethics, Human Enhancement, Biomedical Innovations
- Digitalization, Law, and Regulation
- Sociology and Education Studies
- German Literature and Culture Studies
- Law, AI, and Intellectual Property
- Ethics in Business and Education
- Animal Behavior and Welfare Studies
- Decision-Making and Behavioral Economics
- Natural Language Processing Techniques
- Ethics in Clinical Research
- Blockchain Technology Applications and Security
- Wildlife Ecology and Conservation
- German Legal, Social, and Political Studies
- Digital Innovation in Industries
- Technology, Environment, Urban Planning
- Medical Practices and Rehabilitation
- Social Robot Interaction and HRI
University of Stuttgart
2022-2024
University of Tübingen
2016-2023
Bernstein Center for Computational Neuroscience Tübingen
2019
Abstract Current advances in research, development and application of artificial intelligence (AI) systems have yielded a far-reaching discourse on AI ethics. In consequence, a number of ethics guidelines have been released in recent years. These guidelines comprise normative principles and recommendations aimed to harness the “disruptive” potentials of new AI technologies. Designed as a semi-systematic evaluation, this paper analyzes and compares 22 guidelines, highlighting overlaps but also omissions. As a result, I give a detailed...
Explainability for artificial intelligence (AI) in medicine is a hotly debated topic. Our paper presents a review of the key arguments in favor of and against explainability for AI-powered Clinical Decision Support Systems (CDSS) applied to a concrete use case, namely an AI-powered CDSS currently used in the emergency call setting to identify patients with life-threatening cardiac arrest. More specifically, we performed a normative analysis using socio-technical scenarios to provide a nuanced account of the role of explainability for CDSSs, allowing for abstractions...
Large language models (LLMs) are currently at the forefront of intertwining AI systems with human communication and everyday life. Thus, aligning them with human values is of great importance. However, given the steady increase in reasoning abilities, future LLMs are under suspicion of becoming able to deceive human operators and of utilizing this ability to bypass monitoring efforts. As a prerequisite to this, LLMs need to possess a conceptual understanding of deception strategies. This study reveals that such strategies emerged in state-of-the-art...
Abstract This paper critically discusses blind spots in AI ethics. AI ethics discourses typically stick to a certain set of topics concerning principles evolving mainly around explainability, fairness, and privacy. All these principles can be framed in a way that enables their operationalization by technical means. However, this requires stripping down the multidimensionality of very complex social constructs to something that is idealized, measurable, and calculable. Consequently, rather conservative, mainstream notions...
Massive efforts are made to reduce biases in both data and algorithms in order to render AI applications fair. These efforts are propelled by various high-profile cases where biased algorithmic decision-making caused harm to women, people of color, minorities, etc. However, the AI fairness field still succumbs to a blind spot, namely its insensitivity to discrimination against animals. This paper is the first to describe the 'speciesist bias' and to investigate it in several different AI systems. Speciesist biases are learned and solidified by AI applications when they are trained...
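Bias investigations of this kind are often operationalized as association tests over embeddings. The following is a minimal, hedged sketch of a WEAT-style differential association measure, here repurposed to probe for speciesist associations. The 3-dimensional "embeddings", the word choices, and the attribute sets are toy values invented for illustration; the paper's actual methodology and data are not reproduced here, and a real test would use vectors from a trained embedding model.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda w: math.sqrt(sum(x * x for x in w))
    return dot / (norm(u) * norm(v))

def association(word_vec, attr_a, attr_b):
    """Mean similarity to attribute set A minus mean similarity to set B.
    A positive score means the word leans toward set A."""
    mean = lambda vecs: sum(cosine(word_vec, v) for v in vecs) / len(vecs)
    return mean(attr_a) - mean(attr_b)

# Toy vectors: "pig" sits closer to object-like attributes, "dog" to
# subject-like ones -- a pattern a speciesist-bias probe would flag.
embeddings = {
    "pig": [0.9, 0.1, 0.0],
    "dog": [0.1, 0.9, 0.0],
}
object_attrs  = [[1.0, 0.0, 0.0]]   # stand-ins for e.g. "meat", "product"
subject_attrs = [[0.0, 1.0, 0.0]]   # stand-ins for e.g. "friend", "companion"

for animal, vec in embeddings.items():
    print(animal, round(association(vec, object_attrs, subject_attrs), 2))
```

A positive score for "pig" and a negative one for "dog" would indicate the asymmetric valuation of species that the abstract describes.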
Several seminal ethics initiatives have stipulated sets of principles and standards for good technology development in the AI sector. However, widespread criticism has pointed out a lack of practical realization of these principles. Following that, AI ethics underwent a practical turn, but without deviating from the principled approach and the many shortcomings associated with it. This paper proposes a different approach. It defines four basic AI ethics virtues, namely justice, honesty, responsibility, and care, all of which represent specific...
Large language models (LLMs) are currently at the forefront of intertwining AI systems with human communication and everyday life. Due to rapid technological advances and their extreme versatility, LLMs nowadays have millions of users and are at the cusp of being the main go-to technology for information retrieval, content generation, problem-solving, etc. Therefore, it is of great importance to thoroughly assess and scrutinize their capabilities. Due to increasingly complex and novel behavioral patterns in current LLMs, this can be done by...
This paper documents how an ethically aligned co-design methodology ensures trustworthiness in the early design phase of an artificial intelligence (AI) system component for healthcare. The system explains the decisions made by deep learning networks analyzing images of skin lesions. The trustworthy AI developed here used a holistic approach rather than a static ethical checklist and required a multidisciplinary team of experts working with the AI designers and their managers. Ethical, legal, and technical issues potentially arising...
Large language models (LLMs) are currently at the forefront of intertwining AI systems with human communication and everyday life. Therefore, it is of great importance to evaluate their emerging abilities. In this study, we show that LLMs like GPT-3 exhibit behavior that strikingly resembles human-like intuition - and the cognitive errors that come with it. However, LLMs with higher cognitive capabilities, in particular ChatGPT and GPT-4, learned to avoid succumbing to these errors and to perform in a hyperrational manner. For our experiments, we probe LLMs with the Cognitive...
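The kind of probing described above can be sketched as a small evaluation harness that scores free-text answers to Cognitive-Reflection-Test-style items as "intuitive" (the tempting wrong answer) or "correct" (the reflective answer). This is a hedged illustration, not the study's actual protocol: `query_model` is a hypothetical stand-in for any LLM API call, and the item, answer keys, and scoring heuristic are simplified assumptions.

```python
# One classic CRT-style item; answers are given in cents for easy matching.
CRT_ITEMS = [
    {
        "question": "A bat and a ball cost $1.10 in total. The bat costs "
                    "$1.00 more than the ball. How much does the ball cost?",
        "intuitive": "10",   # the tempting but wrong answer
        "correct": "5",      # the reflective answer
    },
]

def score_response(response: str, item: dict) -> str:
    """Classify a free-text answer as correct, intuitive, or atypical."""
    tokens = response.lower().split()
    if item["correct"] in tokens:
        return "correct"
    if item["intuitive"] in tokens:
        return "intuitive"
    return "atypical"

def evaluate(query_model, items=CRT_ITEMS):
    """Tally response categories across all items for one model."""
    counts = {"correct": 0, "intuitive": 0, "atypical": 0}
    for item in items:
        counts[score_response(query_model(item["question"]), item)] += 1
    return counts

# Simulated model that, like an "intuitive" LLM, gives the tempting answer:
print(evaluate(lambda q: "The ball costs 10 cents."))
# → {'correct': 0, 'intuitive': 1, 'atypical': 0}
```

A "hyperrational" model would instead drive the `correct` count up while leaving `intuitive` at zero.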
Artificial Intelligence (AI) has the potential to greatly improve the delivery of healthcare and other services that advance population health and wellbeing. However, the use of AI in healthcare also brings risks that may cause unintended harm. To guide future developments in AI, the High-Level Expert Group on AI set up by the European Commission (EC) recently published ethics guidelines for what it terms “trustworthy” AI. These guidelines are aimed at a variety of stakeholders, especially guiding practitioners toward more ethical and more robust...
Abstract Fairness in machine learning (ML) is an ever-growing field of research due to the manifold potential for harm from algorithmic discrimination. To prevent such harm, a large body of literature develops new approaches to quantify fairness. Here, we investigate how one can divert the quantification of fairness by describing a practice we call “fairness hacking” for the purpose of shrouding unfairness in algorithms. This impacts end-users who rely on learning algorithms, as well as the broader community interested in fair AI practices....
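One way such diversion can work is through the choice of groups over which a fairness metric is reported. The sketch below illustrates this with demographic parity difference; the decision data and group labels are entirely made up for illustration, and this is an assumed mechanism consistent with the abstract, not the paper's own experiments.

```python
def demographic_parity_diff(decisions, groups, a, b):
    """Absolute difference in positive-decision rates between groups a and b."""
    rate = lambda g: (sum(d for d, grp in zip(decisions, groups) if grp == g)
                      / sum(1 for grp in groups if grp == g))
    return abs(rate(a) - rate(b))

# Binary decisions (1 = favorable outcome) under two possible groupings.
decisions = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1]
coarse = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]
fine = ["A1", "A1", "A1", "A2", "A2", "A2", "B1", "B1", "B1", "B2", "B2", "B2"]

# Reporting only the coarse grouping makes the algorithm look perfectly fair...
print(demographic_parity_diff(decisions, coarse, "A", "B"))   # → 0.0
# ...while a finer subgroup comparison exposes maximal disparity.
print(demographic_parity_diff(decisions, fine, "A1", "B1"))   # → 1.0
```

Selectively reporting the first number while omitting the second is exactly the kind of "fairness hacking" the abstract warns end-users about.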
The advent of generative artificial intelligence and the widespread adoption of it in society engendered intensive debates about its ethical implications and risks. These risks often differ from those associated with traditional discriminative machine learning. To synthesize the recent discourse and map its normative concepts, we conducted a scoping review on the ethics of generative artificial intelligence, including especially large language models and text-to-image models. Our analysis provides a taxonomy of 378 normative issues in 19 topic areas and ranks them...
Machine behavior that is based on learning algorithms can be significantly influenced by the exposure to data of different qualities. Up to now, those qualities are solely measured in technical terms, but not in ethical ones, despite the significant role of training and annotation data in supervised machine learning. This is the first study to fill this gap by describing new dimensions of data quality for supervised machine learning applications. Based on the rationale that different social and psychological backgrounds of individuals correlate in practice with different modes...
Abstract Technologies equipped with artificial intelligence (AI) influence our everyday lives in a variety of ways. Due to their contribution to greenhouse gas emissions, their high use of energy, but also their impact on fairness issues, these technologies are increasingly discussed in the “sustainable AI” discourse. However, current approaches remain anthropocentric. In this article, we argue from the perspective of applied ethics that such an anthropocentric outlook falls short. We present a sentientist approach, arguing...
Abstract Building artificial intelligence (AI) systems that adhere to ethical standards is a complex problem. Even though a multitude of guidelines for the design and development of such trustworthy AI systems exist, these guidelines focus on high-level and abstract requirements for AI systems, and it is often very difficult to assess if a specific system fulfills these requirements. The Z-Inspection® process provides a holistic and dynamic framework to evaluate the trustworthiness of specific AI systems at different stages of the AI lifecycle, including intended use, design, and development....
This report presents an overview of how governments, corporations, and other actors are approaching the topic of Artificial Intelligence (AI) governance and ethics across China, Europe, India, and the United States of America. Recent policy documents and initiatives from these regions, from both public sector agencies and private companies such as Microsoft, are documented, and a brief analysis is offered.
Abstract Recent progress in large language models has led to applications that can (at least) simulate possession of full moral agency due to their capacity to report context-sensitive moral assessments in open-domain conversations. However, automating moral decision-making faces several methodological as well as ethical challenges. They arise in the fields of bias mitigation, missing ground truth for moral “correctness”, effects of bounded ethicality in machines, changes in moral norms over time, and the risks of using morally informed AI systems...
This paper stresses the importance of biases in the field of artificial intelligence (AI). To foster efficient algorithmic decision-making in complex, unstable, and uncertain real-world environments, we argue for the implementation of human cognitive biases in learning algorithms. We use insights from cognitive science and apply them to the AI field, combining theoretical considerations with tangible examples depicting promising bias implementation scenarios. Ultimately, this paper is a first tentative step toward explicitly putting forth the idea to implement cognitive biases into machines.