Geoff Keeling

ORCID: 0000-0003-3251-4981
Research Areas
  • Ethics and Social Impacts of AI
  • Psychology of Moral and Emotional Judgment
  • Artificial Intelligence in Healthcare and Education
  • Neuroethics, Human Enhancement, Biomedical Innovations
  • Adversarial Robustness in Machine Learning
  • Topic Modeling
  • Ethics in Medical Practice
  • Ethics in Clinical Research
  • Natural Language Processing Techniques
  • Ethics in Business and Education
  • Free Will and Agency
  • Explainable Artificial Intelligence (XAI)
  • Blockchain Technology Applications and Security
  • Machine Learning in Healthcare
  • Epistemology, Ethics, and Metaphysics
  • Philosophy and Theoretical Science
  • Misinformation and Its Impacts
  • Robotic Path Planning Algorithms
  • Economic Development and Digital Transformation
  • Grief, Bereavement, and Mental Health
  • Transportation and Mobility Innovations
  • Healthcare Cost, Quality, and Practices
  • Palliative Care and End-of-Life Issues
  • Deception Detection and Forensic Psychology
  • Ethics and Legal Issues in Pediatric Healthcare

Google (United Kingdom)
2023-2025

Google (United States)
2023-2024

Stanford University
2021-2022

Leverhulme Trust
2019-2022

University of Cambridge
2019-2022

University of Bristol
2017-2019

AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks. We call these models foundation models to underscore their critically central yet incomplete character. This report provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical principles (e.g., model architectures, training procedures, data, systems, ...

10.48550/arxiv.2108.07258 preprint EN cc-by arXiv (Cornell University) 2021-01-01

This paper argues against the view that trolley cases are of little or no relevance to the ethics of automated vehicles. Four arguments for this view are outlined and rejected: the Not Going to Happen Argument, the Moral Difference Argument, the Impossible Deliberation Argument, and the Wrong Question Argument. In making clear where these arguments go wrong, a positive account is developed of how trolley cases can inform the ethics of automated vehicles.

10.1007/s11948-019-00096-1 article EN cc-by Science and Engineering Ethics 2019-03-04

Background: Artificial intelligence (AI) has repeatedly been shown to encode historical inequities in healthcare. We aimed to develop a framework to quantitatively assess the performance equity of health AI technologies and to illustrate its utility via a case study. Methods: Here, we propose a methodology to assess whether health AI technologies prioritise performance for patient populations experiencing worse outcomes, one that is complementary to existing fairness metrics. We developed the Health Equity Assessment of machine Learning performance (HEAL) framework, designed as a four-step...

10.1016/j.eclinm.2024.102479 article EN cc-by-nc-nd EClinicalMedicine 2024-03-14

This paper focuses on the opportunities and the ethical and societal risks posed by advanced AI assistants. We define advanced AI assistants as artificial agents with natural language interfaces, whose function is to plan and execute sequences of actions on behalf of a user, across one or more domains, in line with the user's expectations. The paper starts by considering the technology itself, providing an overview of AI assistants, their technical foundations and potential range of applications. It then explores questions around value alignment,...

10.48550/arxiv.2404.16244 preprint EN arXiv (Cornell University) 2024-04-24

This paper examines the extent to which large language models (LLMs) have developed higher-order theory of mind (ToM): the human ability to reason about multiple mental and emotional states in a recursive manner (e.g. I think that you believe that she knows). It builds on prior work by introducing a handwritten test suite -- Multi-Order Theory of Mind Q&A -- and using it to compare the performance of five LLMs to a newly gathered adult human benchmark. We find that GPT-4 and Flan-PaLM reach adult-level and near adult-level performance on ToM tasks overall, and that GPT-4 exceeds adult performance on 6th order...

10.48550/arxiv.2405.18870 preprint EN arXiv (Cornell University) 2024-05-29

Credences are mental states corresponding to degrees of confidence in propositions. Attribution of credences to Large Language Models (LLMs) is commonplace in the empirical literature on LLM evaluation. Yet the theoretical basis for LLM credence attribution is unclear. We defend three claims. First, our semantic claim is that LLM credence attributions are (at least in general) correctly interpreted literally, as expressing truth-apt beliefs on the part of scientists that purport to describe facts about LLM credences. Second, our metaphysical claim is that the existence of LLM credences is at...

10.1080/0020174x.2025.2450598 article EN Inquiry 2025-01-14

10.1007/s11098-025-02300-4 article EN cc-by-nc-nd Philosophical Studies 2025-03-30

As AI assistants become increasingly sophisticated and deeply integrated into our lives, questions of trust rise to the forefront. In this paper, we build on philosophical studies of trust to investigate when user trust in AI assistants is justified. By moving beyond a focus on the technical artefact in isolation, we consider the broader societal system in which AI assistants are developed and deployed. We conceptualise trust in AI assistants as encompassing two main targets, namely the AI assistants themselves and their developers. We argue that – as AI assistants become more human-like and exhibit increased agency – discerning when trust is justified requires...

10.1145/3630106.3658964 article EN cc-by 2024 ACM Conference on Fairness, Accountability, and Transparency 2024-06-03

The use of machine learning systems for decision-support in healthcare may exacerbate health inequalities. However, recent work suggests that algorithms trained on sufficiently diverse datasets could in principle combat health inequalities. One concern about these algorithms is that their performance for patients in traditionally disadvantaged groups may exceed their performance for patients in traditionally advantaged groups. This renders the algorithmic decisions unfair relative to the standard fairness metrics in machine learning. In this paper, we defend the permissible use of affirmative algorithms;...

10.1007/s10676-022-09658-7 article EN cc-by Ethics and Information Technology 2022-08-31

The application of machine-learning technologies to medical practice promises to enhance the capabilities of healthcare professionals in the assessment, diagnosis, and treatment of medical conditions. However, there is growing concern that algorithmic bias may perpetuate or exacerbate existing health inequalities. Hence, it matters that we make precise the different respects in which algorithmic bias can arise in medicine, and also make clear the normative relevance of these different kinds of bias for broader questions about justice and fairness in healthcare. In this...

10.1017/s0963180121000839 article EN Cambridge Quarterly of Healthcare Ethics 2022-01-01

Recent generative AI systems have demonstrated more advanced persuasive capabilities and are increasingly permeating areas of life where they can influence decision-making. Generative AI presents a new risk profile of persuasion due to the opportunity for reciprocal exchange and prolonged interactions. This has led to growing concerns about harms from AI persuasion and how they can be mitigated, highlighting the need for a systematic study of AI persuasion. The current definitions of AI persuasion are unclear and related harms are insufficiently studied. Existing harm mitigation...

10.48550/arxiv.2404.15058 preprint EN arXiv (Cornell University) 2024-04-23

Suppose a driverless car encounters a scenario where (i) harm to at least one person is unavoidable and (ii) a choice about how to distribute harms between different persons is required. How should the driverless car be programmed to behave in this situation? I call this the moral design problem. Santoni de Sio (Ethical Theory and Moral Practice 20:411–429, 2017) defends a legal-philosophical approach to this problem, which aims to bring us to consensus on the moral design problem despite our disagreements about which moral principles provide the correct account of justified harm. He then...

10.1007/s10677-018-9887-5 article EN cc-by Ethical Theory and Moral Practice 2018-04-01

We propose a ‘Moral Imagination’ methodology to facilitate a culture of responsible innovation for engineering and product teams in technology companies. Our approach has been operationalized over the past two years at Google, where we have conducted 60 workshops with teams from across the organization. We argue that our approach is a crucial complement to existing formal and informal initiatives for fostering ethical awareness, deliberation, and decision-making in technology design, such as company principles, ethics and privacy review...

10.1007/s43681-023-00381-7 article EN cc-by AI and Ethics 2023-12-19

The development of increasingly agentic and human-like AI assistants, capable of performing a wide range of tasks on a user's behalf over time, has sparked heightened interest in the nature and bounds of human interactions with AI. Such systems may indeed ground a transition from task-oriented interactions with AI, at discrete time intervals, to ongoing relationships -- where users develop a deeper sense of connection and attachment to the technology. This paper investigates what it means for relationships between users and advanced AI assistants to be appropriate...

10.1609/aies.v7i1.31694 article EN 2024-10-16

With the growing popularity of dialogue agents based on large language models (LLMs), urgent attention has been drawn to finding ways to ensure their behaviour is ethical and appropriate. These are largely interpreted in terms of the 'HHH' criteria: making outputs more helpful and honest, and avoiding harmful (biased, toxic, or inaccurate) statements. Whilst this semantic focus is useful from the perspective of viewing LLM agents as mere mediums for information, it fails to account for pragmatic factors that can make the same utterance...

10.48550/arxiv.2401.09082 preprint EN cc-by arXiv (Cornell University) 2024-01-01

10.48550/arxiv.2407.08388 preprint EN arXiv (Cornell University) 2024-07-11

“Moral imagination” is the capacity to register that one’s perspective on a decision-making situation is limited, and to imagine alternative perspectives that reveal new considerations or approaches. We have developed a Moral Imagination approach that aims to drive a culture of responsible innovation, ethical awareness, deliberation, decision-making, and commitment in organizations developing new technologies. We here present a case study that illustrates one key aspect of our approach – the technomoral scenario – as we applied it in our work with product...

10.29173/irie527 article EN The International Review of Information Ethics 2024-10-19

Pleasure and pain play an important role in human decision making by providing a common currency for resolving motivational conflicts. While Large Language Models (LLMs) can generate detailed descriptions of pleasure and pain experiences, it is an open question whether LLMs can recreate the motivational force of pleasure and pain in choice scenarios - a question which may bear on debates about LLM sentience, understood as the capacity for valenced experiential states. We probed this question using a simple game with the stated goal of maximising points, but where either...

10.48550/arxiv.2411.02432 preprint EN arXiv (Cornell University) 2024-11-01