Large Language Models Amplify Human Biases in Moral Decision-Making

DOI: 10.31234/osf.io/aj46b_v1
Publication Date: 2025-04-11
ABSTRACT
As large language models (LLMs) become more widely used, people increasingly rely on them to make or advise on moral decisions. Some researchers even propose using LLMs as participants in psychology experiments. It is, therefore, important to understand how well LLMs make moral decisions and how they compare to humans. We investigated these questions by asking a range of LLMs to emulate people's responses to realistic moral dilemmas. In Study 1, we compared LLM responses with those of a representative U.S. sample (N = 285) for 22 dilemmas: collective action problems that pitted self-interest against the greater good, and moral dilemmas that pitted utilitarian cost-benefit reasoning against deontological rules. In collective action problems, LLMs were more altruistic than participants. In moral dilemmas, LLMs exhibited a stronger omission bias than participants: they usually endorsed inaction over action. In Study 2 (N = 490, preregistered), we replicated this omission bias and documented an additional bias: unlike humans, most LLMs were biased toward answering "no," thus flipping their decision/advice depending on how the question is worded. In Study 3 (N = 493, preregistered), we replicated these biases in everyday moral dilemmas adapted from forum posts on Reddit. In Study 4, we investigated the sources of these biases by comparing models with and without fine-tuning, showing that they likely arise from fine-tuning for chatbot applications. Our findings suggest that LLMs' moral advice may amplify human biases and introduce novel, potentially problematic biases.