On the Conversational Persuasiveness of Large Language Models: A Randomized Controlled Trial

FOS: Computer and information sciences — Computers and Society (cs.CY)
DOI: 10.48550/arxiv.2403.14380 Publication Date: 2024-03-21
ABSTRACT
The development and popularization of large language models (LLMs) have raised concerns that they will be used to create tailor-made, convincing arguments to push false or misleading narratives online. Early work has found that language models can generate content perceived as at least on par with, and often more persuasive than, human-written messages. However, there is still limited knowledge about LLMs' persuasive capabilities in direct conversations with human counterparts, and about how personalization can improve their performance. In this pre-registered study, we analyze the effect of AI-driven persuasion in a controlled, harmless setting. We create a web-based platform where participants engage in short, multiple-round debates with a live opponent. Each participant is randomly assigned to one of four treatment conditions, corresponding to a two-by-two factorial design: (1) games are played either between two humans or between a human and an LLM; (2) personalization might or might not be enabled, granting one of the two players access to basic sociodemographic information about their opponent. We found that participants who debated GPT-4 with access to their personal information had 81.7% (p < 0.01; N = 820 unique participants) higher odds of increased agreement with their opponents compared to participants who debated humans. Without personalization, GPT-4 still outperforms humans, but the effect is lower and statistically non-significant (p = 0.31). Overall, our results suggest that concerns around personalization are meaningful and have important implications for the governance of social media and the design of new online environments.
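As a point of interpretation, the reported "81.7% higher odds" corresponds to an odds ratio of 1.817 (a logistic-regression coefficient of log 1.817 on the treatment indicator). The sketch below illustrates this arithmetic with an arbitrarily chosen baseline agreement probability; the numbers are for illustration only and are not taken from the study's data.

```python
import math

# Reported effect size: 81.7% higher odds => odds ratio of 1.817.
odds_ratio = 1.817
beta = math.log(odds_ratio)  # log-odds coefficient for the treatment indicator

# Hypothetical baseline probability of "increased agreement" (illustrative only):
p_baseline = 0.30
odds_baseline = p_baseline / (1 - p_baseline)

# Applying the odds ratio and converting back to a probability:
odds_treated = odds_baseline * odds_ratio
p_treated = odds_treated / (1 + odds_treated)

print(f"beta = {beta:.3f}")                  # ~0.597
print(f"treated probability = {p_treated:.3f}")  # ~0.438
```

This shows why odds ratios can overstate effects on the probability scale: a 1.817x change in odds moves a 30% baseline probability to roughly 44%, not to 54%.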