CoSER: Coordinating LLM-Based Persona Simulation of Established Roles
DOI: 10.48550/arxiv.2502.09082
Publication Date: 2025-02-13
AUTHORS (12)
ABSTRACT
Role-playing language agents (RPLAs) have emerged as promising applications of large language models (LLMs). However, simulating established characters presents a challenging task for RPLAs, due to the lack of authentic character datasets and nuanced evaluation methods using such data. In this paper, we present CoSER, a collection of a high-quality dataset, open models, and an evaluation protocol towards effective RPLAs of established characters. The CoSER dataset covers 17,966 characters from 771 renowned books. It provides authentic dialogues with real-world intricacies, as well as diverse data types such as conversation setups, character experiences, and internal thoughts. Drawing from acting methodology, we introduce given-circumstance acting for training and evaluating role-playing LLMs, where LLMs sequentially portray multiple characters in book scenes. Using our dataset, we develop CoSER 8B and CoSER 70B, i.e., advanced open role-playing LLMs built on LLaMA-3.1 models. Extensive experiments demonstrate the value of the CoSER dataset for RPLA training, evaluation, and retrieval. Moreover, CoSER 70B exhibits state-of-the-art performance, surpassing or matching GPT-4o on three existing benchmarks, achieving 75.80% and 93.47% accuracy on the InCharacter and LifeChoice benchmarks, respectively.
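The abstract describes given-circumstance acting, in which one LLM sequentially portrays each character appearing in a book scene. The snippet below is a minimal illustrative sketch of that turn-taking loop, not CoSER's actual implementation: the names Scene, portray_scene, and generate are hypothetical, and generate stands in for whatever LLM backend (e.g., a LLaMA-3.1-based role-playing model) one would plug in.

```python
# Hedged sketch of a given-circumstance-acting style loop: the same model
# is prompted in turn as every character in a scene, conditioned on the
# scene setup ("given circumstances") and the conversation so far.
# All identifiers here are illustrative placeholders, not CoSER's API.

from dataclasses import dataclass, field


@dataclass
class Scene:
    book: str
    setting: str                       # plot/scene description given to the model
    characters: list[str]              # characters appearing in the scene
    transcript: list[tuple[str, str]] = field(default_factory=list)  # (speaker, message)


def generate(prompt: str) -> str:
    """Placeholder for any LLM backend; replace with a real model call."""
    return "[model response]"


def portray_scene(scene: Scene, num_rounds: int = 2) -> list[tuple[str, str]]:
    """Sequentially prompt the model to act as each character, turn by turn."""
    for _ in range(num_rounds):
        for character in scene.characters:
            history = "\n".join(f"{s}: {m}" for s, m in scene.transcript)
            prompt = (
                f"You are {character} from {scene.book}.\n"
                f"Scene: {scene.setting}\n"
                f"Conversation so far:\n{history}\n"
                f"Reply in character; you may include internal thoughts in [brackets]."
            )
            scene.transcript.append((character, generate(prompt)))
    return scene.transcript


if __name__ == "__main__":
    scene = Scene(
        book="A renowned novel",
        setting="Two characters argue over a shared secret.",
        characters=["Character A", "Character B"],
    )
    for speaker, message in portray_scene(scene):
        print(f"{speaker}: {message}")
```

The resulting multi-character transcript could then be scored against the book's original dialogue, which is the kind of evaluation setup the paper's protocol is built around.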