Unsupervised Text Style Transfer via LLMs and Attention Masking with Multi-way Interactions
DOI:
10.48550/arxiv.2402.13647
Publication Date:
2024-02-21
AUTHORS (4)
ABSTRACT
Unsupervised Text Style Transfer (UTST) has emerged as a critical task within the domain of Natural Language Processing (NLP), aiming to transfer one stylistic aspect of a sentence into another style without changing its semantics, syntax, or other attributes. This is especially challenging given the intrinsic lack of parallel text pairings. Among existing methods for UTST tasks, the attention masking approach and Large Language Models (LLMs) are deemed two pioneering methods. However, they have shortcomings in generating unsmooth sentences and in changing the original contents, respectively. In this paper, we investigate whether these two methods can be combined effectively. We propose four ways of interactions, including a pipeline framework with tuned orders; knowledge distillation from LLMs to the attention masking model; and in-context learning with constructed examples. We empirically show that these multi-way interactions improve the baselines in certain perspectives of style strength, content preservation, and fluency. Experiments also demonstrate that simply conducting LLM prompting followed by masking-based revision consistently surpasses existing systems, including supervised systems. On the Yelp-clean and Amazon-clean datasets, it improves the previously best mean metric by 0.5 and 3.0 absolute percentage points respectively, achieving new SOTA results.
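The abstract's strongest configuration, prompting an LLM first and then revising the draft with an attention-masking style-transfer model, is essentially a two-stage pipeline applied in a tuned order. The sketch below illustrates that composition only; the function names and the stub stage implementations are hypothetical placeholders, not the authors' code or any specific library API.

```python
# Minimal sketch of a "prompting followed by masking-based revision" pipeline.
# Both stage functions are hypothetical stubs so the example stays runnable;
# a real system would call an LLM and an attention-masking revision model.

from typing import Callable, List

def llm_prompt_transfer(sentence: str, target_style: str) -> str:
    """Stage 1 (stub): ask an LLM to rewrite `sentence` in `target_style`."""
    prompt = f"Rewrite the following sentence in a {target_style} style: {sentence}"
    # Placeholder: a real implementation would send `prompt` to an LLM.
    return sentence

def attention_mask_revise(sentence: str, target_style: str) -> str:
    """Stage 2 (stub): revise the LLM draft with an attention-masking model
    to smooth the sentence and remove residual source-style tokens."""
    return sentence

def pipeline(sentence: str, target_style: str,
             stages: List[Callable[[str, str], str]]) -> str:
    """Apply the stages in the given (tuned) order."""
    for stage in stages:
        sentence = stage(sentence, target_style)
    return sentence

if __name__ == "__main__":
    # Order reported as strongest: prompting first, masking-based revision second.
    result = pipeline("the food was cold and the staff was rude",
                      target_style="positive",
                      stages=[llm_prompt_transfer, attention_mask_revise])
    print(result)
```

Swapping the order of `stages` corresponds to the other pipeline variant mentioned in the abstract (masking-based transfer first, LLM revision second).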