Incorporating Stylistic Lexical Preferences in Generative Language Models
KEYWORDS
Categorical variable, Language understanding, Generative model
DOI: 10.18653/v1/2020.findings-emnlp.96
Publication Date: 2020-11-29
AUTHORS (3)
ABSTRACT
While recent advances in language modeling have resulted in powerful generation models, their generation style remains implicitly dependent on the training data and cannot emulate a specific target style. Leveraging the generative capabilities of transformer-based language models, we present an approach to induce certain target-author attributes by incorporating continuous multi-dimensional lexical preferences of an author into generative language models. We introduce rewarding strategies in a reinforcement learning framework that encourage the use of words across multiple categorical dimensions, to varying extents. Our experiments demonstrate that the proposed approach can generate text that distinctively aligns with a given author's style. We conduct quantitative and qualitative comparisons with competitive and relevant baselines to illustrate the benefits of the proposed approach.
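The page carries only the abstract, not the paper's actual reward formulation. As a rough illustration of rewarding lexical-category alignment in a reinforcement learning loop, the sketch below scores a generated sequence against an author's continuous category preferences. The word-to-category lexicon, the category names, and the negative-L1 reward are all hypothetical stand-ins, not the authors' design.

```python
from collections import Counter

# Hypothetical lexicon mapping words to categorical dimensions
# (e.g., LIWC-style categories). The real categories and lexicon
# used by the paper are not given on this page.
LEXICON = {
    "wonderful": "affect",
    "terrible": "affect",
    "think": "cognition",
    "believe": "cognition",
    "walk": "motion",
    "run": "motion",
}

def category_distribution(tokens):
    """Fraction of in-lexicon tokens falling in each category."""
    counts = Counter(LEXICON[t] for t in tokens if t in LEXICON)
    total = sum(counts.values())
    if total == 0:
        return {}
    return {cat: n / total for cat, n in counts.items()}

def style_reward(generated_tokens, target_prefs):
    """Reward generated text for matching an author's continuous,
    multi-dimensional lexical preferences (higher = closer match).
    Assumed here: negative L1 distance between the generated and
    target category distributions."""
    dist = category_distribution(generated_tokens)
    cats = set(dist) | set(target_prefs)
    return -sum(abs(dist.get(c, 0.0) - target_prefs.get(c, 0.0)) for c in cats)

# Example: a target author who favors affective over cognitive words.
target = {"affect": 0.6, "cognition": 0.3, "motion": 0.1}
sample = "i think this is a wonderful day to run".split()
print(style_reward(sample, target))  # approx. -0.53
```

In a policy-gradient setup such as REINFORCE, a reward of this kind would weight the log-probability of sampled sequences, nudging the model's lexical distribution toward the target preferences to varying extents per dimension.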