LAMPAT: Low-Rank Adaption for Multilingual Paraphrasing Using Adversarial Training
DOI:
10.1609/aaai.v38i16.29804
Publication Date:
2024-03-25T11:54:35Z
AUTHORS (4)
ABSTRACT
Paraphrases are texts that convey the same meaning while using different words or sentence structures. Paraphrasing can be used as an automatic data augmentation tool for many Natural Language Processing tasks, especially when dealing with low-resource languages, where data shortage is a significant problem. To generate paraphrases in multilingual settings, previous studies have leveraged knowledge from the machine translation field, i.e., forming a paraphrase through zero-shot translation within the same language. Despite good performance on human evaluation, those methods still require parallel datasets, thus making them inapplicable to languages that do not have parallel corpora. To mitigate that problem, we proposed the first unsupervised multilingual paraphrasing model, LAMPAT (Low-rank Adaptation for Multilingual Paraphrasing using Adversarial Training), for which a monolingual dataset is sufficient to generate human-like and diverse sentences. Throughout the experiments, we found that our method not only works well for English but also generalizes to unseen languages. Data and code are available at https://github.com/phkhanhtrinh23/LAMPAT.
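The low-rank adaptation (LoRA) component named in the title can be illustrated with a minimal sketch. This is a generic illustration of the LoRA technique, not the authors' implementation: a frozen weight matrix W is adapted by a trainable low-rank product B @ A scaled by alpha / r, with B zero-initialized so the adapted layer initially reproduces the pretrained one. All variable names and dimensions here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r, alpha = 8, 8, 2, 4
W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-init

def lora_forward(x):
    # Effective weight is W + (alpha / r) * B @ A; only A and B are trained,
    # so the number of trainable parameters is r * (d_in + d_out) instead of
    # d_in * d_out.
    return x @ (W + (alpha / r) * B @ A).T

x = rng.standard_normal((1, d_in))
# With B zero-initialized, the adapted layer matches the frozen layer exactly.
assert np.allclose(lora_forward(x), x @ W.T)
```

Because the update B @ A has rank at most r, the adapter adds only a small fraction of the original parameter count, which is what makes fine-tuning a multilingual model feasible on modest hardware.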