XPrompt: Exploring the Extreme of Prompt Tuning

Keywords: granularity, pruning, fine-tuning
DOI: 10.18653/v1/2022.emnlp-main.758 Publication Date: 2023-08-04
ABSTRACT
Prompt tuning learns soft prompts to condition frozen Pre-trained Language Models (PLMs) for performing downstream tasks in a parameter-efficient manner. While prompt tuning has gradually reached the performance level of fine-tuning as the model scale increases, there is still a large performance gap between prompt tuning and fine-tuning for models of moderate and small scales (typically less than 11B parameters). In this paper, we empirically show that the trained prompt tokens can have a negative impact on a downstream task and thus degrade its performance. To bridge the gap, we propose a novel Prompt tuning model with an eXtremely small scale (XPrompt) under the regime of the lottery tickets hypothesis. Specifically, XPrompt eliminates the negative prompt tokens at different granularity levels through hierarchical structured pruning, yielding a more parameter-efficient prompt yet with competitive performance. Comprehensive experiments are carried out on SuperGLUE tasks, and the results indicate that XPrompt is able to close the performance gap at smaller model scales.
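
As an illustration only (not the authors' released implementation), the following minimal PyTorch sketch shows the setup the abstract describes: trainable soft prompt embeddings prepended to the input of a frozen backbone, plus a simple magnitude-based token-level pruning mask standing in for XPrompt's hierarchical (token- and piece-level) structured pruning under the lottery tickets hypothesis. All names here (`SoftPromptModel`, `prune_tokens`, `keep_ratio`, the generic `encoder`) are illustrative assumptions.

```python
# Minimal sketch of soft prompt tuning with token-level pruning.
# Not the XPrompt code; the real method additionally prunes at the
# piece level and rewinds weights per the lottery tickets hypothesis.
import torch
import torch.nn as nn


class SoftPromptModel(nn.Module):
    def __init__(self, encoder: nn.Module, embed_dim: int, prompt_len: int = 20):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():  # freeze the backbone PLM
            p.requires_grad = False
        # Trainable soft prompt embeddings prepended to every input.
        self.prompt = nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)
        # Binary mask over prompt tokens; pruning zeroes out entries.
        self.register_buffer("token_mask", torch.ones(prompt_len))

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, embed_dim) from the frozen embedding layer.
        batch = input_embeds.size(0)
        prompt = self.prompt * self.token_mask.unsqueeze(-1)  # drop pruned tokens
        prompt = prompt.unsqueeze(0).expand(batch, -1, -1)
        return self.encoder(torch.cat([prompt, input_embeds], dim=1))

    @torch.no_grad()
    def prune_tokens(self, keep_ratio: float = 0.5):
        # Magnitude-based token-level pruning: keep the highest-norm prompt tokens.
        scores = self.prompt.norm(dim=-1)
        k = max(1, int(keep_ratio * scores.numel()))
        keep = scores.topk(k).indices
        mask = torch.zeros_like(self.token_mask)
        mask[keep] = 1.0
        self.token_mask.copy_(mask)
```

In this sketch only `self.prompt` receives gradients, so a typical training loop would optimize just that tensor (e.g. `torch.optim.Adam([model.prompt], lr=...)`) while the encoder stays frozen, which is what makes the approach parameter-efficient.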