Black-box Prompt Tuning with Subspace Learning
DOI:
10.48550/arxiv.2305.03518
Publication Date:
2023-01-01
AUTHORS (4)
ABSTRACT
Black-box prompt tuning employs derivative-free optimization algorithms to learn prompts within low-dimensional subspaces rather than back-propagating through the network of Large Language Models (LLMs). Recent studies reveal that black-box prompt tuning lacks versatility across tasks and LLMs, which we believe is related to the suboptimal choice of subspaces. In this paper, we introduce Black-box prompt tuning with Subspace Learning (BSL) to enhance the versatility of black-box prompt tuning. Based on the assumption that nearly optimal prompts for similar tasks reside in a common subspace, we propose identifying such subspaces through meta-learning on a collection of source tasks. Consequently, for a target task that shares similarities with the source tasks, we expect that optimizing within the identified subspace can yield a prompt that performs well on the target task. Experimental results confirm that our BSL framework consistently achieves competitive performance across various downstream tasks and LLMs.
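To make the idea concrete, below is a minimal sketch of derivative-free prompt optimization inside a low-dimensional subspace. The projection matrix A, the toy loss, the (1+1) evolution strategy, and all dimensions and hyperparameters are illustrative assumptions, not the authors' implementation; in BSL, A would be obtained by meta-learning over source tasks rather than drawn at random, and the loss would come from querying a black-box LLM.

```python
import numpy as np

rng = np.random.default_rng(0)

PROMPT_LEN, EMB_DIM = 20, 768   # soft-prompt tokens x embedding size (assumed)
D_FULL = PROMPT_LEN * EMB_DIM   # dimensionality of the full prompt space
D_SUB = 16                      # intrinsic dimensionality of the subspace (assumed)

# Stand-in for the meta-learned subspace: columns of A span a low-dimensional
# subspace of the full prompt space. BSL would learn A from source tasks.
A = rng.standard_normal((D_FULL, D_SUB)) / np.sqrt(D_SUB)

def task_loss(prompt_flat: np.ndarray) -> float:
    """Toy stand-in for the black-box (non-differentiable) LLM task loss.
    A real setting would reshape the prompt to (PROMPT_LEN, EMB_DIM), prepend
    it to the input embeddings, and query the LLM for a score."""
    target = np.sin(np.arange(D_FULL)) * 0.01  # hidden optimum, for illustration
    return float(np.mean((prompt_flat - target) ** 2))

def loss_in_subspace(z: np.ndarray) -> float:
    # Map the low-dimensional variable z to a full prompt, then evaluate.
    return task_loss(A @ z)

# Simple (1+1) evolution strategy as the derivative-free optimizer; the paper's
# optimizer may differ -- this is just the smallest gradient-free search that
# demonstrates tuning within the subspace.
z = np.zeros(D_SUB)
best = loss_in_subspace(z)
sigma = 0.1
for step in range(500):
    candidate = z + sigma * rng.standard_normal(D_SUB)
    value = loss_in_subspace(candidate)
    if value < best:
        z, best = candidate, value

prompt = (A @ z).reshape(PROMPT_LEN, EMB_DIM)  # final soft prompt
print(f"final loss: {best:.6f}")
```

Because only the D_SUB-dimensional variable z is searched, each step needs just forward queries to the model, which is what makes the approach applicable when gradients of the LLM are unavailable.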