Vector-Quantized Input-Contextualized Soft Prompts for Natural Language Understanding
DOI:
10.18653/v1/2022.emnlp-main.455
Publication Date:
2023-08-04T20:21:02Z
AUTHORS (4)
ABSTRACT
Prompt Tuning has been largely successful as a parameter-efficient method of conditioning large-scale pre-trained language models to perform downstream tasks. Thus far, soft prompt tuning learns a fixed set of task-specific continuous vectors, i.e., soft tokens that remain static across the task samples. A fixed prompt, however, may not generalize well to the diverse kinds of inputs the task comprises. In order to address this, we propose Vector-quantized Input-contextualized Prompts (VIP) as an extension to the soft prompt tuning framework. VIP particularly focuses on two aspects: contextual prompts, which learn input-specific contextualization of the soft prompt tokens through a small-scale sentence encoder, and quantized prompts, which map the contextualized prompts to a set of learnable codebook vectors through a vector quantization network. On various language understanding tasks like SuperGLUE, QA, Relation classification, NER and NLI, VIP outperforms the soft prompt tuning (PT) baseline by an average margin of 1.19%. Further, our generalization studies show that VIP learns more robust prompt representations, surpassing PT by a margin of 0.6% - 5.3% on Out-of-domain QA and NLI tasks respectively, and by 0.75% in a Multi-Task setup over 4 tasks spanning 12 domains.
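The abstract describes two components: contextualizing static soft prompt tokens on each input with a small sentence encoder, and snapping the contextualized tokens to entries of a learnable codebook via vector quantization. The sketch below is a minimal PyTorch illustration of that idea, not the authors' released code; the module name VIPPrompt, the transformer-encoder contextualizer, all hyperparameters, and the straight-through gradient trick are assumptions borrowed from standard vector-quantization practice.

```python
# Minimal sketch of input-contextualized, vector-quantized prompts.
# Assumptions (not from the paper's code): a transformer encoder as the
# "small-scale sentence encoder", a straight-through estimator for the
# quantization step, and illustrative hyperparameter values.
import torch
import torch.nn as nn


class VIPPrompt(nn.Module):
    def __init__(self, num_prompt_tokens=10, dim=768, codebook_size=100,
                 num_encoder_layers=2, num_heads=8):
        super().__init__()
        # Static soft prompt tokens, as in vanilla prompt tuning.
        self.soft_prompt = nn.Parameter(torch.randn(num_prompt_tokens, dim) * 0.02)
        # Small sentence encoder that contextualizes prompts on the input.
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads,
                                           batch_first=True)
        self.contextualizer = nn.TransformerEncoder(layer, num_layers=num_encoder_layers)
        # Learnable codebook used by the vector-quantization step.
        self.codebook = nn.Embedding(codebook_size, dim)

    def forward(self, input_embeds):
        # input_embeds: (batch, seq_len, dim) embeddings of the input tokens.
        batch, _, dim = input_embeds.shape
        prompts = self.soft_prompt.unsqueeze(0).expand(batch, -1, -1)
        # Contextualize: prompt tokens attend to the input tokens.
        contextual = self.contextualizer(torch.cat([prompts, input_embeds], dim=1))
        contextual = contextual[:, : self.soft_prompt.size(0)]  # keep prompt positions
        # Quantize: map each contextualized prompt to its nearest codebook vector.
        dists = torch.cdist(contextual.reshape(-1, dim), self.codebook.weight)
        codes = dists.argmin(dim=-1).view(batch, -1)
        quantized = self.codebook(codes)
        # Straight-through estimator so gradients reach the contextualizer.
        quantized = contextual + (quantized - contextual).detach()
        # Quantized prompts are prepended to the (frozen) LM's input embeddings.
        return torch.cat([quantized, input_embeds], dim=1)


if __name__ == "__main__":
    vip = VIPPrompt()
    x = torch.randn(2, 16, 768)      # stand-in for input token embeddings
    print(vip(x).shape)              # torch.Size([2, 26, 768]): 10 prompt + 16 input tokens
```

In this reading, only the prompt parameters, the small contextualizer, and the codebook are trained, which is what keeps the method parameter-efficient relative to full fine-tuning of the backbone model.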
CITATIONS (2)