Matrix-Transformation Based Low-Rank Adaptation (MTLoRA): A Brain-Inspired Method for Parameter-Efficient Fine-Tuning
DOI: 10.48550/arxiv.2403.07440
Publication Date: 2024-03-12
AUTHORS (3)
ABSTRACT
Fine-tuning techniques based on Large Pretrained Language Models (LPLMs) have been proven to significantly enhance model performance on a variety of downstream tasks and to effectively control the output behaviors of LPLMs. Recent studies have proposed numerous methods for fine-tuning a small number of parameters of open-source LPLMs, reducing the demand for computational and storage resources. Among these, reparameterization fine-tuning methods represented by LoRA (Low-Rank Adaptation) have gained popularity. We find that although these methods perform well in many aspects, there is still considerable room for improvement in terms of complex task adaptability, performance, stability, and algorithm complexity. In response to this, inspired by the idea that the functions of the brain are shaped by its geometric structure, this paper integrates this idea into LoRA technology and proposes a new matrix transformation-based method for efficient fine-tuning, named Matrix-Transformation based Low-Rank Adaptation (MTLoRA). MTLoRA aims to dynamically alter the spatial geometric structure of the task-specific parameter matrix by applying a transformation matrix T that performs linear transformations such as rotation, scaling, and translation, generating new matrix feature patterns (eigenvectors) that mimic the fundamental influence of geometric structure on brain function, thereby enhancing the model's performance on downstream tasks. On Natural Language Understanding (NLU) tasks, MTLoRA is evaluated using the GLUE benchmark, and the results reveal an overall performance increase of about 1.0% across eight tasks; on Natural Language Generation (NLG) tasks, it improves performance by an average of 0.95% and 0.31% on the DART and WebNLG tasks, respectively.
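To make the idea in the abstract concrete, the following is a minimal sketch, not the authors' released implementation: a standard LoRA update B·A whose low-rank activations are additionally acted on by a learnable transformation matrix T (rotation/scaling) and a translation term, before the result is added to the frozen pretrained weight. The class name MTLoRALinear, the rank r, the scaling factor alpha, and the placement of T in the rank-r subspace are all illustrative assumptions, not details taken from the paper:

# Sketch of a LoRA-style layer with an extra learnable transformation matrix T.
# Names and placement of T are assumptions for illustration, not the paper's exact design.
import torch
import torch.nn as nn

class MTLoRALinear(nn.Module):
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        # Frozen pretrained weight W0 (out_features x in_features)
        self.weight = nn.Parameter(torch.empty(out_features, in_features), requires_grad=False)
        nn.init.normal_(self.weight, std=0.02)
        # Standard LoRA factors: delta_W is built from B (out x r) and A (r x in)
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, r))
        # Transformation matrix T acting on the rank-r subspace (rotation/scaling),
        # initialized to the identity, plus a translation vector b.
        self.T = nn.Parameter(torch.eye(r))
        self.b = nn.Parameter(torch.zeros(r))
        self.scaling = alpha / r

    def forward(self, x):
        # Base path through the frozen pretrained weights
        base = x @ self.weight.t()
        # Low-rank path: project down, apply the linear transformation T and
        # the translation b, then project back up with B.
        h = x @ self.A.t()           # (batch, r)
        h = h @ self.T.t() + self.b  # rotate/scale + translate in the rank-r space
        update = h @ self.B.t()      # (batch, out_features)
        return base + self.scaling * update

# Usage example: wrap a 768-dimensional projection; only A, B, T, and b are trainable.
layer = MTLoRALinear(768, 768, r=8)
y = layer(torch.randn(4, 768))
print(y.shape)  # torch.Size([4, 768])

Initializing T to the identity keeps the layer equivalent to plain LoRA at the start of training, so the transformation only reshapes the task-specific update as it is learned; this is one plausible way to realize the "geometry-shaping" intuition described above.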