LoLDU: Low-Rank Adaptation via Lower-Diag-Upper Decomposition for Parameter-Efficient Fine-Tuning
DOI:
10.48550/arxiv.2410.13618
Publication Date:
2024-10-17
AUTHORS (7)
ABSTRACT
The rapid growth of model scale has necessitated substantial computational resources for fine-tuning. Existing approaches such as Low-Rank Adaptation (LoRA) have sought to address the problem of handling the large number of updated parameters in full fine-tuning. However, LoRA relies on random initialization and optimization of low-rank matrices to approximate the updated weights, which can result in suboptimal convergence and an accuracy gap compared to full fine-tuning. To address these issues, we propose LoLDU, a Parameter-Efficient Fine-Tuning (PEFT) approach that significantly reduces trainable parameters by 2600 times compared to regular PEFT methods while maintaining comparable performance. LoLDU leverages Lower-Diag-Upper Decomposition (LDU) to initialize low-rank matrices for faster convergence and orthogonality. We focus on optimizing the diagonal matrix for scaling transformations. To the best of our knowledge, LoLDU has the fewest trainable parameters among all PEFT approaches. We conducted extensive experiments across 4 instruction-following datasets, 6 natural language understanding (NLU) datasets, 8 image classification datasets, and image generation datasets with multiple model types (LLaMA2, RoBERTa, ViT, and Stable Diffusion), providing a comprehensive and detailed analysis. Our open-source code can be accessed at https://github.com/SKDDJ/LoLDU.
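The core idea described in the abstract, freezing the lower and upper factors obtained from an LDU decomposition of a pretrained weight and training only the diagonal scaling factors, can be sketched in a few lines of PyTorch. The snippet below is a minimal illustration under stated assumptions, not the authors' implementation: the class name LoLDULinear, the use of torch.linalg.lu on the pretrained weight, the rank truncation, and the zero initialization of the trainable diagonal are all choices made here for clarity; the official code in the repository above may differ.

    import torch
    import torch.nn as nn

    class LoLDULinear(nn.Module):
        """Hypothetical sketch: wraps a frozen nn.Linear and adds an
        LDU-parameterized update where only the diagonal is trainable."""

        def __init__(self, base_linear: nn.Linear, rank: int = 8):
            super().__init__()
            self.base = base_linear
            for p in self.base.parameters():
                p.requires_grad = False  # pretrained weight and bias stay frozen

            W = self.base.weight.data              # (out_features, in_features)
            r = min(rank, min(W.shape))

            # LU factorization with pivoting: W = P @ L @ U.
            P, L, U = torch.linalg.lu(W)
            d = torch.diagonal(U)[:r]
            L_r = (P @ L)[:, :r]                   # lower factor, kept frozen
            U_r = U[:r, :] / d.unsqueeze(1)        # unit-diagonal upper factor, frozen

            self.register_buffer("L_r", L_r)
            self.register_buffer("U_r", U_r)
            # Only r scalars are trainable; zero init keeps the adapted layer
            # identical to the pretrained one at the start of fine-tuning
            # (the paper's own initialization scheme may differ).
            self.delta = nn.Parameter(torch.zeros(r))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            update = self.L_r @ torch.diag(self.delta) @ self.U_r   # (out, in)
            return self.base(x) + x @ update.T

    # Usage: adapt a 768x768 projection; only `rank` scalars require gradients.
    layer = LoLDULinear(nn.Linear(768, 768), rank=16)
    print(sum(p.numel() for p in layer.parameters() if p.requires_grad))  # -> 16

With the L and U factors fixed, each adapted layer contributes only r trainable scalars, which is consistent with the abstract's claim of a drastic reduction in trainable parameters relative to LoRA-style adapters that train two full rank-r matrices.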