SSMLoRA: Enhancing Low-Rank Adaptation with State Space Model

DOI: 10.48550/arxiv.2502.04958 Publication Date: 2025-02-07
ABSTRACT
Fine-tuning is a key approach for adapting language models to specific downstream tasks, but updating all model parameters becomes impractical as model sizes increase. Parameter-Efficient Fine-Tuning (PEFT) methods, such as Low-Rank Adaptation (LoRA), address this challenge by introducing additional low-rank adaptation matrices into pre-trained weight matrices. However, LoRA's performance varies across different insertion points within the model, highlighting potential parameter inefficiency due to unnecessary insertions. To this end, we propose SSMLoRA (State Space Model Low-Rank Adaptation), an extension of LoRA that incorporates a State Space Model (SSM) to interconnect the low-rank matrices, ensuring that performance is maintained even with sparser insertions. This design allows the model not only to map inputs into a low-rank space for better feature extraction, but also to leverage the computations from the previous low-rank space. Our method achieves performance comparable to LoRA on the General Language Understanding Evaluation (GLUE) benchmark while using only half the parameters. Additionally, owing to its structure, SSMLoRA shows promise in handling tasks with longer input sequences. You can find our code here: https://github.com/yuhkalhic/SSMLoRA
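
The sketch below is a minimal illustration of the idea the abstract describes, not the authors' implementation (see their repository for that): LoRA-style low-rank adapters whose bottleneck activations are linked by a simple state-space recurrence h_k = A h_{k-1} + B z_k, so that each adapter can reuse computation carried over from the previous low-rank space. The class and parameter names (SSMLoRALinear, rank, state, etc.) are hypothetical, and threading the state across adapter insertion points is an assumption about how the interconnection works.

    # Minimal, hedged sketch: LoRA adapters interconnected by a state-space recurrence.
    import torch
    import torch.nn as nn


    class SSMLoRALinear(nn.Module):
        """Frozen linear layer plus a low-rank adapter whose bottleneck activation
        is fed through a state-space update, so the adapter can also use the state
        carried over from the previously adapted layer."""

        def __init__(self, in_features, out_features, rank=8):
            super().__init__()
            self.base = nn.Linear(in_features, out_features)
            self.base.weight.requires_grad_(False)    # pre-trained weight stays frozen
            self.base.bias.requires_grad_(False)
            self.down = nn.Linear(in_features, rank, bias=False)   # LoRA "A": d -> r
            self.up = nn.Linear(rank, out_features, bias=False)    # LoRA "B": r -> d
            nn.init.zeros_(self.up.weight)             # adapter starts as a no-op, as in LoRA
            # State-space parameters operating on the rank-r bottleneck (hypothetical init).
            self.A = nn.Parameter(torch.eye(rank) * 0.9)   # state transition
            self.B = nn.Parameter(torch.eye(rank))          # input matrix

        def forward(self, x, state=None):
            z = self.down(x)                           # project input into low-rank space
            if state is None:
                state = torch.zeros_like(z)
            state = state @ self.A.T + z @ self.B.T    # h_k = A h_{k-1} + B z_k
            # Read the adapter contribution out of the updated state and add it to the
            # frozen layer's output; pass the state on to the next adapted layer.
            return self.base(x) + self.up(state), state


    if __name__ == "__main__":
        layers = nn.ModuleList(SSMLoRALinear(64, 64, rank=8) for _ in range(4))
        x, state = torch.randn(2, 10, 64), None
        for layer in layers:                           # thread the SSM state through the adapters
            x, state = layer(x, state)
        print(x.shape)                                 # torch.Size([2, 10, 64])

Because the adapters start as a no-op (the up-projection is zero-initialized), the recurrence only begins to influence the output as the adapter and SSM parameters are trained, which mirrors how standard LoRA preserves the pre-trained model's behavior at initialization.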