Large Language Models in Software Security: A Survey of Vulnerability Detection Techniques and Insights
DOI: 10.48550/arxiv.2502.07049
Publication Date: 2025-02-10
AUTHORS (6)
ABSTRACT
Large Language Models (LLMs) are emerging as transformative tools for software vulnerability detection, addressing critical challenges in the security domain. Traditional methods, such as static and dynamic analysis, often falter due to inefficiencies, high false-positive rates, and the growing complexity of modern software systems. By leveraging their ability to analyze code structures, identify vulnerability patterns, and generate repair suggestions, LLMs, exemplified by models like GPT, BERT, and CodeBERT, present a novel and scalable approach to mitigating vulnerabilities. This paper provides a detailed survey of LLMs in vulnerability detection. It examines key aspects, including model architectures, applications, target languages, fine-tuning strategies, datasets, and evaluation metrics. We also scope current research problems, highlighting the strengths and weaknesses of existing approaches. Further, we address challenges such as cross-language vulnerability detection, multimodal data integration, and repository-level analysis. Based on these findings, we propose solutions to issues such as dataset scalability, model interpretability, and applications in low-resource scenarios. Our contributions are threefold: (1) a systematic review of how LLMs are applied to vulnerability detection; (2) an analysis of shared patterns and differences across studies, with a unified framework for understanding the field; and (3) a summary of open challenges and future directions. This work provides valuable insights for advancing LLM-based vulnerability detection. We maintain and regularly update a list of the latest selected papers at https://github.com/OwenSanzas/LLM-For-Vulnerability-Detection
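To make the prompt-based detection workflow the abstract describes concrete, here is a minimal sketch: a code snippet is wrapped in an auditing prompt and sent to a model, whose verdict is parsed into a binary flag. The model call is stubbed with a simple pattern check (flagging a string-formatted SQL query, a classic CWE-89 shape); in practice it would be a call to GPT or a fine-tuned CodeBERT-style classifier. All function names here are illustrative, not from the paper.

```python
def build_prompt(code: str) -> str:
    """Wrap a code snippet in a detection prompt for the model."""
    return (
        "You are a security auditor. Reply VULNERABLE or SAFE.\n"
        "Code under review:\n" + code
    )


def stub_llm(prompt: str) -> str:
    """Stand-in for a real LLM API call.

    Flags queries built by string formatting, a common SQL-injection
    pattern; a real model would reason over the full code structure.
    """
    if "execute(" in prompt and "%" in prompt:
        return "VULNERABLE: query built via string formatting (possible SQL injection)"
    return "SAFE"


def detect(code: str) -> bool:
    """Return True if the (stubbed) model flags the snippet."""
    return stub_llm(build_prompt(code)).startswith("VULNERABLE")


# String-formatted query: the stub flags it.
print(detect('cursor.execute("SELECT * FROM users WHERE id = %s" % uid)'))
# Parameterized query: the stub does not flag it.
print(detect("cursor.execute(query, (uid,))"))
```

The interesting design choice is the prompt/verdict boundary: swapping the stub for a real API client changes nothing else in the pipeline, which is how many of the surveyed prompt-based approaches are structured.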