Injecting Domain-Specific Knowledge into Large Language Models: A Comprehensive Survey
FOS: Computer and information sciences
Computer Science - Computation and Language (cs.CL)
DOI:
10.48550/arxiv.2502.10708
Publication Date:
2025-02-15
AUTHORS (7)
ABSTRACT
Large Language Models (LLMs) have demonstrated remarkable success in various tasks such as natural language understanding, text summarization, and machine translation. However, their general-purpose nature often limits their effectiveness in domain-specific applications that require specialized knowledge, such as healthcare, chemistry, or legal analysis. To address this, researchers have explored diverse methods to enhance LLMs by integrating domain-specific knowledge. In this survey, we provide a comprehensive overview of these methods, which we categorize into four key approaches: dynamic knowledge injection, static knowledge embedding, modular adapters, and prompt optimization. Each approach offers unique mechanisms to equip LLMs with domain expertise, balancing trade-offs between flexibility, scalability, and efficiency. We discuss how these methods enable LLMs to tackle domain-specific tasks, compare their advantages and disadvantages, evaluate domain-specific LLMs against general LLMs, and highlight the challenges and opportunities in this emerging field. For those interested in delving deeper into this area, we also summarize the commonly used datasets and benchmarks. To keep researchers updated on the latest studies, we maintain an open-source repository at: https://github.com/abilliyb/Knowledge_Injection_Survey_Papers, dedicated to documenting research in the field of specialized LLMs.
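
ILLUSTRATIVE EXAMPLE
To give a concrete feel for the first category named in the abstract, dynamic knowledge injection, the short Python sketch below shows the general pattern of retrieving domain facts and inserting them into a prompt at inference time, leaving model weights untouched. It is a minimal illustration only: the knowledge base, the word-overlap retriever, and the build_prompt template are hypothetical placeholders written for this example, not an API or method taken from the survey or the systems it covers.

# Toy sketch of dynamic knowledge injection (illustrative; not from the survey).
from typing import List

# Hypothetical domain knowledge base (e.g., short healthcare/chemistry/legal facts).
KNOWLEDGE_BASE: List[str] = [
    "Metformin is a first-line treatment for type 2 diabetes.",
    "Benzene is an aromatic hydrocarbon with the formula C6H6.",
    "In contract law, consideration is something of value exchanged by the parties.",
]

def retrieve(query: str, k: int = 2) -> List[str]:
    """Rank documents by simple word overlap with the query and return the top-k.
    Real systems would use dense embeddings or a search index; word overlap
    keeps this sketch dependency-free."""
    q_words = set(query.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str) -> str:
    """Inject retrieved domain knowledge into the prompt at inference time."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return (
        "Use the following domain facts to answer the question.\n"
        f"Facts:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )

if __name__ == "__main__":
    print(build_prompt("What is the first-line treatment for type 2 diabetes?"))

In contrast, static knowledge embedding and modular adapters change or extend the model's parameters, and prompt optimization searches for better instructions rather than injecting retrieved facts; the sketch above covers only the retrieval-at-inference pattern.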