Enhancing Large Language Models with Pseudo- and Multisource-Knowledge Graphs for Open-ended Question Answering
Concepts: Knowledge graph, Closed-ended question
DOI: 10.48550/arxiv.2402.09911
Publication Date: 2024-02-15
ABSTRACT
Mitigating the hallucinations of Large Language Models (LLMs) and enhancing them is a crucial task. Although some existing methods employ model self-enhancement techniques, they fall short of effectively addressing unknown factual hallucinations. Knowledge Graph (KG) enhancement approaches, meanwhile, fail to address generalization across different KG sources and the enhancement of open-ended answer questions simultaneously. To tackle these limitations, a framework that combines Pseudo-Graph Generation and Atomic Knowledge Verification is proposed. Enhancement of the LLM in an open-ended question-answering setting is implemented by leveraging Pseudo-Graph Generation, while Atomic Knowledge Verification utilizes atomic-level knowledge querying and verification to achieve generalizability under different KG sources. Compared to the baseline, this approach yields a minimum improvement of 11.5 in the ROUGE-L score for open-ended questions. For precise questions, we observe a minimum accuracy improvement of 7.5%. Moreover, we also demonstrate that the framework exhibits generalizability across different KG sources. In summary, our results pave the way for enhancing LLMs by incorporating Pseudo- and Multisource-KGs, particularly in the context of open-ended questions.
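The abstract names two components: the LLM first drafts a "pseudo-graph" of candidate knowledge triples for a question, and each triple is then verified individually (atomically) against one or more KG sources, which is what makes the check source-agnostic. Below is a minimal Python sketch of that two-stage idea under stated assumptions; the paper does not publish this interface, so generate_pseudo_graph, verify_atomic, and the toy KG contents are all illustrative stand-ins, not the authors' implementation.

```python
# Illustrative sketch of the two-stage idea described in the abstract:
# (1) Pseudo-Graph Generation: the LLM drafts candidate knowledge
#     triples relevant to the question.
# (2) Atomic Knowledge Verification: each triple is checked on its own
#     against the available KG sources; unsupported triples are dropped.
# All names and data here are assumptions for illustration only.

from typing import List, Tuple

Triple = Tuple[str, str, str]  # (subject, relation, object)

# Toy stand-in for multiple KG sources: source name -> set of triples.
KG_SOURCES = {
    "wikidata_toy": {
        ("Marie Curie", "field", "physics"),
        ("Marie Curie", "award", "Nobel Prize in Physics"),
    },
    "dbpedia_toy": {
        ("Marie Curie", "award", "Nobel Prize in Chemistry"),
    },
}

def generate_pseudo_graph(question: str) -> List[Triple]:
    """Stand-in for the LLM call that drafts a pseudo-graph of candidate
    triples. A real system would prompt an LLM here; we return canned
    output, including one hallucinated triple, for illustration."""
    return [
        ("Marie Curie", "award", "Nobel Prize in Physics"),
        ("Marie Curie", "award", "Fields Medal"),  # hallucinated
    ]

def verify_atomic(triple: Triple) -> bool:
    """Atomic verification: keep a triple if any KG source supports it.
    Querying at the level of single triples is what lets the same check
    run unchanged over different KG sources."""
    return any(triple in triples for triples in KG_SOURCES.values())

def answer(question: str) -> List[Triple]:
    pseudo_graph = generate_pseudo_graph(question)
    return [t for t in pseudo_graph if verify_atomic(t)]

if __name__ == "__main__":
    q = "Which prizes did Marie Curie win?"
    print(answer(q))  # the hallucinated Fields Medal triple is dropped
```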
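The reported gain for open-ended answers is measured with ROUGE-L, which scores the longest common subsequence between a generated answer and a reference. For context, a common way to compute it is with Google's rouge-score package (pip install rouge-score); the example strings below are made up and are not from the paper's evaluation data.

```python
# ROUGE-L between a reference answer and a generated answer, using the
# rouge-score package. Score has precision, recall, and fmeasure fields.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
reference = "Marie Curie won the Nobel Prize in Physics and in Chemistry."
generated = "Marie Curie was awarded Nobel Prizes in Physics and Chemistry."

score = scorer.score(reference, generated)["rougeL"]
print(f"ROUGE-L F1: {score.fmeasure:.3f}")
```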