Knowledge Graph Prompting for Multi-Document Question Answering

FOS: Computer and information sciences · Machine Learning (cs.LG) · Computation and Language (cs.CL) · Artificial Intelligence (cs.AI) · Information Retrieval (cs.IR)
DOI: 10.1609/aaai.v38i17.29889 Publication Date: 2024-03-25T12:08:20Z
ABSTRACT
The "pre-train, prompt, predict" paradigm of large language models (LLMs) has achieved remarkable success in open-domain question answering (OD-QA). However, few works explore this paradigm in multi-document question answering (MD-QA), a task demanding a thorough understanding of the logical associations among the contents and structures of different documents. To fill this crucial gap, we propose a Knowledge Graph Prompting (KGP) method to formulate the right context for prompting LLMs in MD-QA, which consists of a graph construction module and a graph traversal module. For graph construction, we create a knowledge graph (KG) over multiple documents with nodes symbolizing passages or document structures (e.g., pages/tables) and edges denoting semantic/lexical similarity between passages or document structural relations. For graph traversal, we design an LLM-based agent that navigates across nodes and gathers supporting passages to assist LLMs in MD-QA. The constructed graph serves as a global ruler that regulates the transitional space among passages and reduces retrieval latency. Concurrently, the traversal agent acts as a local navigator that gathers pertinent context to progressively approach the question and guarantee retrieval quality. Extensive experiments underscore the efficacy of KGP for MD-QA, signifying the potential of leveraging graphs to enhance prompt design and retrieval-augmented generation for LLMs. Our code: https://github.com/YuWVandy/KG-LLM-MDQA.
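The two modules described in the abstract can be illustrated with a minimal sketch: passages become graph nodes, edges link lexically similar passages, and a traversal policy walks the graph to collect supporting context. This is not the authors' implementation; the Jaccard word-overlap similarity and the greedy question-relevance scorer below are illustrative stand-ins for the paper's similarity models and LLM-based traversal agent, and all function names and the threshold/budget parameters are invented for this example.

```python
def jaccard(a: set, b: set) -> float:
    """Lexical overlap between two bags of words."""
    return len(a & b) / len(a | b) if a | b else 0.0


def build_graph(passages, threshold=0.2):
    """Graph construction: connect passages whose word overlap
    exceeds a threshold (stand-in for semantic/lexical edges)."""
    words = [set(p.lower().split()) for p in passages]
    graph = {i: [] for i in range(len(passages))}
    for i in range(len(passages)):
        for j in range(i + 1, len(passages)):
            if jaccard(words[i], words[j]) >= threshold:
                graph[i].append(j)
                graph[j].append(i)
    return graph


def traverse(graph, passages, question, start, budget=3):
    """Graph traversal: greedily visit the unvisited neighbor most
    relevant to the question (stand-in for the LLM-based agent)."""
    q = set(question.lower().split())
    visited, frontier = [start], start
    while len(visited) < budget:
        candidates = [n for n in graph[frontier] if n not in visited]
        if not candidates:
            break
        frontier = max(
            candidates,
            key=lambda n: jaccard(q, set(passages[n].lower().split())),
        )
        visited.append(frontier)
    return [passages[i] for i in visited]
```

In this sketch the graph plays the "global ruler" role (only neighbors of the current passage are candidates, shrinking the transitional space), while the greedy scorer plays the "local navigator" role of steering each hop toward the question.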