Reliable Natural Language Understanding with Large Language Models and Answer Set Programming
Answer Set Programming
Natural language programming
Programming language specification
DOI: 10.4204/eptcs.385.27
Publication Date: 2023-08-28
AUTHORS (4)
ABSTRACT
Humans understand language by extracting information (meaning) from sentences, combining it with existing commonsense knowledge, and then performing reasoning to draw conclusions. While large language models (LLMs) such as GPT-3 and ChatGPT are able to leverage patterns in the text to solve a variety of NLP tasks, they fall short on problems that require reasoning. They also cannot reliably explain the answers generated for a given question. In order to emulate humans better, we propose STAR, a framework that combines LLMs with Answer Set Programming (ASP). We show how LLMs can be used effectively to extract knowledge, represented as predicates, from language. Goal-directed ASP is then employed to reliably reason over this knowledge. We apply STAR to three different NLU tasks requiring reasoning: qualitative reasoning, mathematical reasoning, and goal-directed conversation. Our experiments reveal that STAR is able to bridge the reasoning gap in NLU tasks, leading to significant performance improvements, especially for smaller LLMs, i.e., LLMs with a smaller number of parameters. NLU applications developed using STAR are also explainable: along with the predicates generated, a justification in the form of a proof tree can be produced for a given output.
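The abstract describes a two-stage pipeline: an LLM extracts predicates from a sentence, and a goal-directed ASP solver reasons over them with commonsense rules. The following is a minimal Python sketch of that idea only; the hard-coded extractor stands in for an LLM call, the naive forward-chainer stands in for an ASP solver such as s(CASP), and the example sentence, rule, and predicate names are illustrative assumptions, not the paper's implementation.

```python
def llm_extract_predicates(sentence):
    """Stub for the LLM extraction step; a real system would prompt an LLM."""
    # Hypothetical example mapping, for illustration only.
    lookup = {
        "Alice dropped the glass.": [("dropped", "alice", "glass"),
                                     ("fragile", "glass")],
    }
    return lookup.get(sentence, [])

# Commonsense rules as (head, body) pairs; capitalised terms are
# variables, following the usual ASP convention.
RULES = [
    # breaks(Y) :- dropped(X, Y), fragile(Y).
    (("breaks", "Y"), [("dropped", "X", "Y"), ("fragile", "Y")]),
]

def unify(pattern, fact, bindings):
    """Match one body literal against a ground fact, extending bindings."""
    if len(pattern) != len(fact) or pattern[0] != fact[0]:
        return None
    b = dict(bindings)
    for p, f in zip(pattern[1:], fact[1:]):
        if p[0].isupper():              # variable: bind or check consistency
            if b.setdefault(p, f) != f:
                return None
        elif p != f:                    # constant: must match exactly
            return None
    return b

def derive(facts, rules):
    """Naive forward chaining until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            # Collect every variable binding that satisfies the whole body.
            stack = [{}]
            for literal in body:
                stack = [b2 for b in stack for fact in facts
                         if (b2 := unify(literal, fact, b)) is not None]
            for b in stack:
                new = (head[0],) + tuple(b[t] if t[0].isupper() else t
                                         for t in head[1:])
                if new not in facts:
                    facts.add(new)
                    changed = True
    return facts

facts = llm_extract_predicates("Alice dropped the glass.")
print(derive(facts, RULES))  # includes the derived fact ('breaks', 'glass')
```

A real ASP backend would also handle negation, defaults, and proof-tree justifications, which this toy chainer omits.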