Hallucination Detection and Hallucination Mitigation: An Investigation

DOI: 10.48550/arxiv.2401.08358
Publication Date: 2024-01-01
ABSTRACT
Large language models (LLMs), including ChatGPT, Bard, and Llama, have achieved remarkable successes over the last two years in a range of different applications. In spite of these successes, there exist concerns that limit the wide application of LLMs. A key problem is hallucination. Hallucination refers to the fact that, in addition to correct responses, LLMs can also generate seemingly correct but factually incorrect responses. This report aims to present a comprehensive review of the current literature on both hallucination detection and hallucination mitigation. We hope this report can serve as a good reference for engineers and researchers who are interested in LLMs and in applying them to real-world tasks.
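
To give a concrete flavor of the detection methods such a survey covers, below is a minimal, purely illustrative sketch of a sampling-based consistency check: the same prompt is answered several times by the model, and low agreement among the sampled answers is treated as a signal of possible hallucination. This is not a method from the paper; the function names, the lexical similarity measure, and the 0.5 threshold are assumptions made for this sketch, and the step of actually sampling answers from a model is left out.

import itertools
from difflib import SequenceMatcher


def consistency_score(sampled_answers: list[str]) -> float:
    """Average pairwise lexical similarity among independently sampled answers.

    Intuition: if the model "knows" the answer, repeated samples tend to agree;
    if it is guessing, the samples tend to diverge.
    """
    if len(sampled_answers) < 2:
        raise ValueError("Need at least two sampled answers to compare.")
    pairs = itertools.combinations(sampled_answers, 2)
    sims = [SequenceMatcher(None, a.lower(), b.lower()).ratio() for a, b in pairs]
    return sum(sims) / len(sims)


def flag_possible_hallucination(sampled_answers: list[str], threshold: float = 0.5) -> bool:
    """Flag a response as a possible hallucination when samples disagree.

    The 0.5 threshold is an arbitrary illustrative choice, not a value
    recommended by the surveyed literature.
    """
    return consistency_score(sampled_answers) < threshold


if __name__ == "__main__":
    # Samples that largely agree -> high consistency, typically not flagged.
    agree = [
        "The Eiffel Tower is in Paris, France.",
        "The Eiffel Tower is located in Paris.",
        "It is in Paris, France.",
    ]
    # Samples that contradict each other -> low consistency, typically flagged.
    disagree = [
        "The novel was published in 1953.",
        "It first appeared in 1972.",
        "The book came out in 1961.",
    ]
    print(flag_possible_hallucination(agree))
    print(flag_possible_hallucination(disagree))

Surveyed detection methods of this kind usually replace the simple lexical similarity with an entailment model or an LLM-based judge; the standard-library version above is used only to keep the sketch self-contained and runnable.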