A Human-Inspired Reading Agent with Gist Memory of Very Long Contexts

Keywords: Human memory
DOI: 10.48550/arxiv.2402.09727
Publication Date: 2024-02-15
ABSTRACT
Current Large Language Models (LLMs) are not only limited to some maximum context length, but also are not able to robustly consume long inputs. To address these limitations, we propose ReadAgent, an LLM agent system that increases effective context length up to 20x in our experiments. Inspired by how humans interactively read long documents, we implement ReadAgent as a simple prompting system that uses the advanced language capabilities of LLMs to (1) decide what content to store together in a memory episode, (2) compress those memory episodes into short episodic memories called gist memories, and (3) take actions to look up passages in the original text if ReadAgent needs to remind itself of relevant details to complete a task. We evaluate ReadAgent against baselines using retrieval methods, using the original long contexts, and using the gist memories. These evaluations are performed on three long-document reading comprehension tasks: QuALITY, NarrativeQA, and QMSum. ReadAgent outperforms the baselines on all three tasks while extending the effective context window by 3-20x.
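
The abstract describes ReadAgent as a three-step prompting loop: paginate the document into memory episodes, compress each episode into a gist memory, and interactively look up original pages when answering. Below is a minimal Python sketch of that loop, assuming a generic llm(prompt) -> text completion callable; the prompt wording, the word-count pagination heuristic, and all function names are illustrative assumptions rather than the paper's exact prompts (the paper, for instance, lets the LLM itself choose episode break points).

# Minimal sketch of the ReadAgent loop described in the abstract.
# `llm` is a hypothetical text-completion callable (prompt -> response);
# the prompts and the pagination heuristic below are assumptions,
# not the paper's verbatim method.
from typing import Callable, List

LLM = Callable[[str], str]

def paginate(text: str, max_words: int = 600) -> List[str]:
    """Step 1: group content into memory episodes ("pages").
    Approximates the paper's LLM-chosen break points with a simple
    paragraph-packing heuristic capped at `max_words` per episode."""
    episodes, current, count = [], [], 0
    for para in text.split("\n\n"):
        words = len(para.split())
        if current and count + words > max_words:
            episodes.append("\n\n".join(current))
            current, count = [], 0
        current.append(para)
        count += words
    if current:
        episodes.append("\n\n".join(current))
    return episodes

def gist(episodes: List[str], llm: LLM) -> List[str]:
    """Step 2: compress each episode into a short gist memory."""
    return [llm("Shorten the following passage, preserving its key "
                "information:\n\n" + ep) for ep in episodes]

def answer(question: str, episodes: List[str], gists: List[str],
           llm: LLM) -> str:
    """Step 3: answer from the gists, looking up original pages
    when the model decides it needs the full text."""
    gist_view = "\n".join(f"[Page {i}] {g}" for i, g in enumerate(gists))
    lookup = llm("Gist memory of a document:\n" + gist_view +
                 f"\n\nQuestion: {question}\n"
                 "Which page numbers (comma-separated) should be re-read "
                 "in full? Answer with numbers only, or 'none'.")
    pages = [int(t) for t in lookup.replace(",", " ").split()
             if t.isdigit() and int(t) < len(episodes)]
    context = gist_view + "".join(
        f"\n\n[Page {i}, full text]\n{episodes[i]}" for i in pages)
    return llm(context + f"\n\nQuestion: {question}\nAnswer:")

With a real llm callable, answer(question, episodes, gist(episodes, llm), llm) reproduces the gist-then-lookup pattern the abstract describes: the model first reasons over the compressed gists, then asks to re-read a few original pages before answering.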