Investigating Table-to-Text Generation Capabilities of Large Language Models in Real-World Information Seeking Scenarios

DOI: 10.18653/v1/2023.emnlp-industry.17
Publication Date: 2023-12-10T21:58:19Z
ABSTRACT
Tabular data is prevalent across various industries, necessitating significant time and effort for users to understand and manipulate it for their information-seeking purposes. The advancements in large language models (LLMs) have shown enormous potential to improve user efficiency. However, the adoption of LLMs in real-world applications for table information seeking remains underexplored. In this paper, we investigate the table-to-text capabilities of different LLMs using four datasets within two real-world information seeking scenarios. These include LogicNLG and our newly-constructed LoTNLG for data insight generation, along with FeTaQA and F2WTQ for query-based generation. We structure our investigation around three research questions, evaluating LLM performance on table-to-text generation, automated evaluation, and feedback generation, respectively. Experimental results indicate that the current high-performing LLM, specifically GPT-4, can effectively serve as a table-to-text generator, evaluator, and feedback generator, facilitating users' information-seeking purposes, although a significant gap still exists between other open-sourced models (e.g., Vicuna and LLaMA-2) and GPT-4. Our data and code are publicly available at https://github.com/yale-nlp/LLM-T2T.
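To make the two scenarios concrete, the sketch below shows how a table might be linearized and prompted for each task. This is a minimal illustration, not the paper's actual prompting setup: the prompt wording and the helpers linearize_table, build_insight_prompt, build_query_prompt, and the call_llm stub are all assumptions; see the linked repository for the authors' implementation.

    from typing import List

    def linearize_table(title: str, header: List[str], rows: List[List[str]]) -> str:
        """Flatten a table into a markdown-style string an LLM can read."""
        lines = [
            f"Table title: {title}",
            " | ".join(header),
            " | ".join(["---"] * len(header)),
        ]
        lines += [" | ".join(str(cell) for cell in row) for row in rows]
        return "\n".join(lines)

    def build_insight_prompt(table_text: str) -> str:
        """Prompt for the data-insight scenario (LogicNLG/LoTNLG-style)."""
        return (
            "Given the following table, write one faithful, logically "
            "entailed insight about its contents.\n\n" + table_text
        )

    def build_query_prompt(table_text: str, question: str) -> str:
        """Prompt for the query-based scenario (FeTaQA/F2WTQ-style)."""
        return (
            "Answer the question using only the table below, in one "
            f"fluent sentence.\n\n{table_text}\n\nQuestion: {question}"
        )

    def call_llm(prompt: str) -> str:
        """Hypothetical stub: replace with a real chat-completion call."""
        raise NotImplementedError("wire this to your LLM provider")

    if __name__ == "__main__":
        table = linearize_table(
            "2008 Olympic medal table (excerpt)",
            ["Nation", "Gold", "Silver", "Bronze"],
            [["China", "48", "22", "30"], ["United States", "36", "39", "37"]],
        )
        print(build_insight_prompt(table))
        print(build_query_prompt(table, "Which nation won the most gold medals?"))

The same linearized table feeds both prompt builders, which mirrors how one study setup can cover both the insight-generation and query-based scenarios with a single table representation.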