Table Meets LLM: Can Large Language Models Understand Structured Table Data? A Benchmark and Empirical Study

Cited by: 15
Authors
Sui, Yuan [1 ,4 ]
Zhou, Mengyu [2 ]
Zhou, Mingjie [3 ,4 ]
Han, Shi [2 ]
Zhang, Dongmei [2 ]
Affiliations
[1] Natl Univ Singapore, Singapore, Singapore
[2] Microsoft, Beijing, Peoples R China
[3] Univ Hong Kong, Hong Kong, Peoples R China
[4] Microsoft Res Asia, Beijing, Peoples R China
Source
PROCEEDINGS OF THE 17TH ACM INTERNATIONAL CONFERENCE ON WEB SEARCH AND DATA MINING, WSDM 2024 | 2024
Keywords
large language models; semi-structured data; structural understanding capabilities; benchmark
DOI
10.1145/3616855.3635752
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Large language models (LLMs) are becoming attractive as few-shot reasoners for solving Natural Language (NL)-related tasks. However, there is still much to learn about how well LLMs understand structured data, such as tables. Although tables can be used as input to LLMs with serialization, there is a lack of comprehensive studies examining whether LLMs can truly comprehend such data. In this paper, we try to understand this by designing a benchmark to evaluate the structural understanding capabilities (SUC) of LLMs. The benchmark comprises seven tasks, each with its own unique challenges, e.g., cell lookup, row retrieval, and size detection. We perform a series of evaluations on GPT-3.5 and GPT-4 and find that performance varies depending on several input choices, including table input format, content order, role prompting, and partition marks. Drawing on the insights gained from the benchmark evaluations, we propose self-augmentation for effective structural prompting, such as critical value / range identification using the internal knowledge of LLMs. When combined with carefully chosen input choices, these structural prompting methods lead to promising improvements in LLM performance on a variety of tabular tasks, e.g., TabFact (+2.31%), HybridQA (+2.13%), SQA (+2.72%), Feverous (+0.84%), and ToTTo (+5.68%). We believe that our open-source benchmark and proposed prompting methods can serve as a simple yet generic selection for future research.
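The table serialization and prompting step the abstract refers to can be sketched as follows. This is a minimal illustrative example, assuming a markdown-style input format with explicit partition marks; the exact formats and role prompts the paper compares are not reproduced here, and `serialize_table` / `build_prompt` are hypothetical helper names.

```python
def serialize_table(headers, rows):
    """Serialize a table into a markdown-style string with explicit
    partition marks ('|'), one plausible input format for an LLM."""
    lines = ["| " + " | ".join(headers) + " |"]
    lines.append("|" + "---|" * len(headers))  # header/body separator
    for row in rows:
        lines.append("| " + " | ".join(str(cell) for cell in row) + " |")
    return "\n".join(lines)

def build_prompt(table_text, question):
    """Combine a serialized table and a task question into one prompt.
    The role-prompting phrasing below is an assumption, not the paper's."""
    return (
        "You are a helpful assistant that answers questions about tables.\n"
        f"Table:\n{table_text}\n"
        f"Question: {question}\nAnswer:"
    )

# Example: a tiny table and a lookup-style question.
headers = ["City", "Population"]
rows = [["Singapore", 5917600], ["Beijing", 21893095]]
print(build_prompt(serialize_table(headers, rows),
                   "Which city has the larger population?"))
```

The benchmark's finding is that choices at exactly this layer (delimiter style, content order, role prompt) measurably shift downstream accuracy, which is why the serialization format is treated as an experimental variable rather than a fixed preprocessing step.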
Pages: 645-654
Number of pages: 10