Unifying Structure Reasoning and Language Pre-Training for Complex Reasoning Tasks

Cited by: 2
Authors
Wang, Siyuan [1]
Wei, Zhongyu [1,2]
Xu, Jiarong [3]
Li, Taishan [4]
Fan, Zhihao [1]
Affiliations
[1] Fudan Univ, Sch Data Sci, Shanghai 200433, Peoples R China
[2] Fudan Univ, Res Inst Intelligent & Complex Syst, Shanghai 200433, Peoples R China
[3] Fudan Univ, Sch Management, Shanghai 200433, Peoples R China
[4] ShanghaiTech Univ, Sch Informat Sci & Technol, Shanghai 201210, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Cognition; Task analysis; Semantics; Films; Speech processing; Context modeling; Data models; Structure reasoning skill; language model pre-training; complex reasoning;
DOI
10.1109/TASLP.2023.3325973
CLC number
O42 [Acoustics];
Discipline classification code
070206; 082403;
Abstract
Recent pre-trained language models (PLMs) equipped with foundation reasoning skills have shown remarkable performance on downstream complex tasks. However, the important skill of structure reasoning, which involves modeling the implicit structure information within text and performing explicit logical reasoning over it to deduce a conclusion, has rarely been studied. This paper proposes a unified learning framework that combines explicit structure reasoning and language pre-training to endow PLMs with the structure reasoning skill. The framework first identifies several elementary structures within a context to construct structured queries, then performs step-by-step reasoning along these queries to identify the answer entity. The fusion of textual semantics and structure reasoning is achieved by using contextual representations learned by PLMs to initialize the representation space of structures and performing stepwise reasoning in this semantic representation space. Experimental results on four datasets demonstrate that the proposed model achieves significant improvements on complex reasoning tasks involving diverse structures, transfers to downstream tasks with limited training data, and remains effective for complex reasoning over the knowledge graph (KG) modality.
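The stepwise reasoning over structured queries described in the abstract can be illustrated with a small sketch. The Python snippet below is a minimal, assumption-laden illustration rather than the authors' implementation: the entity and relation names, the dimensionality, and the translation-style step operator are all hypothetical stand-ins, and random vectors take the place of the contextual representations a PLM would provide.

# Minimal sketch (not the authors' code): stepwise reasoning over a structured
# query, with entity/relation vectors standing in for PLM contextual
# representations. All names, relations, and dimensions are illustrative.
import numpy as np

rng = np.random.default_rng(0)
DIM = 64

# Placeholder vectors; in the paper's setting these would be initialized from
# contextual representations produced by the pre-trained language model.
entity_emb = {e: rng.normal(size=DIM) for e in ["film_x", "director_y", "award_z"]}
relation_emb = {r: rng.normal(size=DIM) for r in ["directed_by", "won"]}

def step(state, relation):
    # One reasoning step: translate the query state by a relation vector
    # (a TransE-style operator, chosen here purely for illustration).
    return state + relation_emb[relation]

def answer(start_entity, relation_path):
    # Follow the structured query step by step, then return the entity
    # whose vector is closest to the final query state.
    state = entity_emb[start_entity]
    for rel in relation_path:
        state = step(state, rel)
    scores = {e: -np.linalg.norm(state - v) for e, v in entity_emb.items()}
    return max(scores, key=scores.get)

# Example query: "Which award did the director of film_x win?"
print(answer("film_x", ["directed_by", "won"]))

In the actual framework, the step operator and the structure representation space would be learned jointly with the language model during pre-training, rather than fixed as above.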
Pages: 1586-1595
Page count: 10
Related Papers
23 items in total
  • [1] To Boost Zero-Shot Generalization for Embodied Reasoning With Vision-Language Pre-Training
    Su, Ke
    Zhang, Xingxing
    Zhang, Siyang
    Zhu, Jun
    Zhang, Bo
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2024, 33 : 5370 - 5381
  • [2] Knowledge Graph as Pre-Training Corpus for Structural Reasoning via Multi-Hop Linearization
    Kim, Wooyoung
    Jung, Haemin
    Kim, Wooju
    IEEE ACCESS, 2025, 13 : 7273 - 7283
  • [3] Lightweight Model Pre-Training via Language Guided Knowledge Distillation
    Li, Mingsheng
    Zhang, Lin
    Zhu, Mingzhen
    Huang, Zilong
    Yu, Gang
    Fan, Jiayuan
    Chen, Tao
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 10720 - 10730
  • [4] Training Complements for Belief Reasoning in Developmental Language Disorder
    Durrleman, Stephanie
    Delage, Helene
    JOURNAL OF SPEECH LANGUAGE AND HEARING RESEARCH, 2020, 63 (06): 1861 - 1877
  • [5] Simultaneously Training and Compressing Vision-and-Language Pre-Training Model
    Qi, Qiaosong
    Zhang, Aixi
    Liao, Yue
    Sun, Wenyu
    Wang, Yongliang
    Li, Xiaobo
    Liu, Si
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 8194 - 8203
  • [6] Focus and Align: Learning Tube Tokens for Video-Language Pre-Training
    Zhu, Yongqing
    Li, Xiangyang
    Zheng, Mao
    Yang, Jiahao
    Wang, Zihan
    Guo, Xiaoqian
    Chai, Zifeng
    Yuan, Yuchen
    Jiang, Shuqiang
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 8036 - 8050
  • [7] IMITATE: Clinical Prior Guided Hierarchical Vision-Language Pre-Training
    Liu, Che
    Cheng, Sibo
    Shi, Miaojing
    Shah, Anand
    Bai, Wenjia
    Arcucci, Rossella
    IEEE TRANSACTIONS ON MEDICAL IMAGING, 2025, 44 (01) : 519 - 529
  • [8] Corruption Is Not All Bad: Incorporating Discourse Structure Into Pre-Training via Corruption for Essay Scoring
    Mim, Farjana Sultana
    Inoue, Naoya
    Reisert, Paul
    Ouchi, Hiroki
    Inui, Kentaro
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2021, 29 : 2202 - 2215
  • [9] Gradual Syntactic Label Replacement for Language Model Pre-Training
    Wang, Yile
    Zhang, Yue
    Li, Peng
    Liu, Yang
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2024, 32 : 486 - 496
  • [10] Complex meaning and broad reasoning: some insights on Philosophy of Language
    da Silva Penz, Yuri Fernando
    Tramunt Ibanos, Ana Maria
    LETRAS DE HOJE-ESTUDOS E DEBATES EM LINGUISTICA LITERATURA E LINGUA PORTUGUESA, 2020, 55 (03): 366 - 377