Unifying Structure Reasoning and Language Pre-Training for Complex Reasoning Tasks

Cited by: 2
Authors
Wang, Siyuan [1 ]
Wei, Zhongyu [1 ,2 ]
Xu, Jiarong [3 ]
Li, Taishan [4 ]
Fan, Zhihao [1 ]
Affiliations
[1] Fudan Univ, Sch Data Sci, Shanghai 200433, Peoples R China
[2] Fudan Univ, Res Inst Intelligent & Complex Syst, Shanghai 200433, Peoples R China
[3] Fudan Univ, Sch Management, Shanghai 200433, Peoples R China
[4] ShanghaiTech Univ, Sch Informat Sci & Technol, Shanghai 201210, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Cognition; Task analysis; Semantics; Films; Speech processing; Context modeling; Data models; Structure reasoning skill; language model pre-training; complex reasoning;
DOI
10.1109/TASLP.2023.3325973
CLC Number (Chinese Library Classification)
O42 [Acoustics];
Discipline Classification Codes
070206; 082403;
Abstract
Recent pre-trained language models (PLMs) equipped with foundational reasoning skills have shown remarkable performance on downstream complex tasks. However, the important skill of structure reasoning, which involves modeling the implicit structure information within text and performing explicit logical reasoning over it to deduce a conclusion, has rarely been studied. This paper proposes a unified learning framework that combines explicit structure reasoning and language pre-training to endow PLMs with the structure reasoning skill. The framework first identifies several elementary structures within a context to construct structured queries, then performs step-by-step reasoning along these queries to identify the answer entity. Textual semantics and structure reasoning are fused by using the contextual representations learned by the PLM to initialize the representation space of structures and performing stepwise reasoning in this semantic representation space. Experimental results on four datasets demonstrate that the proposed model achieves significant improvements on complex reasoning tasks involving diverse structures, transfers to downstream tasks with limited training data, and is effective for complex reasoning over the knowledge graph (KG) modality.
Pages: 1586-1595
Number of pages: 10
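
The abstract describes a concrete mechanism: contextual representations from the PLM initialize the representation space of structures, and the model then reasons step by step along a structured query to reach the answer entity. Below is a minimal sketch of that idea, not the authors' released implementation; the encoder checkpoint, the two-relation inventory, the mean-pooled span initialization, and the ReLU-activated per-relation linear projections are all illustrative assumptions.

```python
# Sketch: PLM contextual states initialize entity/structure embeddings;
# a structured query is then traversed one relation step at a time.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
hidden = encoder.config.hidden_size  # 768 for bert-base

# Hypothetical relation inventory; the paper builds queries from structures
# identified in the context rather than from a fixed hand-written list.
relations = ["directed_by", "starred_in"]
rel_proj = nn.ModuleDict({r: nn.Linear(hidden, hidden) for r in relations})

def span_embedding(text: str, start: int, end: int) -> torch.Tensor:
    """Initialize an entity's structure embedding from the PLM's token
    states, mean-pooled over the entity's character span (an assumption)."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():  # frozen here only for brevity
        states = encoder(**inputs).last_hidden_state[0]  # (seq_len, hidden)
    tok_start = inputs.char_to_token(start)   # map char span -> token span
    tok_end = inputs.char_to_token(end - 1)
    return states[tok_start:tok_end + 1].mean(dim=0)

def reason(anchor: torch.Tensor, path: list[str]) -> torch.Tensor:
    """Step-by-step reasoning: project the current state through each
    relation's transform along the structured query."""
    state = anchor
    for rel in path:
        state = torch.relu(rel_proj[rel](state))
    return state  # matched against candidate entity embeddings for the answer

context = "Christopher Nolan directed the film Inception."
anchor = span_embedding(context, 0, len("Christopher Nolan"))
answer_state = reason(anchor, ["directed_by"])
print(answer_state.shape)  # torch.Size([768])
```

In the actual framework the structure representations and the language model are learned jointly during pre-training; the frozen encoder and hand-picked relations above exist only to make the stepwise traversal over the semantic representation space concrete.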