Structure-aware Fine-tuning of Sequence-to-sequence Transformers for Transition-based AMR Parsing

Cited by: 0
Authors
Zhou, Jiawei [1]
Naseem, Tahira [2]
Astudillo, Ramon Fernandez [2]
Lee, Young-Suk [2]
Florian, Radu [2]
Roukos, Salim [2]
Affiliations
[1] Harvard Univ, Cambridge, MA 02138 USA
[2] IBM Res, Armonk, NY USA
Keywords:
DOI: Not available
Chinese Library Classification: TP18 [Artificial intelligence theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
Predicting linearized Abstract Meaning Representation (AMR) graphs using pre-trained sequence-to-sequence Transformer models has recently led to large improvements on AMR parsing benchmarks. These parsers are simple and avoid explicit modeling of structure but lack desirable properties such as graph well-formedness guarantees or built-in graph-sentence alignments. In this work we explore the integration of general pre-trained sequence-to-sequence language models and a structure-aware transition-based approach. We depart from a pointer-based transition system and propose a simplified transition set, designed to better exploit pre-trained language models for structured fine-tuning. We also explore modeling the parser state within the pre-trained encoder-decoder architecture and different vocabulary strategies for the same purpose. We provide a detailed comparison with recent progress in AMR parsing and show that the proposed parser retains the desirable properties of previous transition-based approaches, while being simpler and reaching the new parsing state of the art for AMR 2.0, without the need for graph re-categorization.
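The abstract describes decoding a sequence of transitions, including pointer-based arc actions, on top of a pre-trained encoder-decoder. The sketch below is a minimal, hypothetical illustration of how such a transition sequence could deterministically build a graph while keeping token-level alignments; the action names (SHIFT, NODE, LA, RA), their semantics, and the example trace are simplifying assumptions made for illustration and are not the paper's actual transition set or implementation.

```python
# Toy transition executor for an AMR-style parser state (illustrative only).
class ParserState:
    def __init__(self, tokens):
        self.tokens = tokens   # source sentence tokens
        self.cursor = 0        # index of the source token currently attended
        self.nodes = []        # generated concepts as (name, aligned_token_index)
        self.edges = []        # arcs as (head_node_index, label, dep_node_index)

    def apply(self, action):
        kind, _, arg = action.partition(" ")
        if kind == "SHIFT":
            # Advance to the next source token (implicit graph-sentence alignment).
            self.cursor += 1
        elif kind == "NODE":
            # Emit a concept aligned to the current cursor position.
            self.nodes.append((arg, self.cursor))
        elif kind in ("LA", "RA"):
            # Pointer-style arc "LA label,ptr": connect the newest node with a
            # previously generated node identified by the pointer index.
            label, ptr = arg.split(",")
            newest = len(self.nodes) - 1
            head, dep = (newest, int(ptr)) if kind == "LA" else (int(ptr), newest)
            self.edges.append((head, label, dep))
        return self


state = ParserState(["The", "boy", "wants", "to", "sleep"])
for a in ["SHIFT", "NODE boy", "SHIFT", "NODE want-01", "LA ARG0,0",
          "SHIFT", "SHIFT", "NODE sleep-01", "RA ARG1,1", "LA ARG0,0"]:
    state.apply(a)
print(state.nodes)  # [('boy', 1), ('want-01', 2), ('sleep-01', 4)]
print(state.edges)  # [(1, 'ARG0', 0), (1, 'ARG1', 2), (2, 'ARG0', 0)]
```

Running the toy trace for "The boy wants to sleep" yields the concepts boy, want-01, and sleep-01 and the edges want-01 -ARG0-> boy, want-01 -ARG1-> sleep-01, and sleep-01 -ARG0-> boy, with each node retaining the cursor position at which it was produced as a built-in alignment.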
Pages: 6279-6290
Page count: 12