Neural AMR: Sequence-to-Sequence Models for Parsing and Generation

Cited by: 111
Authors
Konstas, Ioannis [1]
Iyer, Srinivasan [1]
Yatskar, Mark [1]
Choi, Yejin [1]
Zettlemoyer, Luke [1,2]
Affiliations
[1] Univ Washington, Paul G Allen Sch Comp Sci & Engn, Seattle, WA 98195 USA
[2] Allen Inst Artificial Intelligence, Seattle, WA USA
Funding
National Science Foundation (US);
Keywords
DOI
10.18653/v1/P17-1014
Chinese Library Classification
TP39 [Computer Applications];
Subject Classification Codes
081203; 0835;
Abstract
Sequence-to-sequence models have shown strong performance across a broad range of applications. However, their application to parsing and generating text using Abstract Meaning Representation (AMR) has been limited by the relatively small amount of labeled data and the non-sequential nature of AMR graphs. We present a novel training procedure that lifts this limitation using millions of unlabeled sentences and careful preprocessing of the AMR graphs. For AMR parsing, our model achieves a competitive score of 62.1 SMATCH, the current best reported without significant use of external semantic resources. For AMR generation, our model establishes a new state of the art of 33.8 BLEU. We present extensive ablative and qualitative analysis, including strong evidence that sequence-based AMR models are robust to ordering variations in graph-to-sequence conversion.
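The graph-to-sequence conversion mentioned in the abstract can be illustrated with a short sketch. The Python snippet below is a hypothetical, minimal linearization (not the authors' released preprocessing code): it tokenizes a PENMAN-notation AMR string and drops the variable names, yielding the kind of flat token sequence a sequence-to-sequence model can consume.

import re

def linearize_amr(penman: str) -> list[str]:
    """Flatten a PENMAN-notation AMR string into a token sequence.

    Hypothetical sketch of graph-to-sequence conversion: variable
    names (e.g. the "w" in "(w / want-01 ...)") are dropped so the
    output keeps only concepts, role labels, and brackets. This is
    an illustration, not the paper's actual preprocessing pipeline.
    """
    # Split into parentheses, role labels (:ARG0, :mod, ...), and
    # remaining symbols (variables, concepts, the "/" separator).
    tokens = re.findall(r"\(|\)|:[A-Za-z0-9-]+|[^\s()]+", penman)
    out = []
    for i, tok in enumerate(tokens):
        if tok == "/":
            continue  # drop the variable/concept separator
        # A variable is the token right after "(" and right before "/";
        # re-entrant uses of a variable elsewhere are kept as-is.
        if out and out[-1] == "(" and i + 1 < len(tokens) and tokens[i + 1] == "/":
            continue
        out.append(tok)
    return out

if __name__ == "__main__":
    amr = "(w / want-01 :ARG0 (b / boy) :ARG1 (g / go-01 :ARG0 b))"
    print(" ".join(linearize_amr(amr)))
    # ( want-01 :ARG0 ( boy ) :ARG1 ( go-01 :ARG0 b ) )

In the paper's setting, flattened graphs of this kind are paired with sentences so that standard sequence-to-sequence models can be trained in both directions, graph-to-text (generation) and text-to-graph (parsing).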
Pages: 146-157
Number of pages: 12
Related Papers
50 records in total
  • [31] Deep Reinforcement Learning for Sequence-to-Sequence Models
    Keneshloo, Yaser
    Shi, Tian
    Ramakrishnan, Naren
    Reddy, Chandan K.
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2020, 31 (07) : 2469 - 2489
  • [32] Multilingual Sequence-to-Sequence Models for Hebrew NLP
    Eyal, Matan
    Noga, Hila
    Aharoni, Roee
    Szpektor, Idan
    Tsarfaty, Reut
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2023), 2023, : 7700 - 7708
  • [33] Sequence-to-Sequence Models for Automated Text Simplification
    Botarleanu, Robert-Mihai
    Dascalu, Mihai
    Crossley, Scott Andrew
    McNamara, Danielle S.
    ARTIFICIAL INTELLIGENCE IN EDUCATION (AIED 2020), PT II, 2020, 12164 : 31 - 36
  • [34] Sequence-to-Sequence Models for Emphasis Speech Translation
    Quoc Truong Do
    Sakti, Sakriani
    Nakamura, Satoshi
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2018, 26 (10) : 1873 - 1883
  • [35] On Evaluation of Adversarial Perturbations for Sequence-to-Sequence Models
    Michel, Paul
    Li, Xian
    Neubig, Graham
    Pino, Juan Miguel
    2019 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES (NAACL HLT 2019), VOL. 1, 2019, : 3103 - 3114
  • [36] A Comparison of Sequence-to-Sequence Models for Speech Recognition
    Prabhavalkar, Rohit
    Rao, Kanishka
    Sainath, Tara N.
    Li, Bo
    Johnson, Leif
    Jaitly, Navdeep
    18TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2017), VOLS 1-6: SITUATED INTERACTION, 2017, : 939 - 943
  • [37] Learning Damage Representations with Sequence-to-Sequence Models
    Yang, Qun
    Shen, Dejian
    SENSORS, 2022, 22 (02)
  • [38] On Sparsifying Encoder Outputs in Sequence-to-Sequence Models
    Zhang, Biao
    Titov, Ivan
    Sennrich, Rico
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL-IJCNLP 2021, 2021, : 2888 - 2900
  • [39] Sequence-to-sequence Models for Cache Transition Systems
    Peng, Xiaochang
    Song, Linfeng
    Gildea, Daniel
    Satta, Giorgio
    PROCEEDINGS OF THE 56TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL), VOL 1, 2018, : 1842 - 1852
  • [40] Context Dependent Trajectory Generation using Sequence-to-Sequence Models for Robotic Toilet Cleaning
    Yang, Pin-Chu
    Koganti, Nishanth
    Ricardez, Gustavo Alfonso Garcia
    Yamamoto, Masaki
    Takamatsu, Jun
    Ogasawara, Tsukasa
    2020 29TH IEEE INTERNATIONAL CONFERENCE ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION (RO-MAN), 2020, : 932 - 937