A neural parser as a direct classifier for head-final languages

Cited by: 0
Authors
Kanayama, Hiroshi [1 ]
Muraoka, Masayasu [1 ]
Kohita, Ryosuke [1 ]
Affiliation
[1] IBM Res, Tokyo, Japan
Source
RELEVANCE OF LINGUISTIC STRUCTURE IN NEURAL ARCHITECTURES FOR NLP | 2018
Keywords
DOI
None available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper demonstrates a neural parser implementation suitable for consistently head-final languages such as Japanese. Unlike the transition- and graph-based algorithms used in most state-of-the-art parsers, our parser directly selects the head word of each dependent from a limited set of candidates. This approach drastically simplifies the model, making the output of the neural model easy to interpret. Moreover, by exploiting grammatical knowledge to restrict possible modification types, we can control the parser's output to reduce specific errors without additional annotated corpora. The neural parser performed well on both conventional Japanese corpora and the Japanese portion of the Universal Dependencies corpus, and comparison with a conventional non-neural model showed the advantages of distributed representations.
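The core idea in the abstract — treating head selection as a direct classification over a restricted candidate set — can be sketched in a few lines. In a consistently head-final language, every head lies to the right of its dependent, so the candidate set for each token is just the following tokens. The `toy_score` function below is a hypothetical stand-in for the paper's neural scorer, and the candidate window size is an illustrative assumption, not a detail from the paper.

```python
def select_heads(tokens, score, max_candidates=8):
    """For each token, pick the highest-scoring head among the
    following tokens (up to max_candidates candidates) -- a
    simplified version of direct head classification for a
    head-final language. The last token gets head -1 (root)."""
    heads = []
    for i in range(len(tokens)):
        candidates = range(i + 1, min(len(tokens), i + 1 + max_candidates))
        if not candidates:
            heads.append(-1)  # sentence-final token is the root
            continue
        # Direct classification: argmax over the candidate heads.
        best = max(candidates, key=lambda j: score(tokens, i, j))
        heads.append(best)
    return heads

def toy_score(tokens, dep, head):
    """Dummy scorer that prefers the nearest following token;
    the real model would score (dependent, candidate) pairs with
    a neural network over distributed representations."""
    return -(head - dep)

# Toy Japanese sentence: each token attaches to its nearest follower.
print(select_heads(["私", "は", "本", "を", "読む"], toy_score))
```

Grammatical restrictions on modification types, as described in the abstract, would correspond to filtering the candidate set before the argmax rather than changing the trained model.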
Pages: 38-46
Page count: 9