Explainable natural language processing with matrix product states

Cited by: 3
Authors
Tangpanitanon, Jirawat [1 ,2 ]
Mangkang, Chanatip [3 ]
Bhadola, Pradeep [4 ]
Minato, Yuichiro [5 ]
Angelakis, Dimitris G. [6 ,7 ]
Chotibut, Thiparat [3 ]
Affiliations
[1] Quantum Technol Fdn Thailand, Bangkok, Thailand
[2] Minist Higher Educ Sci Res & Innovat, Thailand Ctr Excellence Phys, Bangkok, Thailand
[3] Chulalongkorn Univ, Fac Sci, Dept Phys, Chula Intelligent & Complex Syst, Bangkok, Thailand
[4] Mahidol Univ, Ctr Theoret Phys & Nat Philosophy, Nakhonsawan Studiorum Adv Studies, Nakhonsawan Campus, Khao Thong, Thailand
[5] Blueqat Inc, Tokyo, Japan
[6] Tech Univ Crete, Sch Elect & Comp Engn, Khania, Greece
[7] Natl Univ Singapore, Ctr Quantum Technol, Singapore, Singapore
Source
NEW JOURNAL OF PHYSICS | 2022, Vol. 24, No. 05
Keywords
matrix product state; entanglement entropy; entanglement spectrum; quantum machine learning; natural language processing; recurrent neural networks; TENSOR NETWORKS; QUANTUM;
D O I
10.1088/1367-2630/ac6232
Chinese Library Classification
O4 [Physics];
Subject Classification (China)
0702 ;
Abstract
Despite empirical successes of recurrent neural networks (RNNs) in natural language processing (NLP), theoretical understanding of RNNs is still limited due to their intrinsically complex non-linear computations. We systematically analyze RNNs' behaviors in a ubiquitous NLP task, the sentiment analysis of movie reviews, via the mapping between a class of RNNs called recurrent arithmetic circuits (RACs) and a matrix product state. Using the von Neumann entanglement entropy (EE) as a proxy for information propagation, we show that single-layer RACs possess a maximum information propagation capacity, reflected by the saturation of the EE. Enlarging the bond dimension beyond the EE saturation threshold does not increase model prediction accuracies, so a minimal model that best estimates the data statistics can be inferred. Although the saturated EE is smaller than the maximum EE allowed by the area law, our minimal model still achieves ~99% training accuracies on realistic sentiment analysis data sets. Thus, low EE is not grounds for rejecting single-layer RACs for NLP. Contrary to a common belief that long-range information propagation is the main source of RNNs' successes, we show that single-layer RACs harness high expressiveness from the subtle interplay between the information propagation and the word vector embeddings. Our work sheds light on the phenomenology of learning in RACs, and more generally on the explainability of RNNs for NLP, using tools from many-body quantum physics.
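The abstract's key diagnostic, the von Neumann entanglement entropy across a bipartition and its saturation bound set by the bond dimension, can be illustrated with a short sketch (not the authors' code; the function name and the random rank-limited test state are illustrative assumptions). Reshaping a pure state vector into a left/right matrix and taking its SVD yields the Schmidt coefficients p_k, from which S = -Σ p_k log p_k; a state of Schmidt rank at most χ obeys S ≤ log χ, mirroring the bond-dimension cap on information propagation in an MPS.

```python
import numpy as np

def entanglement_entropy(state, dim_left, dim_right):
    """Von Neumann entanglement entropy of a pure state across a bipartition.

    Reshape the state vector into a (dim_left, dim_right) matrix; the squared
    singular values are the Schmidt coefficients p_k, and S = -sum_k p_k log p_k.
    """
    matrix = state.reshape(dim_left, dim_right)
    singular_values = np.linalg.svd(matrix, compute_uv=False)
    p = singular_values**2
    p = p[p > 1e-12]  # drop numerically-zero Schmidt coefficients
    return -np.sum(p * np.log(p))

# Illustration: a random state of Schmidt rank at most chi. Its EE cannot
# exceed log(chi), the same bound a bond dimension chi imposes in an MPS.
rng = np.random.default_rng(0)
chi, left, right = 4, 8, 8
A = rng.normal(size=(left, chi))
B = rng.normal(size=(chi, right))
psi = (A @ B).reshape(-1)
psi /= np.linalg.norm(psi)

S = entanglement_entropy(psi, left, right)
```

In the paper's setting, observing that S stops growing as χ is enlarged (the EE saturation in the abstract) signals that a larger bond dimension adds no further predictive capacity.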
Pages: 16
Related Papers
50 in total
  • [41] Techniques Comparison for Natural Language Processing
    Iosifova, Olena
    Iosifov, Ievgen
    Rolik, Oleksandr
    Sokolov, Volodymyr
    MOMLET+DS 2020: MODERN MACHINE LEARNING TECHNOLOGIES AND DATA SCIENCE WORKSHOP, 2020, 2631
  • [42] Natural language processing in the web era
    Basili, Roberto
    Magnini, Bernardo
    INTELLIGENZA ARTIFICIALE, 2012, 6 (02) : 117 - 119
  • [43] Fraud detection with natural language processing
    Boulieris, Petros
    Pavlopoulos, John
    Xenos, Alexandros
    Vassalos, Vasilis
    MACHINE LEARNING, 2024, 113 (08) : 5087 - 5108
  • [44] Upon Dynamic Natural Language Processing
    Crisan, Marius
    2009 INTERNATIONAL CONFERENCE ON NEW TRENDS IN INFORMATION AND SERVICE SCIENCE (NISS 2009), VOLS 1 AND 2, 2009, : 487 - 492
  • [45] Paradigm Shift in Natural Language Processing
    Sun, Tian-Xiang
    Liu, Xiang-Yang
    Qiu, Xi-Peng
    Huang, Xuan-Jing
    MACHINE INTELLIGENCE RESEARCH, 2022, 19 (03) : 169 - 183
  • [47] Natural language processing: a prolog perspective
    Bitter, Christian
    Elizondo, David A.
    Yang, Yingjie
    Artificial Intelligence Review, 2010, 33 : 151 - 173
  • [48] Computing with Words in Natural Language Processing
    Huseynova, Farida
    10TH INTERNATIONAL CONFERENCE ON THEORY AND APPLICATION OF SOFT COMPUTING, COMPUTING WITH WORDS AND PERCEPTIONS - ICSCCW-2019, 2020, 1095 : 621 - 625
  • [49] The journey from natural language processing to large language models: key insights for radiologists
    Fanni, Salvatore Claudio
    Tumminello, Lorenzo
    Formica, Valentina
    Caputo, Francesca Pia
    Aghakhanyan, Gayane
    Ambrosini, Ilaria
    Francischello, Roberto
    Faggioni, Lorenzo
    Cioni, Dania
    Neri, Emanuele
    Journal of Medical Imaging and Interventional Radiology, 11 (1):
  • [50] Natural language processing and intelligent interfaces
    Stock, Oliviero
    Annals of Mathematics and Artificial Intelligence, 2000, 28 : 39 - 41