Explainable natural language processing with matrix product states

Cited: 3
Authors
Tangpanitanon, Jirawat [1 ,2 ]
Mangkang, Chanatip [3 ]
Bhadola, Pradeep [4 ]
Minato, Yuichiro [5 ]
Angelakis, Dimitris G. [6 ,7 ]
Chotibut, Thiparat [3 ]
Affiliations
[1] Quantum Technol Fdn Thailand, Bangkok, Thailand
[2] Minist Higher Educ Sci Res & Innovat, Thailand Ctr Excellence Phys, Bangkok, Thailand
[3] Chulalongkorn Univ, Fac Sci, Dept Phys, Chula Intelligent & Complex Syst, Bangkok, Thailand
[4] Mahidol Univ, Ctr Theoret Phys & Nat Philosophy, Nakhonsawan Studiorum Adv Studies, Nakhonsawan Campus, Khao Thong, Thailand
[5] Blueqat Inc, Tokyo, Japan
[6] Tech Univ Crete, Sch Elect & Comp Engn, Khania, Greece
[7] Natl Univ Singapore, Ctr Quantum Technol, Singapore, Singapore
Source
NEW JOURNAL OF PHYSICS | 2022, Vol. 24, No. 05
Keywords
matrix product state; entanglement entropy; entanglement spectrum; quantum machine learning; natural language processing; recurrent neural networks; TENSOR NETWORKS; QUANTUM;
DOI
10.1088/1367-2630/ac6232
CLC number: O4 [Physics]
Discipline code: 0702
Abstract
Despite empirical successes of recurrent neural networks (RNNs) in natural language processing (NLP), theoretical understanding of RNNs is still limited due to intrinsically complex non-linear computations. We systematically analyze RNNs' behaviors in a ubiquitous NLP task, the sentiment analysis of movie reviews, via the mapping between a class of RNNs called recurrent arithmetic circuits (RACs) and a matrix product state. Using the von Neumann entanglement entropy (EE) as a proxy for information propagation, we show that single-layer RACs possess a maximum information propagation capacity, reflected by the saturation of the EE. Enlarging the bond dimension beyond the EE saturation threshold does not increase model prediction accuracies, so a minimal model that best estimates the data statistics can be inferred. Although the saturated EE is smaller than the maximum EE allowed by the area law, our minimal model still achieves ~99% training accuracies in realistic sentiment analysis data sets. Thus, low EE is not a warrant against the adoption of single-layer RACs for NLP. Contrary to a common belief that long-range information propagation is the main source of RNNs' successes, we show that single-layer RACs harness high expressiveness from the subtle interplay between the information propagation and the word vector embeddings. Our work sheds light on the phenomenology of learning in RACs, and more generally on the explainability of RNNs for NLP, using tools from many-body quantum physics.
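As an illustration of the diagnostic quantity the abstract relies on (not the authors' code): the von Neumann entanglement entropy of a pure state across a bipartition can be computed by reshaping the state vector into a matrix, taking its singular value decomposition, and evaluating the Shannon entropy of the squared singular values. The sketch below assumes a generic state vector; the function name `entanglement_entropy` is illustrative.

```python
import numpy as np

def entanglement_entropy(psi, dim_left):
    """Von Neumann entanglement entropy (in bits) of a pure state
    across the bipartition left (dim_left) | right (the rest)."""
    # Reshape the state vector into a matrix whose rows index the
    # left subsystem and whose columns index the right subsystem.
    mat = np.asarray(psi).reshape(dim_left, -1)
    # Singular values give the Schmidt coefficients of the bipartition.
    s = np.linalg.svd(mat, compute_uv=False)
    p = s ** 2                # Schmidt probabilities
    p = p[p > 1e-12]          # drop numerical zeros before taking the log
    return float(-np.sum(p * np.log2(p)))

# A maximally entangled two-qubit (Bell) state has EE = 1 bit,
# while a product state |00> has EE = 0.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
product = np.array([1.0, 0.0, 0.0, 0.0])
print(entanglement_entropy(bell, 2))     # → 1.0
print(entanglement_entropy(product, 2))  # → 0.0
```

In the paper's setting the analogous quantity is evaluated on the matrix product state obtained from the trained RAC, where the bond dimension caps the attainable EE; the saturation of this entropy as the bond dimension grows is what identifies the minimal model.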
Pages: 16