Opening the Black Box: Analyzing Attention Weights and Hidden States in Pre-trained Language Models for Non-language Tasks

Cited by: 0
Authors
Ballout, Mohamad [1 ]
Krumnack, Ulf [1 ]
Heidemann, Gunther [1 ]
Kuehnberger, Kai-Uwe [1 ]
Affiliations
[1] Univ Osnabruck, Inst Cognit Sci, Osnabruck, Germany
Source
EXPLAINABLE ARTIFICIAL INTELLIGENCE, XAI 2023, PT III | 2023 / Vol. 1903
Keywords
Pre-trained language model; Transformers; XAI; Attention analysis; BERT
DOI
10.1007/978-3-031-44070-0_1
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Investigating deep learning language models has always been a significant research area due to the "black box" nature of most advanced models. With the recent advancements in pre-trained language models based on transformers and their increasing integration into daily life, addressing this issue has become more pressing. To achieve an explainable AI model, it is essential to understand the procedural steps involved and compare them with human thought processes. Thus, in this paper, we use simple, well-understood non-language tasks to explore these models' inner workings. Specifically, we apply a pre-trained language model to constrained arithmetic problems with hierarchical structure to analyze its attention weight scores and hidden states. The investigation reveals promising results: the model addresses hierarchical problems in a moderately structured manner, similar to human problem-solving strategies. Additionally, by inspecting the attention weights layer by layer, we uncover an unconventional finding that layer 10, rather than the model's final layer, is the optimal layer to unfreeze for the least parameter-intensive approach to fine-tuning the model. We support these findings with entropy analysis and token embedding similarity analysis. The attention analysis allows us to hypothesize that the model can generalize to longer sequences in the ListOps dataset, a conclusion later confirmed through testing on sequences longer than those in the training set. Lastly, by utilizing a straightforward task in which the model predicts the winner of a Tic-Tac-Toe game, we identify limitations of attention analysis, particularly its inability to capture 2D patterns.
Pages: 3-25
Page count: 23
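
As an illustrative sketch only (not the authors' code), the snippet below shows how per-layer attention weights and their entropies could be inspected in a BERT model, and how a single encoder layer could be unfrozen for parameter-light fine-tuning as the abstract describes. The Hugging Face "transformers" library, the "bert-base-uncased" checkpoint, the ListOps-style input string, and the 0-based interpretation of "layer 10" are assumptions made here for illustration.

    # Sketch: per-layer attention inspection and single-layer unfreezing (assumed setup).
    import torch
    from transformers import BertTokenizer, BertModel

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = BertModel.from_pretrained("bert-base-uncased", output_attentions=True)

    # Example hierarchical (ListOps-style) input, rendered as plain text.
    inputs = tokenizer("[MAX 2 9 [MIN 4 7] 0]", return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)

    # outputs.attentions: one tensor per layer, shape (batch, heads, seq_len, seq_len).
    for layer_idx, att in enumerate(outputs.attentions):
        mean_att = att.mean(dim=1)[0]  # average attention map over heads
        # Entropy of each token's attention distribution (cf. the paper's entropy analysis).
        entropy = -(mean_att * (mean_att + 1e-12).log()).sum(dim=-1)
        print(f"layer {layer_idx}: mean attention entropy = {entropy.mean().item():.3f}")

    # Parameter-light fine-tuning: freeze all weights, then unfreeze only one encoder
    # layer (the paper reports layer 10 as the most effective single layer to unfreeze;
    # whether that count is 0- or 1-based is an assumption here).
    for param in model.parameters():
        param.requires_grad = False
    for param in model.encoder.layer[10].parameters():
        param.requires_grad = True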