Predictability and Causality in Spanish and English Natural Language Generation

Cited: 0
Authors
Busto-Castineira, Andrea [1]
Gonzalez-Castano, Francisco Javier [1]
Garcia-Mendez, Silvia [1]
de Arriba-Perez, Francisco [1]
Affiliations
[1] Univ Vigo, atlanTTic Res Ctr Telecommun Technol, Telecommun Engn Sch, Informat Technol Grp, Vigo 36310, Spain
Source
IEEE ACCESS | 2024, Vol. 12
Keywords
Transformers; Context modeling; Entropy; Cause effect analysis; Predictive models; Task analysis; Measurement; Natural language processing; Language predictability; natural language generation; non-causal language modeling; Spanish language; transformer language models; LINGUISTICS; TRANSFORMER; SURPRISAL; MODEL;
DOI
10.1109/ACCESS.2024.3420710
Chinese Library Classification (CLC) Number
TP [Automation technology, computer technology];
Discipline Classification Code
0812;
Abstract
In recent years, the field of Natural Language Generation (NLG) has been boosted by advances in deep learning technologies. Nonetheless, these data-intensive methods introduce language-dependent disparities in NLG, as the main training data sets are in English. Moreover, most neural NLG systems use decoder-only (causal) transformer language models, which work well for English but were not designed with other languages in mind. In this work we start from the hypothesis that such models may introduce generation bias in target languages with less rigid word ordering, subject omission, or different attachment preferences for relative clauses, so that other language generation strategies may be more desirable for those languages. This paper first compares causal and non-causal language modeling for English and Spanish, two languages with different grammatical structures and over 1.5 billion and 0.5 billion speakers, respectively. For this purpose, we define a novel metric of the average causal and non-causal context-conditioned entropy of the grammatical category distribution for both languages, as an information-theoretic a priori approach. The evaluation of natural text sources (such as training data) in both languages reveals a lower average non-causal conditional entropy in Spanish and a lower causal conditional entropy in English. According to this experiment, Spanish is more predictable than English given a non-causal context. Then, by applying a conditional relative entropy metric to text generation experiments, we find that the best performance is achieved with causal NLG in English and with non-causal NLG in Spanish. These insights support further research on NLG in Spanish using bidirectional transformer language models.
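The abstract characterizes the metric only informally: the average entropy of the grammatical-category (part-of-speech) distribution at each position, conditioned on either a causal context (preceding material only) or a non-causal context (material on both sides). A minimal sketch of how such a quantity could be estimated from POS-tagged corpora follows; the function name avg_conditional_entropy, the fixed context width k, and the maximum-likelihood estimates from raw counts are illustrative assumptions, not the paper's exact formulation.

```python
from collections import Counter, defaultdict
from math import log2

def avg_conditional_entropy(tag_sequences, causal=True, k=2):
    # Estimate H(X_t | C_t), the average entropy of the POS tag at
    # position t given its context C_t: the k preceding tags (causal)
    # or k/2 preceding plus k/2 following tags (non-causal).
    # Illustrative sketch only; the paper's metric may be defined differently.
    context_counts = defaultdict(Counter)
    for tags in tag_sequences:
        for t in range(len(tags)):
            if causal:
                ctx = tuple(tags[max(0, t - k):t])
            else:
                ctx = (tuple(tags[max(0, t - k // 2):t])
                       + tuple(tags[t + 1:t + 1 + k // 2]))
            context_counts[ctx][tags[t]] += 1
    # H(X|C) = sum_c p(c) * H(X|c), with maximum-likelihood probabilities.
    total = sum(sum(c.values()) for c in context_counts.values())
    h = 0.0
    for counter in context_counts.values():
        n = sum(counter.values())
        h -= (n / total) * sum((c / n) * log2(c / n) for c in counter.values())
    return h  # bits per tag

# Toy usage on hypothetical tag sequences: a lower value means the
# grammatical category is more predictable given that kind of context.
seqs = [["DET", "NOUN", "VERB", "DET", "ADJ", "NOUN"],
        ["PRON", "VERB", "DET", "NOUN", "ADP", "NOUN"]]
print(avg_conditional_entropy(seqs, causal=True))
print(avg_conditional_entropy(seqs, causal=False))
```

Under the paper's reported result, running such an estimator over comparable English and Spanish corpora would yield a lower non-causal value for Spanish and a lower causal value for English.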
Pages: 132521-132532
Page count: 12