Predictability and Causality in Spanish and English Natural Language Generation

Cited by: 0
Authors
Busto-Castineira, Andrea [1 ]
Javier Gonzalez-Castano, Francisco [1 ]
Garcia-Mendez, Silvia [1 ]
de Arriba-Perez, Francisco [1 ]
Affiliations
[1] Univ Vigo, atlanTTic Res Ctr Telecommun Technol, Telecommun Engn Sch, Informat Technol Grp, Vigo 36310, Spain
Source
IEEE ACCESS | 2024 / Vol. 12
Keywords
Transformers; Context modeling; Entropy; Cause effect analysis; Predictive models; Task analysis; Measurement; Natural language processing; Language predictability; natural language generation; non-causal language modeling; Spanish language; transformer language models; LINGUISTICS; TRANSFORMER; SURPRISAL; MODEL;
DOI
10.1109/ACCESS.2024.3420710
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
In recent years, the field of Natural Language Generation (NLG) has been boosted by advances in deep learning technologies. Nonetheless, these new data-intensive methods introduce language-dependent disparities in NLG, as the main training data sets are in English. Also, most neural NLG systems use decoder-only (causal) transformer language models, which work well for English but were not designed with other languages in mind. In this work we start from the hypothesis that these models may introduce generation bias in target languages with less rigid word ordering, subject omission, or different attachment preferences for relative clauses, so that other generation strategies may be preferable for such languages. This paper first compares causal and non-causal language modeling for English and Spanish, two languages with different grammatical structures and over 1.5 billion and 0.5 billion speakers, respectively. For this purpose, we define a novel metric of average causal and non-causal context-conditioned entropy of the grammatical category distribution for both languages as an a priori information-theoretic approach. The evaluation of natural text sources (such as training data) in both languages reveals lower average non-causal conditional entropy in Spanish and lower causal conditional entropy in English. According to this experiment, Spanish is more predictable than English given a non-causal context. Then, by applying a conditional relative entropy metric to text generation experiments, we find that the best performance is achieved with causal NLG in English and with non-causal NLG in Spanish. These insights support further research on NLG in Spanish using bidirectional transformer language models.
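The context-conditioned entropy idea described in the abstract can be sketched empirically: estimate H(category | context) from (context, category) pairs drawn from part-of-speech-tagged text, where a causal context is the preceding tag and a non-causal context also includes the following tag. This is only an illustrative sketch, not the paper's actual metric; the toy tag sequences, context definitions, and function names below are assumptions.

```python
from collections import Counter
import math

def conditional_entropy(pairs):
    """Estimate H(Y|X) in bits from a list of (context, category) pairs."""
    ctx_counts = Counter(x for x, _ in pairs)   # marginal context counts
    joint = Counter(pairs)                      # joint (context, category) counts
    n = len(pairs)
    h = 0.0
    for (x, _y), c in joint.items():
        p_xy = c / n                 # empirical joint probability
        p_y_given_x = c / ctx_counts[x]  # empirical conditional probability
        h -= p_xy * math.log2(p_y_given_x)
    return h

# Toy POS-tagged sentences (hypothetical tags, not the paper's corpus).
sents = [
    ["DET", "NOUN", "VERB", "DET", "NOUN"],
    ["PRON", "VERB", "DET", "ADJ", "NOUN"],
    ["DET", "ADJ", "NOUN", "VERB", "ADV"],
]

# Causal context: the preceding tag only.
causal = [(s[i - 1], s[i]) for s in sents for i in range(1, len(s))]
# Non-causal context: both the preceding and the following tag.
noncausal = [((s[i - 1], s[i + 1]), s[i])
             for s in sents for i in range(1, len(s) - 1)]

print("causal H:", conditional_entropy(causal))
print("non-causal H:", conditional_entropy(noncausal))
```

On a real corpus, the paper's comparison would amount to computing such averages over large tagged samples of English and Spanish text and comparing the causal and non-causal values per language.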
Pages: 132521-132532
Number of pages: 12