Distinguishing word identity and sequence context in DNA language models

Cited: 0
Authors
Sanabria, Melissa [1 ]
Hirsch, Jonas [1 ]
Poetsch, Anna R. [1 ,2 ]
Affiliations
[1] Tech Univ Dresden, Biotechnol Ctr, Ctr Mol & Cellular Bioengn, Biomed Genom, Dresden, Germany
[2] German Canc Res Ctr, Natl Ctr Tumor Dis, Partner site Dresden, Dresden, Germany
Source
BMC BIOINFORMATICS | 2024, Vol. 25, No. 1
Keywords
DNA language models; Genomics; Deep Learning; Knowledge Representation; Foundation Models; AI and Biology;
DOI
10.1186/s12859-024-05869-5
Chinese Library Classification
Q5 [Biochemistry];
Discipline codes
071010 ; 081704 ;
Abstract
Transformer-based large language models (LLMs) are well suited to biological sequence data because of analogies to natural language: tokenization generates a concept of "words", through which complex relationships can be learned. When trained with masked token prediction, the models learn both token sequence identity and larger sequence context. We developed methodology to interrogate model learning, which is relevant both for the interpretability of the model and for evaluating its potential for specific tasks. We used DNABERT, a DNA language model trained on the human genome with overlapping k-mers as tokens. To gain insight into the model's learning, we interrogated how the model performs predictions, extracted token embeddings, and defined a fine-tuning benchmarking task to predict the next tokens of different sizes without overlaps. This task evaluates foundation models without interrogating specific genome biology, and it does not depend on tokenization strategy, vocabulary size, the dictionary, or the number of training parameters. Lastly, there is no leakage of information from token identity into the prediction task, which makes it particularly useful for evaluating the learning of sequence context. We discovered that the model with overlapping k-mers struggles to learn larger sequence context; instead, the learned embeddings largely represent token sequence. Still, good performance is achieved on genome-biology-inspired fine-tuning tasks. Models with overlapping tokens may be used for tasks where larger sequence context is of less relevance but the token sequence directly represents the desired learning features. This emphasizes the need to interrogate knowledge representation in biological LLMs.
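The overlapping-k-mer tokenization the abstract refers to can be sketched as below (a minimal illustration, not the authors' code; the function name `kmer_tokenize` is our own). With a stride of 1, adjacent tokens share k-1 bases, which is the source of the token-identity leakage the benchmark is designed to avoid; a stride of k yields non-overlapping tokens like those used as prediction targets.

```python
def kmer_tokenize(seq: str, k: int = 6, stride: int = 1) -> list:
    """Split a DNA sequence into k-mer tokens.

    stride=1 produces overlapping k-mers (DNABERT-style);
    stride=k produces non-overlapping k-mers, so no bases are
    shared between neighboring tokens.
    """
    return [seq[i:i + k] for i in range(0, len(seq) - k + 1, stride)]

seq = "ACGTACGT"
print(kmer_tokenize(seq, k=4, stride=1))  # ['ACGT', 'CGTA', 'GTAC', 'TACG', 'ACGT']
print(kmer_tokenize(seq, k=4, stride=4))  # ['ACGT', 'ACGT']
```

Note that in the overlapping case a masked token's first k-1 bases are visible in its left neighbor, so most of its identity can be recovered without any larger sequence context.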
Pages: 12
    JOURNAL OF LANGUAGE AND SOCIAL PSYCHOLOGY, 1996, 15 (03) : 372 - 384