Predicting gene expression levels from DNA sequences and post-transcriptional information with transformers

Cited by: 8
Authors
Pipoli, Vittorio [2 ]
Cappelli, Mattia [1 ]
Palladini, Alessandro [1 ]
Peluso, Carlo [1 ]
Lovino, Marta [2 ]
Ficarra, Elisa [2 ]
Affiliations
[1] Politecn Torino, Dept Control & Comp Engn, Corso Duca Abruzzi 24, I-10129 Turin, Piedmont, Italy
[2] Univ Modena & Reggio Emilia, Enzo Ferrari Engn Dept, Via P Vivarelli 10, I-41125 Modena, Emilia Romagna, Italy
Funding
EU Horizon 2020;
Keywords
Attention; DNA; Gene-expression; Prediction; Transcription-factors; Transformers;
DOI
10.1016/j.cmpb.2022.107035
CLC number
TP39 [Computer applications];
Subject classification codes
081203; 0835;
Abstract
Background and objectives: In recent years, predicting gene expression levels has become crucial due to its potential clinical applications. In this context, Xpresso and other methods based on Convolutional Neural Networks and Transformers were first proposed for this purpose. However, all of these methods embed the data with a standard one-hot encoding algorithm, resulting in extremely sparse matrices. In addition, post-transcriptional regulation processes, which are of utmost importance in gene expression, are not considered in the models. Methods: This paper presents Transformer DeepLncLoc, a novel method that predicts mRNA abundance (i.e., gene expression levels) from gene promoter sequences, treating the problem as a regression task. The model exploits a transformer-based architecture and introduces the DeepLncLoc method to perform the data embedding. Since DeepLncLoc is based on the word2vec algorithm, it avoids the sparse-matrix problem. Results: Post-transcriptional information related to mRNA stability and transcription factors is included in the model, leading to significantly improved performance compared to state-of-the-art works. Transformer DeepLncLoc reached an R² of 0.76, compared to 0.74 for Xpresso. Conclusion: The multi-head attention mechanism that characterizes the transformer architecture is well suited to modeling interactions between DNA locations, outperforming recurrent models. Finally, integrating transcription factor data into the pipeline leads to substantial gains in predictive power. (C) 2022 Elsevier B.V. All rights reserved.
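The pipeline summarized in the abstract (dense k-mer embeddings in place of one-hot encoding, followed by a transformer encoder that regresses an expression value) can be illustrated with a minimal sketch. The code below is not the authors' implementation: the k-mer length, embedding size, layer counts, toy sequences, and the use of gensim's word2vec and PyTorch's transformer encoder are illustrative assumptions, and the mRNA-stability and transcription-factor inputs described in the abstract are omitted.

```python
# Minimal sketch (not the authors' code): word2vec k-mer embeddings of a
# promoter sequence feeding a small transformer-encoder regressor.
# k=3, vector_size=64, nhead=4, num_layers=2 and the toy sequences are
# illustrative assumptions only.
import numpy as np
import torch
import torch.nn as nn
from gensim.models import Word2Vec

def kmers(seq, k=3):
    """Split a DNA sequence into overlapping k-mer 'words'."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

# Toy promoter sequences; real inputs would be fixed-length windows
# around the transcription start site.
promoters = ["ACGTACGTGGCATTGA", "TTGACGTCAGGATCGA"]
corpus = [kmers(s) for s in promoters]

# Word2vec assigns each k-mer a dense vector, avoiding sparse one-hot matrices.
w2v = Word2Vec(sentences=corpus, vector_size=64, window=5, min_count=1, seed=0)

def embed(seq):
    """Map a sequence to a (num_kmers, 64) float tensor of k-mer vectors."""
    vecs = np.stack([w2v.wv[km] for km in kmers(seq)])
    return torch.from_numpy(vecs)

class PromoterRegressor(nn.Module):
    """Transformer encoder pooled into a single expression-level estimate."""
    def __init__(self, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, x):                # x: (batch, num_kmers, d_model)
        h = self.encoder(x)              # multi-head self-attention layers
        return self.head(h.mean(dim=1))  # mean-pool k-mers, then regress

model = PromoterRegressor()
batch = torch.stack([embed(s) for s in promoters])
print(model(batch).shape)  # torch.Size([2, 1]): one predicted value per promoter
```

Mean-pooling the token representations before the linear head is just one simple way to reduce the attention output to a single regression value; the published model may aggregate the sequence, mRNA-stability, and transcription-factor information differently.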
Pages: 9