RNA structure prediction using deep learning — A comprehensive review

Cited by: 3
Authors
Chaturvedi, Mayank [1 ]
Rashid, Mahmood A. [1 ]
Paliwal, Kuldip K. [1 ]
Affiliations
[1] Signal Processing Laboratory, School of Engineering and Built Environment, Griffith University, Brisbane, QLD 4111, Australia
Funding
Australian Research Council
Keywords
Deep learning; Feature extraction; Machine learning; Neural networks; RNA secondary structure prediction; Transformers;
DOI
10.1016/j.compbiomed.2025.109845
Abstract
In computational biology, accurate RNA structure prediction offers several benefits, including facilitating a better understanding of RNA functions and RNA-based drug design. Implementing deep learning techniques for RNA structure prediction has led to tremendous progress in this field, resulting in significant improvements in prediction accuracy. This comprehensive review aims to provide an overview of the diverse strategies employed in predicting RNA secondary structures, emphasizing deep learning methods. The article categorizes the discussion into three main dimensions: feature extraction methods, existing state-of-the-art learning model architectures, and prediction approaches. We present a comparative analysis of various techniques and models, highlighting their strengths and weaknesses. Finally, we identify gaps in the literature, discuss current challenges, and suggest future approaches to enhance model performance and applicability in RNA structure prediction tasks. This review provides deeper insight into the subject and paves the way for further progress at this dynamic intersection of life sciences and artificial intelligence. © 2025 The Authors
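As a minimal illustration of the feature-extraction dimension mentioned in the abstract, the Python sketch below (not taken from the review; the function names, shapes, and encoding scheme are illustrative assumptions) one-hot encodes an RNA sequence and expands it into the L x L pairwise feature map that many deep-learning secondary-structure predictors take as input for base-pair probability prediction.

# Illustrative sketch only: one-hot encoding of an RNA sequence and a simple
# L x L pairwise feature tensor, a common input format for 2-D deep models.
import numpy as np

BASES = "ACGU"

def one_hot(seq: str) -> np.ndarray:
    """Encode an RNA sequence as an L x 4 one-hot matrix (unknown bases stay zero)."""
    idx = {b: i for i, b in enumerate(BASES)}
    mat = np.zeros((len(seq), 4), dtype=np.float32)
    for pos, base in enumerate(seq.upper()):
        if base in idx:
            mat[pos, idx[base]] = 1.0
    return mat

def pairwise_features(seq: str) -> np.ndarray:
    """Concatenate the one-hot vectors of every (i, j) position pair into an
    L x L x 8 tensor; a 2-D model can then predict a base-pairing map from it."""
    x = one_hot(seq)                          # (L, 4)
    L = x.shape[0]
    xi = np.repeat(x[:, None, :], L, axis=1)  # (L, L, 4), row-wise copies
    xj = np.repeat(x[None, :, :], L, axis=0)  # (L, L, 4), column-wise copies
    return np.concatenate([xi, xj], axis=-1)  # (L, L, 8)

if __name__ == "__main__":
    feats = pairwise_features("GGGAAACCC")
    print(feats.shape)  # (9, 9, 8)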