Large Language Models in Cyberattacks

Cited by: 0
Authors
S. V. Lebed [1]
D. E. Namiot [2]
E. V. Zubareva [1]
P. V. Khenkin [1]
A. A. Vorobeva [1]
D. A. Svichkar [1]
Affiliations
[1] Sber, Moscow
[2] Lomonosov Moscow State University, Moscow
Keywords
ChatGPT; cybersecurity; LLM; NLP; offensive artificial intelligence
DOI
10.1134/S1064562425700012
Abstract
The article provides an overview of the use of large language models (LLMs) in cyberattacks. Artificial intelligence models (machine learning and deep learning) are applied across many fields, and cybersecurity is no exception. One aspect of this use is offensive artificial intelligence, specifically as it relates to LLMs. Generative models, including LLMs, have been employed in cybersecurity for some time, primarily to generate adversarial attacks on machine learning models. The analysis focuses on how LLMs such as ChatGPT can be exploited by malicious actors to automate the creation of phishing emails and malware, significantly simplifying and accelerating the execution of cyberattacks. Key aspects of LLM misuse are examined, including text generation for social engineering attacks and the creation of malicious code. The article is aimed at cybersecurity professionals, researchers, and LLM developers, offering insight into the risks associated with the malicious use of these technologies and recommendations for preventing their exploitation as cyber weapons. The work emphasizes the importance of recognizing potential threats and the need for active countermeasures against automated cyberattacks. © Pleiades Publishing, Ltd. 2024.
Pages: S510–S520
Number of pages: 10