Large language models for cyber resilience: A comprehensive review, challenges, and future perspectives

Cited by: 0
Authors
Ding, Weiping [1 ,2 ]
Abdel-Basset, Mohamed [3 ]
Ali, Ahmed M. [3 ]
Moustafa, Nour [4 ]
Affiliations
[1] Nantong Univ, Sch Artificial Intelligence & Comp Sci, Nantong 226019, Peoples R China
[2] City Univ Macau, Fac Data Sci, Taipa 999078, Macau, Peoples R China
[3] Zagazig Univ, Fac Comp & Informat, Dept Comp Sci, Zagazig 44519, Egypt
[4] Univ New South Wales ADFA, Sch Syst & Comp, Canberra, ACT 2612, Australia
Keywords
Large Language Model; Cyber Resilience; Cyber Security; Data Privacy and Protection; Network and Endpoint Security; SECURITY; AUTOMATION; ATTACKS; DESIGN;
DOI
10.1016/j.asoc.2024.112663
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Interconnected cyber systems are used by users and organizations worldwide to perform a wide range of activities. These activities are combined with digital information and systems across organizations to achieve higher accuracy and performance. However, such combined activities face cyber threats and attacks mounted by single or multiple attackers, so protecting the sensitive data of users and organizations is a major challenge. Cyber resilience refers to the ability to prepare for, absorb, recover from, and adapt to cyberattacks and threats; it mitigates attacks and risks through the system's capacity to recover from them. Artificial intelligence enhances cyber resilience through machine learning and deep learning models. One of the most prominent families of artificial intelligence models is the large language model (LLM), which learns language from text data and extracts features to predict future or missing words in text datasets. LLMs can strengthen cyber resilience by providing a range of benefits to users and organizations. We divide cyber resilience strategies into five parts and review LLMs in each: security posture, data privacy and protection, security awareness, network security, and security automation. The fundamentals of LLMs are introduced, covering pretrained models, transformers, encoders, and decoders. We then review the challenges of LLMs in cyber resilience and the cyber defense methods that address them. We apply LLMs to three case studies, two on email spam text classification and one on cyber threat detection, achieving accuracies of 96.67 %, 90.70 %, and 89.94 %, respectively. We then compare the LLM with traditional machine learning models; the results show that the LLM attains higher accuracy, precision, recall, and F1 score. Finally, future directions for LLMs in cyber resilience are provided.
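The record does not include the authors' implementation, so the following is only a minimal illustrative sketch of the kind of pipeline the abstract describes: fine-tuning a pretrained transformer encoder for binary email spam classification and reporting accuracy, precision, recall, and F1. The model name (distilbert-base-uncased), the toy texts and labels, and all hyperparameters are assumptions for illustration, not the paper's setup.

# Minimal sketch (not the authors' implementation): fine-tune a pretrained
# transformer encoder for binary email spam classification and report the
# four metrics named in the abstract. Model name, toy data, and
# hyperparameters are illustrative assumptions.
import numpy as np
from datasets import Dataset
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Toy corpus standing in for an email spam dataset (1 = spam, 0 = ham).
texts = [
    "Congratulations, you won a free prize, click here now",
    "Please review the attached quarterly report before Friday",
    "Urgent: verify your account password to avoid suspension",
    "Lunch meeting moved to 1pm in the main conference room",
    "Limited offer: cheap loans approved instantly, reply now",
    "Here are the minutes from yesterday's project call",
    "You have been selected for a cash reward, send your details",
    "Can you share the slides for tomorrow's presentation?",
]
labels = [1, 0, 1, 0, 1, 0, 1, 0]

model_name = "distilbert-base-uncased"  # placeholder pretrained encoder
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Tokenize and split into train/test subsets.
ds = Dataset.from_dict({"text": texts, "label": labels})
ds = ds.map(lambda b: tokenizer(b["text"], truncation=True,
                                padding="max_length", max_length=64),
            batched=True)
split = ds.train_test_split(test_size=0.25, seed=42)

def compute_metrics(eval_pred):
    # Derive accuracy, precision, recall, and F1 from the eval predictions.
    y_pred = np.argmax(eval_pred.predictions, axis=-1)
    y_true = eval_pred.label_ids
    p, r, f1, _ = precision_recall_fscore_support(y_true, y_pred,
                                                  average="binary",
                                                  zero_division=0)
    return {"accuracy": accuracy_score(y_true, y_pred),
            "precision": p, "recall": r, "f1": f1}

args = TrainingArguments(output_dir="spam_llm_sketch", num_train_epochs=3,
                         per_device_train_batch_size=4, logging_steps=1)
trainer = Trainer(model=model, args=args, train_dataset=split["train"],
                  eval_dataset=split["test"], compute_metrics=compute_metrics)
trainer.train()
print(trainer.evaluate())

On a real spam corpus the same loop yields the accuracy, precision, recall, and F1 figures that the abstract compares against traditional machine learning baselines.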
Pages: 29