Human-centered Assessment of Automated Tools for Improved Cyber Situational Awareness

Cited by: 0
Authors
Strickson, Benjamin [1 ]
Worsley, Cameron [1 ]
Bertram, Stewart [1 ]
Affiliations
[1] Elemendar, London, England
Source
2023 15TH INTERNATIONAL CONFERENCE ON CYBER CONFLICT (CYCON), 2023
Keywords
human-centered AI; cyber situational awareness; autonomous capabilities
DOI
10.23919/CYCON58705.2023.10181567
CLC Number
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
Attempts to deploy autonomous capabilities, including artificial intelligence (AI), within cybersecurity workflows have been met with an implementation challenge. Often the impediment is the difficulty software engineers face in assessing and quantifying the benefits of machine learning (ML) models for cyber analysts. We present a case study demonstrating the successful testing and improvement of an ML tool through human-centered assessments. For the benefit of researchers in this field, we detail our own wargaming environment, which was tested with members of a government intelligence community. The participants were presented with two cybersecurity tasks: report annotation and a situational awareness assessment. Both tasks were statistically assessed for the difference between task completion with and without access to automation tools. Our first experiment, report annotation, showed a task improvement of +14.0 percentage points in recall and +9.19 percentage points in precision, with an overall significant positive difference in F1 scores for the ML subjects (p < 0.01). Our second experiment, cyber situational awareness (CSA), showed a 66.7% improvement in user scores and a significant positive difference for the ML subjects (p < 0.01). The conclusions of our work focus on the need to rebalance the attention of software engineers away from quantitative metrics and toward qualitative analyst feedback derived from realistic wargame testing frameworks. We believe that sharing our wargame scenario here will allow other organizations either to adopt the same testing methodology or to share their own CSA testing framework. Ultimately, we hope for a more open dialogue between researchers working across the cyber industry and government intelligence agencies.
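The precision, recall, and F1 comparison summarized above can be illustrated with a short sketch. The code below is a hypothetical reconstruction rather than the authors' implementation: it scores annotated entities against a gold standard and applies a Mann-Whitney U test to per-participant F1 scores for the with-tool and without-tool groups. The abstract does not state which significance test was used, and all names and data values here are invented for illustration.

# Hypothetical sketch: compare per-participant annotation F1 scores for
# analysts working with and without an ML assistance tool, in the spirit
# of the report-annotation experiment described above. Invented data.
from scipy.stats import mannwhitneyu

def precision_recall_f1(true_entities, predicted_entities):
    # Exact-match scoring of annotated entities, e.g. (span, label) pairs.
    true_set, pred_set = set(true_entities), set(predicted_entities)
    tp = len(true_set & pred_set)
    precision = tp / len(pred_set) if pred_set else 0.0
    recall = tp / len(true_set) if true_set else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Per-participant F1 scores (illustrative numbers only).
f1_with_tool = [0.81, 0.78, 0.85, 0.74, 0.80]
f1_without_tool = [0.65, 0.70, 0.62, 0.68, 0.66]

# One-sided test: does the tool-assisted group score significantly higher?
stat, p_value = mannwhitneyu(f1_with_tool, f1_without_tool,
                             alternative="greater")
print(f"U = {stat:.1f}, p = {p_value:.4f}")

A paired design (the same participant with and without the tool) would instead call for a paired test such as the Wilcoxon signed-rank test; the choice depends on how the wargame sessions were organized.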
Pages: 273-286
Page count: 14