Dynamic Scheduling of Cybersecurity Analysts for Minimizing Risk Using Reinforcement Learning

Cited by: 38
Authors
Ganesan, Rajesh [1 ]
Jajodia, Sushil [2 ]
Shah, Ankit [2 ]
Cam, Hasan [3 ]
Affiliations
[1] George Mason Univ, Dept Syst Engn & Operat Res, Mail Stop 4A6, Fairfax, VA 22030 USA
[2] George Mason Univ, Ctr Secure Informat Syst, Mail Stop 5B5, Fairfax, VA 22030 USA
[3] Army Res Lab, 2800 Powder Mill Rd, Adelphi, MD 20783 USA
Funding
U.S. National Science Foundation
Keywords
Cybersecurity analysts; dynamic scheduling; genetic algorithm; integer programming; optimization; reinforcement learning; resource allocation; risk mitigation
DOI
10.1145/2882969
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
An important component of the cyber-defense mechanism is adequate staffing of its cybersecurity analyst workforce and the optimal assignment of analysts to sensors for investigating the dynamic alert traffic. The ever-increasing cybersecurity threats faced by today's digital systems require a strong cyber-defense mechanism that is both reactive, in responding to mitigate known risks, and proactive, in being prepared to handle unknown risks. To be proactive in handling unknown risks, the workforce must be scheduled dynamically so that the system adapts to the day-to-day stochastic demands on its workforce (both its size and its mix of expertise). These stochastic demands stem from the varying rates at which alerts are generated and deemed significant, which creates uncertainty for the scheduler attempting to assign analysts to shifts and allocate sensors to analysts. Sensor data are analyzed by automatic processing systems, which generate alerts. A portion of these alerts is categorized as significant and requires thorough examination by a cybersecurity analyst. Risk, in this article, is defined as the percentage of significant alerts that are not thoroughly analyzed by analysts. To minimize risk, the cyber-defense system must accurately estimate the future significant-alert generation rate and dynamically schedule its workforce to meet the resulting stochastic workload. The article presents a reinforcement learning-based stochastic dynamic programming optimization model that incorporates these estimates of future alert rates and responds by dynamically scheduling cybersecurity analysts to minimize risk (i.e., maximize the coverage of significant alerts by analysts) while keeping risk under a predetermined upper bound.
The article tests the dynamic optimization model and compares the results to an integer programming model that optimizes static staffing needs based on a daily-average alert generation rate, with no estimation of future alert rates (the static workforce model). Results indicate that over a finite planning horizon, the learning-based optimization model, by adding a dynamic (on-call) workforce to the static workforce, (a) balances risk between days and reduces overall risk better than the static model, (b) is scalable and capable of identifying the quantity and the right mix of analyst expertise in an organization, and (c) is able to determine the analysts' dynamic (on-call) schedules and their sensor-to-analyst allocations so that risk stays below a given upper bound. Several meta-principles derived from the optimization model are presented; they serve as guiding principles for hiring and scheduling cybersecurity analysts. Days-off scheduling was performed to determine weekly analyst work schedules that met the cybersecurity system's workforce constraints and requirements.
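To illustrate the flavor of the approach described in the abstract, the following is a minimal, self-contained sketch (not the paper's actual model) of a tabular Q-learning scheduler that decides how many on-call analysts to add each day. All quantities here (the discretized alert-rate states, per-analyst coverage, staffing cost weight) are hypothetical placeholders: the state is a discretized estimate of the significant-alert rate, the action is the number of on-call analysts called in, and the reward penalizes both risk (the fraction of significant alerts left unanalyzed, per the article's definition) and staffing cost.

```python
import random

ALERT_STATES = range(5)      # discretized significant-alert-rate levels (assumed)
ACTIONS = range(4)           # 0..3 extra on-call analysts (assumed)
STATIC_CAPACITY = 10         # alerts the static workforce covers per day (assumed)
PER_ANALYST = 5              # alerts each on-call analyst covers per day (assumed)
COST_PER_ANALYST = 0.05      # staffing-cost weight in the reward (assumed)

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
Q = {(s, a): 0.0 for s in ALERT_STATES for a in ACTIONS}

def simulate_day(state, action, rng):
    """Return (risk, next_state) for one day under the chosen staffing."""
    alerts = 10 + 5 * state + rng.randint(-3, 3)    # stochastic alert demand
    capacity = STATIC_CAPACITY + PER_ANALYST * action
    risk = max(0.0, (alerts - capacity) / alerts)   # unanalyzed fraction
    return risk, rng.choice(list(ALERT_STATES))     # demand varies day to day

rng = random.Random(0)
state = 2
for day in range(5000):
    # epsilon-greedy action selection
    if rng.random() < EPSILON:
        action = rng.choice(list(ACTIONS))
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    risk, nxt = simulate_day(state, action, rng)
    reward = -(risk + COST_PER_ANALYST * action)    # minimize risk + staffing cost
    best_next = max(Q[(nxt, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    state = nxt

# Greedy policy: on-call analysts to schedule at each alert-rate level.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in ALERT_STATES}
print(policy)
```

After training, the learned policy calls in more on-call analysts on high-alert-rate days, which is the qualitative behavior the article's far richer stochastic dynamic programming model optimizes (the paper additionally handles sensor-to-analyst allocation, expertise mix, and a hard upper bound on risk, none of which this toy captures).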
Pages: 1-21