Reinforcement Learning-Based Streaming Process Discovery Under Concept Drift

Times Cited: 1
Authors
Cai, Rujian [1 ]
Zheng, Chao [1 ]
Wang, Jian [1 ]
Li, Duantengchuan [1 ]
Wang, Chong [1 ]
Li, Bing [1 ]
Affiliations
[1] Wuhan Univ, Sch Comp Sci, Wuhan, Peoples R China
Source
ADVANCED INFORMATION SYSTEMS ENGINEERING, CAISE 2024 | 2024, Vol. 14663
Funding
National Natural Science Foundation of China;
Keywords
Process discovery; Concept drift; Trace stream; Reinforcement learning;
DOI
10.1007/978-3-031-61057-8_4
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Streaming process discovery aims to discover a process model that may change over time, coping with the challenges of concept drift in business processes. Existing studies update process models with fixed strategies, neglecting the highly dynamic nature of trace streams. Consequently, they fail to accurately reveal the process evolution caused by concept drift. This paper proposes RLSPD (Reinforcement Learning-based Streaming Process Discovery), a dynamic process discovery approach for constructing an online process model on a trace stream. RLSPD leverages conformance-checking information to characterize trace distribution and employs a reinforcement learning policy to capture fluctuations in the trace stream. Based on the dynamic parameters provided by reinforcement learning, we extract representative trace variants within a memory window using frequency-based sampling and perform concept drift detection. Upon detecting concept drift, the process model is updated by process discovery. Experimental results on real-life event logs demonstrate that our approach effectively adapts to the high dynamics of trace streams, improving the conformance of constructed process models to upcoming traces and reducing erroneous model updates. Additionally, the results highlight the significance of the pre-trained policy in dealing with unknown environments.
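The abstract describes a closed loop: conformance-checking feedback characterizes the trace stream, a reinforcement learning policy chooses adaptive parameters, representative variants are sampled by frequency from a memory window, and the model is rediscovered when drift is detected. The Python sketch below is only an illustration of that loop under simplifying assumptions, not the authors' implementation: `choose_parameters` is a hypothetical stand-in for the learned RL policy, `discover_model` for a real process miner, and `fitness` for proper conformance checking.

```python
from collections import Counter, deque

def choose_parameters(recent_fitness):
    """Hypothetical stand-in for the RL policy: maps the recent fitness trend
    to a (window_size, sample_ratio, drift_threshold) triple."""
    if recent_fitness and sum(recent_fitness) / len(recent_fitness) < 0.7:
        return 50, 0.5, 0.15   # smaller window, react faster when conformance drops
    return 200, 0.3, 0.25

def discover_model(variants):
    """Placeholder discovery step: the 'model' is the set of directly-follows
    pairs in the sampled variants (a real system would run a process miner)."""
    return {(a, b) for v in variants for a, b in zip(v, v[1:])}

def fitness(model, trace):
    """Crude conformance proxy: fraction of the trace's directly-follows pairs
    allowed by the current model."""
    pairs = list(zip(trace, trace[1:]))
    return sum(p in model for p in pairs) / len(pairs) if pairs else 1.0

def stream_discovery(trace_stream):
    window, fitness_history, model = deque(), deque(maxlen=20), set()
    for trace in trace_stream:
        window_size, sample_ratio, threshold = choose_parameters(fitness_history)
        window.append(tuple(trace))
        while len(window) > window_size:          # memory window of recent traces
            window.popleft()
        fitness_history.append(fitness(model, trace))
        avg = sum(fitness_history) / len(fitness_history)
        # Drift check: rebuild from the most frequent variants when conformance
        # of recent traces falls below the policy-chosen threshold.
        if not model or avg < 1.0 - threshold:
            counts = Counter(window)
            k = max(1, int(len(counts) * sample_ratio))
            representative = [v for v, _ in counts.most_common(k)]
            model = discover_model(representative)
    return model

# Toy usage: a stream whose dominant variant changes halfway through.
log = [["a", "b", "c", "d"]] * 40 + [["a", "c", "b", "d"]] * 40
print(stream_discovery(log))
```

In the paper this loop is driven by a pre-trained policy rather than the fixed thresholds used here; the sketch only shows where such a policy plugs into the sampling and drift-detection steps.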
Pages: 55-70
Number of Pages: 16