Unveiling bitcoin network attack using deep reinforcement learning with Boltzmann exploration

Cited by: 0
Authors
Shetty, Monali [1 ]
Tamane, Sharvari [2 ]
Affiliations
[1] MGM Univ, Jawaharlal Nehru Engn Coll, CSE Dept, Aurangabad 431001, Maharashtra, India
[2] MGM Univ, Dept Informat & Commun Technol, Aurangabad 431001, Maharashtra, India
Keywords
Blockchain; Bitcoin; Ransomware; Cryptocurrency; Boltzmann exploration; Attack; Reinforcement learning
DOI
10.1007/s12083-024-01829-1
CLC number
TP [Automation Technology, Computer Technology]
Discipline code
0812
Abstract
This study tackles the critical issue of identifying ransomware transactions within the Bitcoin network. These transactions threaten the stability and security of the cryptocurrency ecosystem. Traditional machine learning methods struggle to adapt to the evolving tactics employed by ransomware attackers: they rely on predefined features and metrics, which limits their ability to replicate the adaptability of human analysts. To address this challenge and the dynamic nature of fraudulent Bitcoin transactions, we propose a novel approach that combines a Deep Q-Network (DQN) with Boltzmann exploration and can autonomously learn and identify evolving attack patterns. The proposed Deep Reinforcement Learning (DRL) approach offers greater flexibility by mimicking how security experts learn and adjust their strategies. DQN is a form of reinforcement learning in which the agent learns through trial-and-error interactions with the environment, while Boltzmann exploration is a technique for balancing exploration (trying new actions) and exploitation (taking the actions with the highest expected reward) during learning. The proposed DQN model with Boltzmann exploration was evaluated in a simulated environment. This strategy emphasizes the importance of dynamic decision-making for achieving convergence and stability during the learning process, ultimately leading to optimized results. The model achieved a promising validation accuracy of 91% and a strong F1 score, demonstrating its ability to generalize effectively to unseen data, which is crucial for real-world applications where entirely new attack scenarios are likely to be encountered. Compared with alternative exploration techniques such as Epsilon-Greedy and Random Exploration, Boltzmann exploration led to superior performance on unseen data, suggesting that the Boltzmann temperature parameter effectively guided the agent's exploration-exploitation trade-off and allowed it to discover valuable patterns applicable to new datasets. In conclusion, our findings demonstrate the potential of DQN with Boltzmann exploration for unsupervised ransomware transaction detection in the Bitcoin network. This approach offers a promising solution for improving the security and resilience of the Bitcoin network against evolving ransomware threats.
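For readers unfamiliar with the exploration strategies compared in the abstract, the sketch below illustrates Boltzmann (softmax) action selection over Q-values alongside an Epsilon-Greedy baseline. It is a minimal, generic Python illustration, not the authors' implementation; the function names, the two-action example, and the temperature and epsilon values are assumptions chosen for demonstration.

    import numpy as np

    def boltzmann_action(q_values, temperature=1.0):
        """Sample an action from a softmax (Boltzmann) distribution over Q-values.
        A high temperature gives near-uniform exploration; a low temperature
        approaches greedy exploitation."""
        q = np.asarray(q_values, dtype=np.float64)
        logits = (q - q.max()) / temperature  # subtract the max for numerical stability
        probs = np.exp(logits)
        probs /= probs.sum()
        return int(np.random.choice(len(q), p=probs))

    def epsilon_greedy_action(q_values, epsilon=0.1):
        """Baseline: random action with probability epsilon, otherwise greedy."""
        if np.random.rand() < epsilon:
            return int(np.random.randint(len(q_values)))
        return int(np.argmax(q_values))

    # Hypothetical Q-values for a two-action decision (e.g., flag a transaction
    # as ransomware-related or benign); the numbers are illustrative only.
    q_estimates = [0.4, 1.2]
    action = boltzmann_action(q_estimates, temperature=0.5)

Annealing the temperature over training (large early, small late) is one common way to realize the dynamic exploration-exploitation trade-off that the abstract attributes to the temperature parameter.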
Pages: 20-20
Page count: 1
Related papers (50 in total)
  • [21] A Mechanism for Bitcoin Price Forecasting using Deep Learning
    Ateeq, Karamath
    Al Zarooni, Ahmed Abdelrahim
    Rehman, Abdur
    Khan, Muhammad Adnan
    INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS, 2023, 14 (08) : 441 - 448
  • [22] Optimal Planning of Emergency Communication Network Using Deep Reinforcement Learning
    Yin, Changsheng
    Yang, Ruopeng
    Zhu, Wei
    Zou, Xiaofei
    Zhang, Junda
    IEICE TRANSACTIONS ON COMMUNICATIONS, 2021, E104B (01) : 20 - 26
  • [23] In-Network ACL Rules Placement using Deep Reinforcement Learning
    Zahwa, Wafik
    Lahmadi, Abdelkader
    Rusinowitch, Michael
    Ayadi, Mondher
    2024 IEEE INTERNATIONAL MEDITERRANEAN CONFERENCE ON COMMUNICATIONS AND NETWORKING, MEDITCOM 2024, 2024, : 341 - 346
  • [24] Collaborative Video Caching in the Edge Network using Deep Reinforcement Learning
    Lekharu, Anirban
    Gupta, Pranav
    Sur, Arijit
    Patra, Moumita
    ACM TRANSACTIONS ON INTERNET OF THINGS, 2024, 5 (03):
  • [25] In-network Reinforcement Learning for Attack Mitigation using Programmable Data Plane in SDN
    Ganesan, Aparna
    Sarac, Kamil
    2024 33RD INTERNATIONAL CONFERENCE ON COMPUTER COMMUNICATIONS AND NETWORKS, ICCCN 2024, 2024,
  • [26] Optimal Control of Active Distribution Network using Deep Reinforcement Learning
    Tahir, Yameena
    Khan, Muhammad Faisal Nadeem
    Sajjad, Intisar Ali
    Martirano, Luigi
    2022 IEEE INTERNATIONAL CONFERENCE ON ENVIRONMENT AND ELECTRICAL ENGINEERING AND 2022 IEEE INDUSTRIAL AND COMMERCIAL POWER SYSTEMS EUROPE (EEEIC / I&CPS EUROPE), 2022,
  • [27] Cooperative behavior of a heterogeneous robot team for planetary exploration using deep reinforcement learning
    Barth, Andrew
    Ma, Ou
    ACTA ASTRONAUTICA, 2024, 214 : 689 - 700
  • [28] Deep Learning-Based Community Detection Approach on Bitcoin Network
    Essaid, Meryam
    Ju, Hongtaek
    SYSTEMS, 2022, 10 (06):
  • [29] NAEM: Noisy Attention Exploration Module for Deep Reinforcement Learning
    Cai, Zhenwen
    Lee, Feifei
    Hu, Chunyan
    Kotani, Koji
    Chen, Qiu
    IEEE ACCESS, 2021, 9 : 154600 - 154611
  • [30] Environment Exploration for Mapless Navigation based on Deep Reinforcement Learning
    Toan, Nguyen Duc
    Gon-Woo, Kim
    2021 21ST INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION AND SYSTEMS (ICCAS 2021), 2021, : 17 - 20