Energy-Efficient Cooperative Secure Communications in mmWave Vehicular Networks Using Deep Recurrent Reinforcement Learning

Cited by: 6
Authors
Ju, Ying [1 ]
Gao, Zipeng [1 ]
Wang, Haoyu [2 ]
Liu, Lei [1 ]
Pei, Qingqi [1 ]
Dong, Mianxiong [3 ]
Mumtaz, Shahid [4 ,5 ]
Leung, Victor C. M. [6 ,7 ,8 ]
Affiliations
[1] Xidian Univ, Sch Telecommun Engn, State Key Lab Integrated Serv Networks, Xian 710071, Peoples R China
[2] Univ Calif Irvine, Ctr Pervas Commun & Comp, Irvine, CA 92697 USA
[3] Muroran Inst Technol, Dept Informat & Elect Engn, Muroran 0508585, Japan
[4] Silesian Tech Univ, Dept Appl Informat, PL-44100 Gliwice, Poland
[5] Nottingham Trent Univ, Dept Comp Sci, Nottingham NG1 4FQ, England
[6] Shenzhen MSU BIT Univ, Artificial Intelligence Res Inst, Shenzhen 518172, Peoples R China
[7] Shenzhen Univ, Coll Comp Sci & Software Engn, Shenzhen 518060, Peoples R China
[8] Univ British Columbia, Dept Elect & Comp Engn, Vancouver, BC V6T 1Z4, Canada
Funding
National Natural Science Foundation of China
Keywords
MmWave vehicular communication; energy consumption; cooperative secure transmission; physical layer security; deep recurrent reinforcement learning; PHYSICAL-LAYER SECURITY; MIMO; TRANSMISSIONS;
DOI
10.1109/TITS.2024.3394130
Chinese Library Classification (CLC)
TU [Building Science]
Discipline Classification Code
0813
Abstract
Millimeter wave (mmWave) communication, with its abundant spectrum resources, can realize high-rate communications in vehicular networks. However, the mobility of vehicles and the blocking effect of mmWave propagation bring new challenges to communication security. Cooperative communication is envisioned as a promising physical layer security (PLS) approach to enhance secrecy performance, but it induces extra energy consumption at the vehicles. This paper proposes a deep recurrent reinforcement learning (DRRL)-based energy-efficient cooperative secure transmission scheme for mmWave vehicular networks, where eavesdropping vehicles attempt to intercept the multi-user downlink communications. We jointly design the mmWave beam allocation, the cooperative node selection, and the transmit power of vehicles. Specifically, the mmWave base station selects idle vehicles as relays to overcome the severe blocking attenuation of legitimate transmissions and controls the transmit power to reduce energy consumption. Moreover, to ensure secure transmission, a cooperative vehicle is selected to transmit jamming signals toward the eavesdropping vehicles without disturbing the legitimate users. We conduct a comprehensive interference analysis for both direct transmission and relay-aided transmission, and derive theoretical expressions for the secrecy capacity. We then design the Dueling Double Deep Recurrent Q-Network (D3RQN) learning algorithm to maximize the total secrecy capacity subject to an energy consumption constraint. We introduce an energy consumption punishment mechanism to prevent relay vehicles from consuming too much power when forwarding signals. We demonstrate that the proposed scheme can rapidly adapt to highly dynamic vehicular networks and effectively improve secrecy performance while reducing the energy consumption of vehicles.
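The D3RQN algorithm named in the abstract combines two standard value-based RL refinements: a dueling head that splits the Q-function into state-value and advantage streams, and a double-Q target that decouples action selection from action evaluation. The sketch below illustrates just these two mechanisms with plain numpy; the recurrent encoder (e.g., an LSTM over channel observations) and all network weights are abstracted away, and the action count and variable names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative sketch of the two value-function ideas behind a Dueling
# Double Deep Recurrent Q-Network (D3RQN). The recurrent feature
# extractor is abstracted away; random vectors stand in for its outputs.

rng = np.random.default_rng(0)
N_ACTIONS = 4  # e.g., joint beam/relay/power choices (assumed)

def dueling_q(value, advantages):
    """Dueling aggregation: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a).

    Subtracting the mean advantage makes the V/A decomposition
    identifiable, so the two streams cannot drift arbitrarily.
    """
    return value + advantages - advantages.mean(axis=-1, keepdims=True)

def double_q_target(reward, gamma, q_online_next, q_target_next, done):
    """Double DQN target: the online net picks the next action, the
    target net evaluates it, which reduces Q-value overestimation."""
    best_action = np.argmax(q_online_next, axis=-1)
    bootstrap = q_target_next[np.arange(len(best_action)), best_action]
    return reward + gamma * (1.0 - done) * bootstrap

# Toy batch of 2 transitions.
value = rng.normal(size=(2, 1))            # V(s) stream
adv = rng.normal(size=(2, N_ACTIONS))      # A(s, a) stream
q = dueling_q(value, adv)                  # shape (2, N_ACTIONS)

q_online_next = rng.normal(size=(2, N_ACTIONS))
q_target_next = rng.normal(size=(2, N_ACTIONS))
y = double_q_target(np.array([1.0, 0.5]), 0.99,
                    q_online_next, q_target_next,
                    done=np.array([0.0, 1.0]))  # second transition terminal
```

In the paper's setting, the reward would additionally carry the energy-consumption punishment term described in the abstract; here the rewards are arbitrary placeholders.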
Pages: 14460-14475
Page count: 16