Delay-Sensitive Energy-Efficient UAV Crowdsensing by Deep Reinforcement Learning

Cited by: 42
Authors
Dai, Zipeng [1 ]
Liu, Chi Harold [1 ]
Han, Rui [1 ]
Wang, Guoren [1 ]
Leung, Kin K. [2]
Tang, Jian [3 ]
Affiliations
[1] Beijing Inst Technol, Sch Comp Sci & Technol, Beijing 100081, Peoples R China
[2] Imperial Coll, Elect & Elect Engn Dept, London SW7 2BT, England
[3] Midea Grp, Beijing 100088, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Sensors; Task analysis; Crowdsensing; Data collection; Navigation; Delays; Computational modeling; UAV crowdsensing; delay-sensitive applications; energy-efficiency; deep reinforcement learning; TRAJECTORY DESIGN; TASK ASSIGNMENT; DATA-COLLECTION; NAVIGATION;
DOI
10.1109/TMC.2021.3113052
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Mobile crowdsensing (MCS) by unmanned aerial vehicles (UAVs) has become popular for delay-sensitive applications, since a navigated group of UAVs can exploit their high-precision onboard sensors and durability to collect data in harsh environments. In this paper, we aim to simultaneously maximize the amount of collected data and the geographical fairness of coverage while minimizing the energy consumption of all UAVs, and to guarantee data freshness by imposing a deadline in each timeslot. Specifically, we propose a centralized-control, distributed-execution framework based on decentralized deep reinforcement learning (DRL) for delay-sensitive and energy-efficient UAV crowdsensing, called "DRL-eFresh". It includes a synchronous computational architecture with GRU-based sequential modeling to generate multi-UAV navigation decisions. We also derive an optimal time-allocation solution for data collection that accounts for the efforts of all UAVs and avoids excessive data dropout caused by limited upload time and wireless data rate. Simulation results show that, compared with the best baseline (DPPO), DRL-eFresh improves energy efficiency by 14% and 22% on average when varying the sensing range and the number of PoIs, respectively.
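As a rough illustration of the multi-objective goal described in the abstract, the Python sketch below computes a composite energy-efficiency score from the data-collection ratio, Jain's fairness index over PoIs, and the total UAV energy consumption. This is a minimal sketch under assumed definitions; the function names, the use of Jain's index, and the exact combination are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def jain_fairness(collected_per_poi):
    """Jain's fairness index over per-PoI collected data amounts (range 0..1)."""
    x = np.asarray(collected_per_poi, dtype=float)
    total = x.sum()
    if total == 0:
        return 0.0
    return total ** 2 / (len(x) * np.sum(x ** 2))

def energy_efficiency(collected_per_poi, initial_per_poi, energy_per_uav, eps=1e-8):
    """Hypothetical composite metric: fairness times data-collection ratio,
    divided by total UAV energy consumption. All names and the weighting
    are assumptions for illustration only."""
    fairness = jain_fairness(collected_per_poi)
    ratio = np.sum(collected_per_poi) / (np.sum(initial_per_poi) + eps)
    return fairness * ratio / (np.sum(energy_per_uav) + eps)

# Example: 3 PoIs with 10 units of data each, 2 UAVs with measured energy use.
print(energy_efficiency([4.0, 5.0, 3.0], [10.0, 10.0, 10.0], [1.2, 0.9]))
```

A score of this kind could plausibly serve as (part of) the per-episode reward that the decentralized DRL policies are trained to maximize, though the paper's actual reward design may differ.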
Pages: 2038-2052
Page count: 15
Related Papers
50 records in total
  • [31] An Energy-Efficient Hardware Accelerator for Hierarchical Deep Reinforcement Learning
    Shiri, Aidin
    Prakash, Bharat
    Mazumder, Arnab Neelim
    Waytowich, Nicholas R.
    Oates, Tim
    Mohsenin, Tinoosh
    2021 IEEE 3RD INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE CIRCUITS AND SYSTEMS (AICAS), 2021,
  • [32] Energy-efficient VM scheduling based on deep reinforcement learning
    Wang, Bin
    Liu, Fagui
    Lin, Weiwei
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2021, 125 : 616 - 628
  • [33] Energy-Efficient IoT Sensor Calibration With Deep Reinforcement Learning
    Ashiquzzaman, Akm
    Lee, Hyunmin
    Um, Tai-Won
    Kim, Jinsul
    IEEE ACCESS, 2020, 8 : 97045 - 97055
  • [34] Energy-efficient reliability-aware offloading for delay-sensitive tasks in collaborative edge computing
    Li, Zengpeng
    Yu, Huiqun
    Fan, Guisheng
    Zhang, Jiayin
    Xu, Jin
    CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2024, 36 (13):
  • [35] High-Performance UAV Crowdsensing: A Deep Reinforcement Learning Approach
    Wei, Kaimin
    Huang, Kai
    Wu, Yongdong
    Li, Zhetao
    He, Hongliang
    Zhang, Jilian
    Chen, Jinpeng
    Guo, Song
    IEEE INTERNET OF THINGS JOURNAL, 2022, 9 (19) : 18487 - 18499
  • [36] Energy-Efficient Admission of Delay-Sensitive Tasks for Multi-Mobile Edge Computing Servers
    Wang, Jianrong
    Yue, Yuanzhi
    Wang, Ru
    Yu, Mei
    Yu, Jian
    Liu, Hongwei
    Ying, Xiang
    Yu, Ruiguo
    2019 IEEE 25TH INTERNATIONAL CONFERENCE ON PARALLEL AND DISTRIBUTED SYSTEMS (ICPADS), 2019, : 747 - 753
  • [37] Accelerated Structure-Aware Reinforcement Learning for Delay-Sensitive Energy Harvesting Wireless Sensors
    Sharma, Nikhilesh
    Mastronarde, Nicholas
    Chakareski, Jacob
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2020, 68 : 1409 - 1424
  • [38] Energy-Efficient UAV Communications with Interference Management: Deep Learning Framework
    Ghavimi, Fayezeh
    Jantti, Riku
    2020 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE WORKSHOPS (WCNCW), 2020,
  • [39] Deep Reinforcement Learning for Energy-Efficient Fresh Data Collection in Rechargeable UAV-assisted IoT Networks
    Yi, Mengjie
    Wang, Xijun
    Liu, Juan
    Zhang, Yan
    Hou, Ronghui
    2023 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE, WCNC, 2023,
  • [40] Energy-efficient UAV-enabled computation offloading for industrial internet of things: a deep reinforcement learning approach
    Shi, Shuo
    Wang, Meng
    Gu, Shushi
    Zheng, Zhong
    WIRELESS NETWORKS, 2024, 30 (05) : 3921 - 3934