Delay-Sensitive Energy-Efficient UAV Crowdsensing by Deep Reinforcement Learning

Cited by: 42
Authors
Dai, Zipeng [1 ]
Liu, Chi Harold [1 ]
Han, Rui [1 ]
Wang, Guoren [1 ]
Leung, Kin K. K. [2 ]
Tang, Jian [3 ]
Affiliations
[1] Beijing Inst Technol, Sch Comp Sci & Technol, Beijing 100081, Peoples R China
[2] Imperial Coll, Elect & Elect Engn Dept, London SW7 2BT, England
[3] Midea Grp, Beijing 100088, Peoples R China
Funding
National Natural Science Foundation of China;
关键词
Sensors; Task analysis; Crowdsensing; Data collection; Navigation; Delays; Computational modeling; UAV crowdsensing; delay-sensitive applications; energy-efficiency; deep reinforcement learning; TRAJECTORY DESIGN; TASK ASSIGNMENT; DATA-COLLECTION; NAVIGATION;
DOI
10.1109/TMC.2021.3113052
CLC Number
TP [Automation and Computer Technology];
Discipline Code
0812;
Abstract
Mobile crowdsensing (MCS) with unmanned aerial vehicles (UAVs) is becoming popular for servicing delay-sensitive applications, since a group of UAVs can be navigated to exploit their high-precision onboard sensors and durability for data collection in harsh environments. In this paper, we aim to simultaneously maximize the amount of collected data and geographical fairness while minimizing the energy consumption of all UAVs, and to guarantee data freshness by setting a deadline in each timeslot. Specifically, we propose a centralized-control, distributed-execution framework based on decentralized deep reinforcement learning (DRL) for delay-sensitive and energy-efficient UAV crowdsensing, called "DRL-eFresh". It includes a synchronous computational architecture with GRU sequential modeling to generate multi-UAV navigation decisions. We also derive an optimal time-allocation solution for data collection that accounts for the efforts of all UAVs while avoiding excessive data dropout due to limited data-upload time and wireless data rates. Simulation results show that DRL-eFresh significantly improves energy efficiency compared to the best baseline, DPPO, by 14% and 22% on average when varying the sensing range and the number of PoIs, respectively.
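The abstract names three coupled objectives (collected data amount, geographical fairness, and UAV energy consumption) but does not state how they are combined into the reported energy-efficiency score. Below is a minimal Python sketch of one common way such objectives are folded together, assuming Jain's index for geographical fairness and a fairness x data-collection-ratio / energy form; the function names and this particular formula are illustrative assumptions, not taken from the paper.

```python
import numpy as np


def jains_fairness(collected_per_poi: np.ndarray) -> float:
    """Jain's fairness index over per-PoI collected data (1.0 = perfectly even)."""
    total = collected_per_poi.sum()
    if total == 0.0:
        return 0.0
    n = collected_per_poi.size
    return float(total ** 2 / (n * np.square(collected_per_poi).sum()))


def energy_efficiency(collected_per_poi: np.ndarray,
                      required_per_poi: np.ndarray,
                      energy_consumed: float) -> float:
    """Hypothetical score: fairness * data-collection ratio / total UAV energy.

    This combination is an assumption for illustration; the paper only lists
    the individual objectives in its abstract.
    """
    fairness = jains_fairness(collected_per_poi)
    data_ratio = collected_per_poi.sum() / required_per_poi.sum()
    return fairness * data_ratio / max(energy_consumed, 1e-9)


# Toy episode with 5 PoIs: fraction of each PoI's data collected, and energy used.
collected = np.array([0.8, 0.6, 0.9, 0.7, 0.5])
required = np.ones(5)
print(energy_efficiency(collected, required, energy_consumed=3.2))
```

Under this assumed metric, a policy scores higher by collecting more data, spreading collection evenly across PoIs, and spending less energy, which matches the trade-off the abstract describes.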
Pages: 2038 - 2052
Page count: 15
Related Papers
50 records in total
  • [41] Multiagent Deep Reinforcement Learning for Cost- and Delay-Sensitive Virtual Network Function Placement and Routing
    Wang, Shaoyang
    Yuen, Chau
    Ni, Wei
    Guan, Yong Liang
    Lv, Tiejun
    IEEE TRANSACTIONS ON COMMUNICATIONS, 2022, 70 (08) : 5208 - 5224
  • [42] Delay-Sensitive, Reliable, Energy-Efficient, Adaptive and Mobility-Aware (DREAM) Routing Protocol for WSNs
    Dutt, Suniti
    Agrawal, Sunil
    Vig, Renu
    WIRELESS PERSONAL COMMUNICATIONS, 2021, 120 : 1675 - 1703
  • [43] Multi-Armed Bandit for Energy-Efficient and Delay-Sensitive Edge Computing in Dynamic Networks With Uncertainty
    Ghoorchian, Saeed
    Maghsudi, Setareh
    IEEE TRANSACTIONS ON COGNITIVE COMMUNICATIONS AND NETWORKING, 2021, 7 (01) : 279 - 293
  • [44] Slotted Contention-Based Energy-Efficient MAC Protocols in Delay-Sensitive Wireless Sensor Networks
    Doudou, Messaoud
    Djenouri, Djamel
    Badache, Nadjib
    Bouabdallah, Abdelmadjid
    2012 IEEE SYMPOSIUM ON COMPUTERS AND COMMUNICATIONS (ISCC), 2012, : 419 - 422
  • [45] Novel Integrated Framework of Unmanned Aerial Vehicle and Road Traffic for Energy-Efficient Delay-Sensitive Delivery
    Liu, Bin
    Ni, Wei
    Liu, Ren Ping
    Zhu, Qi
    Guo, Y. Jay
    Zhu, Hongbo
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23 (08) : 10692 - 10707
  • [46] Energy-Efficient Provisioning for Service Function Chains to Support Delay-Sensitive Applications in Network Function Virtualization
    Sun, Gang
    Zhou, Run
    Sun, Jian
    Yu, Hongfang
    Vasilakos, Athanasios V.
    IEEE INTERNET OF THINGS JOURNAL, 2020, 7 (07) : 6116 - 6131
  • [47] Energy-Efficient Ultra-Dense Network With Deep Reinforcement Learning
    Ju, Hyungyu
    Kim, Seungnyun
    Kim, Youngjoon
    Shim, Byonghyo
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2022, 21 (08) : 6539 - 6552
  • [48] Energy-efficient heating control for smart buildings with deep reinforcement learning
    Gupta, Anchal
    Badr, Youakim
    Negahban, Ashkan
    Qiu, Robin G.
    JOURNAL OF BUILDING ENGINEERING, 2021, 34
  • [49] Deep Reinforcement Learning for Energy-Efficient Power Control in Heterogeneous Networks
    Peng, Jianhao
    Zheng, Jiabao
    Zhang, Lin
    Xiao, Ming
    IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC 2022), 2022, : 141 - 146
  • [50] Energy-Efficient Parking Analytics System using Deep Reinforcement Learning
    Rezaei, Yoones
    Lee, Stephen
    Mosse, Daniel
    BUILDSYS'21: PROCEEDINGS OF THE 2021 ACM INTERNATIONAL CONFERENCE ON SYSTEMS FOR ENERGY-EFFICIENT BUILT ENVIRONMENTS, 2021, : 81 - 90