Distributed Reinforcement Learning for Age of Information Minimization in Real-Time IoT Systems

Cited by: 30
Authors
Wang, Sihua [1 ,2 ,3 ]
Chen, Mingzhe [4 ]
Yang, Zhaohui [5 ]
Yin, Changchuan [2 ,3 ]
Saad, Walid [6 ]
Cui, Shuguang [7 ,8 ,9 ,10 ]
Poor, H. Vincent [4 ]
Affiliations
[1] State Key Lab Networking & Switching Technol, Beijing, Peoples R China
[2] Beijing Lab Adv Informat Network, Beijing, Peoples R China
[3] Beijing Univ Posts & Telecommun, Beijing Key Lab Network Syst Architecture & Conve, Beijing 100876, Peoples R China
[4] Princeton Univ, Dept Elect & Comp Engn, Princeton, NJ 08544 USA
[5] UCL, Dept Elect & Elect Engn, London WC1E 6BT, England
[6] Bradley Dept Elect & Comp Engn, Virginia Tech, Wireless VT, Arlington, VA USA
[7] Chinese Univ Hong Kong, Sch Sci & Engn SSE, Shenzhen 518172, Peoples R China
[8] Chinese Univ Hong Kong, Future Network Intelligence Inst FNii, Shenzhen 518172, Peoples R China
[9] Shenzhen Res Inst Big Data, Shenzhen 518172, Peoples R China
[10] Peng Cheng Lab, Shenzhen 518066, Peoples R China
Funding
UK Engineering and Physical Sciences Research Council; National Natural Science Foundation of China; Beijing Natural Science Foundation; National Key Research and Development Program of China;
Keywords
Monitoring; Optimization; Internet of Things; Energy consumption; Nonlinear dynamical systems; Vehicle dynamics; Real-time systems; Physical process; sampling frequency; age of information; distributed reinforcement learning; TRAJECTORY DESIGN; INTERNET; UAVS;
DOI
10.1109/JSTSP.2022.3144874
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline Classification Code
0808; 0809;
Abstract
In this paper, the problem of minimizing the weighted sum of the age of information (AoI) and the total energy consumption of Internet of Things (IoT) devices is studied. In the considered model, each IoT device monitors a physical process that follows nonlinear dynamics. Since the dynamics of the physical process vary over time, each device must find an optimal sampling frequency with which to sample the real-time dynamics of the physical system and send the sampled information to a base station (BS). Due to limited wireless resources, the BS can select only a subset of devices to transmit their sampled information. Thus, the edge devices must cooperatively sample their monitored dynamics based on their local observations so that the BS can collect the sampled information immediately, thereby avoiding the additional time and energy spent on sampling and information transmission. To this end, it is necessary to jointly optimize the sampling policy of each device and the device selection scheme of the BS so as to accurately monitor the dynamics of the physical process with minimum energy. This problem is formulated as an optimization problem whose goal is to minimize the weighted sum of the AoI cost and the energy consumption. To solve this problem, we propose a novel distributed reinforcement learning (RL) approach for sampling policy optimization. The proposed algorithm enables edge devices to cooperatively find the globally optimal sampling policy using only their local observations. Given the sampling policy, the device selection scheme can then be optimized, thereby minimizing the weighted sum of the AoI and energy consumption of all devices. Simulations with real PM2.5 pollution data show that, compared to a conventional deep Q-network method and a uniform sampling policy, the proposed algorithm reduces the sum of AoI by up to 17.8% and 33.9%, respectively, and the total energy consumption by up to 13.2% and 35.1%, respectively.
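The AoI metric and weighted objective described in the abstract can be illustrated with a minimal sketch. Here the age of each device grows by one per time slot and resets to one when the BS receives that device's fresh sample; the function names, the linear age model, and the weight `gamma` are illustrative assumptions, not the paper's exact formulation.

```python
def step_aoi(ages, selected, sampled):
    """Advance AoI by one slot.

    A device's age resets to 1 only if it both sampled its process
    and was selected by the BS to transmit; otherwise its age grows.
    """
    return [1 if (i in selected and i in sampled) else a + 1
            for i, a in enumerate(ages)]

def weighted_cost(ages, energies, gamma=0.5):
    """Weighted sum of total AoI and total energy (gamma is an assumed weight)."""
    return gamma * sum(ages) + (1 - gamma) * sum(energies)

# Three devices, all starting fresh; only device 0 is selected and sampled.
ages = step_aoi([1, 1, 1], selected={0}, sampled={0, 2})  # -> [1, 2, 2]
cost = weighted_cost(ages, energies=[0.2, 0.0, 0.0])      # -> 2.6
```

In the paper's setting, the sampling sets are chosen by the devices' learned distributed RL policies and the selection set by the BS, with both optimized jointly to keep this kind of weighted cost small.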
Pages: 501-515
Page count: 15
Related Papers
50 records
  • [1] Dynamic Age Minimization With Real-Time Information Preprocessing for Edge-Assisted IoT Devices With Energy Harvesting
    Ling, Xiaoling
    Gong, Jie
    Li, Rui
    Yu, Shuai
    Ma, Qian
    Chen, Xu
    IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING, 2021, 8 (03): : 2288 - 2300
  • [2] Machine Learning in Real-Time Internet of Things (IoT) Systems: A Survey
    Bian, Jiang
    Al Arafat, Abdullah
    Xiong, Haoyi
    Li, Jing
    Li, Li
    Chen, Hongyang
    Wang, Jun
    Dou, Dejing
    Guo, Zhishan
    IEEE INTERNET OF THINGS JOURNAL, 2022, 9 (11) : 8364 - 8386
  • [3] Real-Time Asynchronous Information Processing in Distributed Power Systems Control
    Cintuglu, Mehmet H.
    Ishchenko, Dmitry
    IEEE TRANSACTIONS ON SMART GRID, 2022, 13 (01) : 773 - 782
  • [4] Visually-defined Real-Time Orchestration of IoT Systems
    Silva, Margarida
    Dias, Joao Pedro
    Restivo, Andre
    Ferreira, Hugo Sereno
    PROCEEDINGS OF THE 17TH EAI INTERNATIONAL CONFERENCE ON MOBILE AND UBIQUITOUS SYSTEMS: COMPUTING, NETWORKING AND SERVICES (MOBIQUITOUS 2020), 2021, : 225 - 235
  • [5] Distributed Real-Time Scheduling in Cloud Manufacturing by Deep Reinforcement Learning
    Zhang, Lixiang
    Yang, Chen
    Yan, Yan
    Hu, Yaoguang
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2022, 18 (12) : 8999 - 9007
  • [6] Distributed Real-Time IoT for Autonomous Vehicles
    Philip, Bigi Varghese
    Alpcan, Tansu
    Jin, Jiong
    Palaniswami, Marimuthu
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2019, 15 (02) : 1131 - 1140
  • [7] Deep Reinforcement Learning for Resource Protection and Real-Time Detection in IoT Environment
    Liang, Wei
    Huang, Weihong
    Long, Jing
    Zhang, Ke
    Li, Kuan-Ching
    Zhang, Dafang
    IEEE INTERNET OF THINGS JOURNAL, 2020, 7 (07) : 6392 - 6401
  • [8] Distributed Reinforcement Learning for Real-Time Batteries Control Using Lagrangian Decomposition
    Stai, Eleni
    Stanojev, Ognjen
    di Prata, Riccardo de Nardis
    Hug, Gabriela
    2022 INTERNATIONAL CONFERENCE ON SMART ENERGY SYSTEMS AND TECHNOLOGIES, SEST, 2022,
  • [9] Real-Time Virtual Machine Scheduling in Industry IoT Network: A Reinforcement Learning Method
    Ma, Xiaojin
    Xu, Huahu
    Gao, Honghao
    Bian, Minjie
    Hussain, Walayat
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2023, 19 (02) : 2129 - 2139
  • [10] Hybrid DVFS Scheduling for Real-Time Systems Based on Reinforcement Learning
ul Islam, Fakhruddin Muhammad Mahbub
    Lin, Man
    IEEE SYSTEMS JOURNAL, 2017, 11 (02): : 931 - 940