Robustness Analysis of Data-Driven Self-Learning Controllers for IoT Environmental Monitoring Nodes based on Q-learning Approaches

Cited by: 1
Authors
Paterova, Tereza [1 ]
Prauzek, Michal [1 ]
Konecny, Jaromir [1 ]
Affiliation
[1] VSB Tech Univ Ostrava, Dept Cybernet & Biomed Engn, Ostrava, Czech Republic
Source
2022 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (SSCI) | 2022
Keywords
computational intelligence; embedded intelligence; Q-learning; double Q-learning; machine learning; data-driven; IoT; INTELLIGENCE
DOI
10.1109/SSCI51031.2022.10022151
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Today there is a significant need for efficient control of operating cycles to improve the services provided by Internet of Things (IoT) devices used for environmental monitoring. Machine learning approaches can be applied to design algorithms that provide adequate duty-cycle control. The present study investigates the reinforcement learning (RL) algorithm family, in particular the Q-learning (QL) and Double Q-learning (DQL) algorithms, and their suitability for devices deployed in a range of locations. We present a comprehensive analysis of the implemented RL approaches for regulating data-driven self-learning (DDSL) controllers. We tested the QL and DQL algorithms on various datasets and evaluated their performance with a statistical analysis. The results indicate that both the QL and the DQL approaches are highly dependent on the nature of the environmental parameters which the DDSL controller detects and records.
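The two update rules the abstract compares can be sketched in tabular form. This is an illustrative minimal sketch, not the paper's DDSL controller implementation: the state/action encoding, the Q-table layout (plain dicts), and the learning rate `alpha` and discount `gamma` are all assumed here for demonstration.

```python
import random

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    # Standard Q-learning: bootstrap from the max over next-state actions.
    # This max over the same table that is being learned introduces a
    # maximization bias.
    best_next = max(Q[s_next].values())
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

def double_q_update(QA, QB, s, a, r, s_next, alpha=0.1, gamma=0.9):
    # Double Q-learning (van Hasselt, 2010, ref. [7]): with probability 0.5,
    # swap the roles of the two tables; one table selects the greedy next
    # action, the other evaluates it, which reduces the maximization bias.
    if random.random() < 0.5:
        QA, QB = QB, QA
    a_star = max(QA[s_next], key=QA[s_next].get)  # action selection
    QA[s][a] += alpha * (r + gamma * QB[s_next][a_star] - QA[s][a])  # evaluation
```

In a duty-cycle controller of the kind the paper studies, the state would encode observed environmental and energy conditions and the action would select the node's next operating cycle; those mappings are device-specific and not shown here.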
Pages: 721-727
Page count: 7
Related papers
25 items in total
[1]  
[Anonymous], 2021, Czech Hydrometeorological Institute
[2]   Sparse Deep Neural Network Optimization for Embedded Intelligence [J].
Bi, Jia ;
Gunn, Steve R. .
INTERNATIONAL JOURNAL ON ARTIFICIAL INTELLIGENCE TOOLS, 2020, 29 (3-4)
[3]   Double Deep Q-Learning and Faster R-CNN-Based Autonomous Vehicle Navigation and Obstacle Avoidance in Dynamic Environment [J].
Bin Issa, Razin ;
Das, Modhumonty ;
Rahman, Md. Saferi ;
Barua, Monika ;
Rhaman, Md. Khalilur ;
Ripon, Kazi Shah Nawaz ;
Alam, Md. Golam Rabiul .
SENSORS, 2021, 21 (04) :1-24
[4]   Intelligence and Learning in O-RAN for Data-Driven NextG Cellular Networks [J].
Bonati, Leonardo ;
D'Oro, Salvatore ;
Polese, Michele ;
Basagni, Stefano ;
Melodia, Tommaso .
IEEE COMMUNICATIONS MAGAZINE, 2021, 59 (10) :21-27
[5]   Q-Learning: Theory and Applications [J].
Clifton, Jesse ;
Laber, Eric .
ANNUAL REVIEW OF STATISTICS AND ITS APPLICATION, VOL 7, 2020, 2020, 7 :279-301
[6]   Automated Design Space Exploration for Optimized Deployment of DNN on Arm Cortex-A CPUs [J].
de Prado, Miguel ;
Mundy, Andrew ;
Saeed, Rabia ;
Denna, Maurizo ;
Pazos, Nuria ;
Benini, Luca .
IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2021, 40 (11) :2293-2305
[7]  
Hasselt H. V., 2010, Advances in Neural Information Processing Systems, V23, P2613
[8]   Energy-Efficient Scheduling for Real-Time Systems Based on Deep Q-Learning Mode [J].
Zhang, Qingchen ;
Lin, Man ;
Yang, Laurence T. ;
Chen, Zhikui ;
Li, Peng .
IEEE TRANSACTIONS ON SUSTAINABLE COMPUTING, 2019, 4 (01) :132-141
[9]  
Jevtic D, 2006, LECT NOTES ARTIF INT, V4251, P284
[10]   A New Energy Prediction Algorithm for Energy-Harvesting Wireless Sensor Networks With Q-Learning [J].
Kosunalp, Selahattin .
IEEE ACCESS, 2016, 4 :5755-5763