Using Deep Reinforcement Learning to Improve Sensor Selection in the Internet of Things

Cited by: 1
Authors
Rashtian, Hootan [1]
Gopalakrishnan, Sathish [1,2]
Affiliations
[1] Univ British Columbia, Dept Elect & Comp Engn, Vancouver, BC V6T 1Z4, Canada
[2] Univ British Columbia, Peter Wall Inst Adv Studies, Vancouver, BC V6T 1Z4, Canada
Source
IEEE ACCESS | 2020, Vol. 8
Funding
Natural Sciences and Engineering Research Council of Canada (NSERC);
Keywords
Machine learning; Internet of Things; Correlation; Production facilities; Temperature sensors; Complexity theory; Temperature distribution; asynchronous advantage actor-critic networks; soft scheduling; deep reinforcement learning; soft real-time systems; ALGORITHMS;
DOI
10.1109/ACCESS.2020.2994600
Chinese Library Classification (CLC)
TP [automation technology, computer technology];
Subject Classification Code
0812;
Abstract
We study the problem of handling the trade-off between timeliness and criticality when gathering data from multiple sources in complex environments. In IoT environments where several sensors transmit data packets of varying criticality and timeliness, the rate of data collection may be limited by associated costs (e.g., bandwidth limitations and energy considerations). Moreover, complexity in how the environment generates data can make it harder to balance criticality and timeliness during data gathering. For instance, when the data packets of two or more sensors are correlated (in either criticality or timeliness), or when there are temporal dependencies among sensors, such patterns can defeat naive data-gathering policies. Motivated by the success of the Asynchronous Advantage Actor-Critic (A3C) approach, we first mapped vanilla A3C onto our problem and compared its performance, measured as the criticality-weighted deadline miss ratio, against the considered baselines in multiple scenarios. We observed that A3C's performance degraded in complex scenarios. We therefore modified the A3C network by embedding a long short-term memory (LSTM) layer to improve performance in cases where vanilla A3C could not capture repeating patterns in the data streams. Simulation results show that the modified A3C reduces the criticality-weighted deadline miss ratio from 0.3 to 0.19.
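The abstract's key architectural change is embedding an LSTM into the A3C network so the policy can exploit temporal patterns across sensor streams. Below is a minimal, hypothetical sketch (not the authors' code) of such a recurrent actor-critic in PyTorch; the class name, layer sizes, and the discrete sensor-selection action space are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of a recurrent actor-critic network in the style the
# abstract describes (A3C with an embedded LSTM). All names and dimensions
# are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RecurrentActorCritic(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int, hidden_dim: int = 128):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hidden_dim)        # embed observation
        self.lstm = nn.LSTMCell(hidden_dim, hidden_dim)      # carry temporal state
        self.policy_head = nn.Linear(hidden_dim, n_actions)  # actor: action logits
        self.value_head = nn.Linear(hidden_dim, 1)           # critic: state value

    def forward(self, obs, hx, cx):
        x = F.relu(self.encoder(obs))
        hx, cx = self.lstm(x, (hx, cx))
        return self.policy_head(hx), self.value_head(hx), (hx, cx)

# Usage: one decision step; the hidden state (hx, cx) is carried across steps
# so the policy can pick up correlations and temporal dependencies among sensors.
net = RecurrentActorCritic(obs_dim=16, n_actions=4)
hx = torch.zeros(1, 128)
cx = torch.zeros(1, 128)
obs = torch.randn(1, 16)
logits, value, (hx, cx) = net(obs, hx, cx)
action = torch.distributions.Categorical(logits=logits).sample()
```

Carrying the recurrent state across decision steps is what would let such a network model the repeating and temporally dependent packet patterns that, per the abstract, vanilla (feed-forward) A3C fails to capture.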
Pages: 95208-95222
Page count: 15