Q-learning Algorithm for Energy Management in Solar Powered Embedded Monitoring Systems

Cited by: 9
Authors
Prauzek, Michal [1 ]
Mourcet, Nicolas R. A. [2 ]
Hlavica, Jakub [1 ]
Musilek, Petr [3 ]
Affiliations
[1] Tech Univ Ostrava, Fac Elect Engn & Comp Sci, Ostrava, Czech Republic
[2] Polytech Grenoble, Dept Ind Comp & Instrumentat, Grenoble, France
[3] Univ Alberta, Dept Elect & Comp Engn, Edmonton, AB, Canada
Source
2018 IEEE Congress on Evolutionary Computation (CEC), 2018
Keywords
energy management; solar power harvesting; reinforcement learning; remote environmental monitoring systems; data loggers
DOI
10.1109/CEC.2018.8477781
CLC classification
TP18 (theory of artificial intelligence)
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Environmental change has become a considerable concern in recent years. To monitor such variations continuously, it is convenient to employ so-called Environmental Monitoring Systems (EMS), which can be deployed at remote places of interest and are capable of measuring multiple ambient characteristics. Supplying an EMS with power over the long term to prevent blackouts, however, is challenging. This paper introduces an energy management method based on reinforcement learning, specifically Q-learning. The EMS described in this study must meet a set of strict requirements, e.g. low power consumption, high reliability, a self-sustaining power supply, and backup of acquired measurement data in case of an unexpected failure. Energy is harvested using solar panels and stored in supercapacitors. Moreover, given these energy constraints, implementing a computationally complex algorithm on such a system is not feasible. This study describes a solution to the above challenges. The findings were implemented in a physical EMS device and subsequently field-tested to acquire real-life data.
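To make the abstract's approach concrete, the sketch below shows tabular Q-learning applied to a toy duty-cycle control problem for a solar-harvesting node. Everything here is an illustrative assumption, not the paper's actual design: the five-level charge discretization, the three sampling-rate actions, the blackout penalty of -10, and the stochastic harvest model are all invented for demonstration.

```python
import random

# Hypothetical Q-learning sketch for duty-cycle control of a
# solar-powered sensor node. States, actions, rewards, and the
# toy energy model are illustrative assumptions only.

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
STATES = range(5)        # discretized supercapacitor charge (0 = empty)
ACTIONS = [0, 1, 2]      # duty cycle: sleep, low-rate, high-rate sampling

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def step(state, action, harvest):
    """Toy energy model: harvesting raises charge, sampling drains it."""
    next_state = max(0, min(4, state + harvest - action))
    if next_state == 0 and action > 0:        # blackout while active: penalty
        return next_state, -10.0
    return next_state, float(action)          # reward ~ data gathered

def choose(state):
    if random.random() < EPSILON:             # epsilon-greedy exploration
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

random.seed(0)
state = 2
for _ in range(20000):
    harvest = random.choice([0, 0, 1, 1, 2])  # stochastic solar input
    action = choose(state)
    nxt, reward = step(state, action, harvest)
    best_next = max(Q[(nxt, a)] for a in ACTIONS)
    # Standard Q-learning update rule
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    state = nxt

# Learned policy: conserve energy when charge is low, sample when high.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
print(policy)
```

The agent learns to sleep when the supercapacitor is nearly empty (avoiding the blackout penalty) and to sample aggressively when charge is high, which mirrors the qualitative behavior an energy-neutral EMS controller should exhibit.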
Pages: 1068-1074 (7 pages)