Deep reinforcement learning based optimization of automated guided vehicle time and energy consumption in a container terminal

Cited by: 36
Authors
Drungilas, Darius [1 ]
Kurmis, Mindaugas [1 ]
Senulis, Audrius [1 ]
Lukosius, Zydrunas [1 ]
Andziulis, Arunas [1 ]
Januteniene, Jolanta [1 ]
Bogdevicius, Marijonas [1 ]
Jankunas, Valdas [1 ]
Voznak, Miroslav [1 ,2 ]
Affiliations
[1] Klaipeda Univ, H Manto Str 84, LT-92294 Klaipeda, Lithuania
[2] VSB Tech Univ Ostrava, Dept Telecommun, 17 Listopadu 2172-15, Ostrava 70800, Czech Republic
Keywords
Automated guided vehicle (AGV); Container terminal; Energy consumption; Deep reinforcement learning; Modeling; Optimization; STRATEGIES; MANAGEMENT; EFFICIENCY; BATTERY; PORT;
DOI
10.1016/j.aej.2022.12.057
CLC classification number
T [Industrial technology];
Subject classification code
08;
Abstract
The energy efficiency of port container terminal equipment and the reduction of CO2 emissions are among the biggest challenges facing every seaport in the world. The article presents the modeling of the container transportation process in a terminal from the quay crane to the stack using a battery-powered Automated Guided Vehicle (AGV) to estimate the energy consumption parameters. An AGV speed control algorithm based on Deep Reinforcement Learning (DRL) is proposed to optimize the energy consumption of container transportation. The results, compared with real transportation measurements, showed that the proposed DRL-based approach, which dynamically changes the driving speed of the AGV, reduces energy consumption by 4.6%. The results of the research provide the prerequisites for further research aimed at finding optimal strategies for autonomous vehicle movement, including context awareness and information sharing with other vehicles in the terminal. (c) 2022 THE AUTHORS. Published by Elsevier BV on behalf of Faculty of Engineering, Alexandria University. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
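The abstract describes speed control as the lever the DRL agent uses to trade travel time against energy. A minimal sketch of that trade-off, not the paper's model: the environment, cost coefficients, and state discretization below are all assumed for illustration, and tabular Q-learning stands in for the deep RL agent the article actually uses.

```python
import random

# Hypothetical toy setup (assumed, not from the paper): an AGV covers a
# fixed distance; each step the agent picks a speed. Energy cost grows
# with speed^2, while a constant per-second penalty represents elapsed
# time, so an intermediate speed minimizes total cost.

SPEEDS = [1.0, 2.0, 3.0, 4.0]   # m/s, discretized actions (assumed)
DISTANCE = 100.0                 # m, one quay-crane-to-stack run (assumed)
DT = 1.0                         # s per decision step

def step(pos, speed):
    """Advance one step; return (new position, reward, done flag)."""
    new_pos = pos + speed * DT
    energy = 0.05 * speed ** 2 * DT   # toy quadratic energy cost
    time_cost = 0.2 * DT              # toy penalty per second elapsed
    done = new_pos >= DISTANCE
    return new_pos, -(energy + time_cost), done

def train(episodes=2000, alpha=0.1, gamma=0.99, eps=0.1):
    """Tabular Q-learning over coarse 10 m position buckets."""
    q = {}  # state bucket -> list of action values
    n = len(SPEEDS)
    for _ in range(episodes):
        pos = 0.0
        while True:
            s = int(pos // 10)
            qs = q.setdefault(s, [0.0] * n)
            a = (random.randrange(n) if random.random() < eps
                 else max(range(n), key=qs.__getitem__))
            pos2, r, done = step(pos, SPEEDS[a])
            if done:
                target = r
            else:
                target = r + gamma * max(q.setdefault(int(pos2 // 10), [0.0] * n))
            qs[a] += alpha * (target - qs[a])
            pos = pos2
            if done:
                break
    return q

policy = train()
```

With these assumed coefficients, driving at a constant speed v costs roughly DISTANCE * (0.05 v + 0.2 / v), minimized at v = 2 m/s, so a trained agent should prefer the intermediate speed rather than the fastest or slowest option, mirroring the energy-versus-time balancing the abstract describes.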
Pages: 397-407
Number of pages: 11
Related papers
36 records in total
[21]   Energy efficiency and CO2 emissions of port container terminal equipment: Evidence from the Port of Valencia [J].
Martinez-Moya, Julian ;
Vazquez-Paja, Barbara ;
Gimenez Maldonado, Jose Andres .
ENERGY POLICY, 2019, 131 :312-319
[22]  
NABU, 2013, BATT EL AGVS
[23]   Lithium iron phosphate based battery - Assessment of the aging parameters and development of cycle life model [J].
Omar, Noshin ;
Monem, Mohamed Abdel ;
Firouz, Yousef ;
Salminen, Justin ;
Smekens, Jelle ;
Hegazy, Omar ;
Gaulous, Hamid ;
Mulder, Grietus ;
Van den Bossche, Peter ;
Coosemans, Thierry ;
Van Mierlo, Joeri .
APPLIED ENERGY, 2014, 113 :1575-1585
[24]   Solving flow-shop scheduling problem with a reinforcement learning algorithm that generalizes the value function with neural network [J].
Ren, Jianfeng ;
Ye, Chunming ;
Yang, Feng .
ALEXANDRIA ENGINEERING JOURNAL, 2021, 60 (03) :2787-2800
[25]   Deep neural networks-based real-time optimal navigation for an automatic guided vehicle with static and dynamic obstacles [J].
Ren, Zhigang ;
Lai, Jialun ;
Wu, Zongze ;
Xie, Shengli .
NEUROCOMPUTING, 2021, 443 :329-344
[26]   Analysis of the Current Electric Battery Models for Electric Vehicle Simulation [J].
Saldana, Gaizka ;
Ignacio San Martin, Jose ;
Zamora, Inmaculada ;
Javier Asensio, Francisco ;
Onederra, Oier .
ENERGIES, 2019, 12 (14)
[27]   Electro-thermal analysis of Lithium Iron Phosphate battery for electric vehicles [J].
Saw, L. H. ;
Somasundaram, K. ;
Ye, Y. ;
Tay, A. A. O. .
JOURNAL OF POWER SOURCES, 2014, 249 :231-238
[28]  
Sutton RS, 2018, ADAPT COMPUT MACH LE, P1
[29]  
Lillicrap TP, 2016, CONTINUOUS CONTROL D, DOI 10.48550/ARXIV.1509.02971
[30]   Controlling distributed energy resources via deep reinforcement learning for load flexibility and energy efficiency [J].
Touzani, Samir ;
Prakash, Anand Krishnan ;
Wang, Zhe ;
Agarwal, Shreya ;
Pritoni, Marco ;
Kiran, Mariam ;
Brown, Richard ;
Granderson, Jessica .
APPLIED ENERGY, 2021, 304 (304)