Multi-agent reinforcement learning for electric vehicle decarbonized routing and scheduling

Cited by: 7
Authors
Wang, Yi [1 ]
Qiu, Dawei [1 ]
He, Yinglong [2 ]
Zhou, Quan [3 ]
Strbac, Goran [1 ]
Affiliations
[1] Imperial Coll London, Dept Elect & Elect Engn, London SW7 2AZ, England
[2] Univ Surrey, Adv Resilient Transport Syst, Guildford GU2 7XH, England
[3] Univ Birmingham, Birmingham CASE Automot Res Ctr, Birmingham B15 2TT, England
Funding
UK Engineering and Physical Sciences Research Council (EPSRC)
Keywords
Electric vehicles; Carbon emissions; Carbon intensity; Routing and scheduling; Transport and power networks; Multi-agent reinforcement learning; COUPLED TRANSPORTATION; MODEL;
DOI
10.1016/j.energy.2023.129335
Chinese Library Classification
O414.1 [Thermodynamics]
Abstract
Low-carbon transitions require joint efforts from the electricity grid and the transport network, where electric vehicles (EVs) play a key role. In particular, EVs can reduce the carbon emissions of transport networks through eco-routing while providing a carbon intensity service to power networks via the vehicle-to-grid technique. Unlike previous research that treated EV routing and scheduling as separate problems, this paper studies their coordinated effect with the objective of reducing carbon emissions on both sides. To solve this problem, we propose a multi-agent reinforcement learning (MARL) method that does not rely on prior knowledge of the system and can adapt to various uncertainties and dynamics. The proposed method learns a hierarchical structure for the mutually exclusive discrete routing and continuous scheduling decisions via a hybrid policy. Extensive case studies based on a virtual 7-node 10-edge transport network coupled with a 15-bus power network, as well as a coupled real-world central London transport network and 33-bus power network, demonstrate the effectiveness of the proposed MARL method in reducing carbon emissions in the transport network and providing a carbon intensity service in the power network.
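The hybrid policy the abstract describes can be pictured as a single agent head with two mutually exclusive action branches: a discrete routing choice while the EV is driving, and a continuous charge/discharge power while it is parked. The sketch below is not the authors' implementation; the class, state layout, linear parameterization, and the 7 kW power limit are all illustrative assumptions, and a real version would use a deep network trained with a MARL algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

class HybridPolicy:
    """Hedged sketch of a hybrid discrete/continuous policy head.

    Maps a state vector to either categorical route logits (driving) or
    Gaussian charging-power parameters (parked) via one shared linear
    layer per branch; only one branch applies at any decision step.
    """

    def __init__(self, state_dim: int, n_routes: int, p_max: float = 7.0):
        self.w_route = rng.normal(0.0, 0.1, (state_dim, n_routes))
        self.w_power = rng.normal(0.0, 0.1, (state_dim, 2))  # mean, log-std
        self.p_max = p_max  # assumed charger power limit in kW

    def act(self, state: np.ndarray, driving: bool) -> dict:
        if driving:
            # Discrete branch: categorical distribution over outgoing edges.
            logits = state @ self.w_route
            probs = np.exp(logits - logits.max())
            probs /= probs.sum()
            return {"route": int(rng.choice(len(probs), p=probs))}
        # Continuous branch: Gaussian charge/discharge power, clipped to
        # [-p_max, p_max]; negative power models vehicle-to-grid discharge.
        mean, log_std = state @ self.w_power
        power = rng.normal(mean, np.exp(log_std))
        return {"power": float(np.clip(power, -self.p_max, self.p_max))}

policy = HybridPolicy(state_dim=4, n_routes=3)
route_action = policy.act(np.ones(4), driving=True)
power_action = policy.act(np.ones(4), driving=False)
```

The mutual exclusivity is what makes the action space "hybrid" rather than simply mixed: the agent never emits both action types at once, so each branch can be trained with its own likelihood term.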
Pages: 15
Related papers
50 records
  • [41] Deep reinforcement learning for multi-agent interaction
    Ahmed, Ibrahim H.
    Brewitt, Cillian
    Carlucho, Ignacio
    Christianos, Filippos
    Dunion, Mhairi
    Fosong, Elliot
    Garcin, Samuel
    Guo, Shangmin
    Gyevnar, Balint
    McInroe, Trevor
    Papoudakis, Georgios
    Rahman, Arrasy
    Schafer, Lukas
    Tamborski, Massimiliano
    Vecchio, Giuseppe
    Wang, Cheng
    Albrecht, Stefano V.
    AI COMMUNICATIONS, 2022, 35 (04) : 357 - 368
  • [42] Multi-Agent Reinforcement Learning with Reward Delays
    Zhang, Yuyang
    Zhang, Runyu
    Gu, Yuantao
    Li, Na
    LEARNING FOR DYNAMICS AND CONTROL CONFERENCE, VOL 211, 2023, 211
  • [43] Solving job scheduling problems in a resource preemption environment with multi-agent reinforcement learning
    Wang, Xiaohan
    Zhang, Lin
    Lin, Tingyu
    Zhao, Chun
    Wang, Kunyu
    Chen, Zhen
    ROBOTICS AND COMPUTER-INTEGRATED MANUFACTURING, 2022, 77
  • [44] BenchMARL: Benchmarking Multi-Agent Reinforcement Learning
    Bettini, Matteo
    Prorok, Amanda
    Moens, Vincent
    JOURNAL OF MACHINE LEARNING RESEARCH, 2024, 25
  • [45] Community shared ES-PV system for managing electric vehicle loads via multi-agent reinforcement learning
    Talihati, Baligen
    Fu, Shiyi
    Zhang, Bowen
    Zhao, Yuqing
    Wang, Yu
    Sun, Yaojie
    APPLIED ENERGY, 2025, 380
  • [46] Multi-agent reinforcement learning for character control
    Cheng Li
    Levi Fussell
    Taku Komura
    The Visual Computer, 2021, 37 : 3115 - 3123
  • [47] Multi-agent deep reinforcement learning approach for EV charging scheduling in a smart grid
    Park, Keonwoo
    Moon, Ilkyeong
    APPLIED ENERGY, 2022, 328
  • [48] A Review of Multi-Agent Reinforcement Learning Algorithms
    Liang, Jiaxin
    Miao, Haotian
    Li, Kai
    Tan, Jianheng
    Wang, Xi
    Luo, Rui
    Jiang, Yueqiu
    ELECTRONICS, 2025, 14 (04):
  • [49] Multi-agent reinforcement learning with weak ties☆
    Wang, Huan
    Zhou, Xu
    Kang, Yu
    Xue, Jian
    Yang, Chenguang
    Liu, Xiaofeng
    INFORMATION FUSION, 2025, 118
  • [50] Multi-agent reinforcement learning based textile dyeing workshop dynamic scheduling method
    He, J.
    Zhang, J.
    Zhang, P.
    Zheng, P.
    Wang, M.
    Jisuanji Jicheng Zhizao Xitong/Computer Integrated Manufacturing Systems, CIMS, 2023, 29 (01): : 61 - 74