Deep recurrent Q-network algorithm for carbon emission allowance trading strategy

Cited by: 0
Authors
Wu, Chao [1 ]
Bi, Wenjie [2 ]
Liu, Haiying [3 ]
Affiliations
[1] South China Univ Technol, Sch Business Adm, Guangzhou 510640, Peoples R China
[2] Cent South Univ, Business Sch, Changsha 410083, Peoples R China
[3] Hunan Univ Finance & Econ, Accounting Sch, Changsha 410083, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Global warming; Carbon trading markets; Deep reinforcement learning; Deep Recurrent Q-Network; Algorithmic trading; China
DOI
10.1016/j.jenvman.2024.123308
Chinese Library Classification (CLC)
X [Environmental Science, Safety Science]
Discipline code
08; 0830
Abstract
Against the backdrop of global warming, the carbon trading market is regarded as an effective means of emission reduction. As more and more companies and individuals participate in carbon markets, helping them automatically identify carbon trading investment opportunities and make intelligent trading decisions is of great theoretical and practical significance. Based on the characteristics of the carbon trading market, we propose a novel deep reinforcement learning (DRL) trading strategy built on a Deep Recurrent Q-Network (DRQN). The experimental results show that the carbon allowance trading model based on the DRQN algorithm can provide optimal trading strategies and adapt to market changes. Specifically, the annualized returns of the DRQN strategy in the Guangdong (GD) and Hubei (HB) carbon markets are 15.43% and 34.75%, respectively, significantly outperforming the other strategies tested. To better match real deployment scenarios, we analyze the impact of discount factors and trading costs. The results indicate that the discount factor provides participants with clearer expectations: in both carbon markets (GD and HB) the optimal discount factor is 0.4, as values that are either too small or too large harm trading performance. Meanwhile, the government can safeguard the fairness of carbon trading by regulating trading costs to limit the speculative behavior of participants.
Pages: 17