Actor-critic multi-objective reinforcement learning for non-linear utility functions

Cited by: 5
Authors
Reymond, Mathieu [1 ]
Hayes, Conor F. [2 ]
Steckelmacher, Denis [1 ]
Roijers, Diederik M. [1 ,3 ]
Nowe, Ann [1 ]
Affiliations
[1] Vrije Univ Brussel, Brussels, Belgium
[2] Univ Galway, Galway, Ireland
[3] HU Univ Appl Sci Utrecht, Utrecht, Netherlands
Keywords
Reinforcement learning; Multi-objective reinforcement learning; Non-linear utility functions; Expected scalarized return; SETS;
DOI
10.1007/s10458-023-09604-x
Chinese Library Classification (CLC)
TP [Automation technology, computer technology];
Discipline code
0812;
Abstract
We propose a novel multi-objective reinforcement learning algorithm that successfully learns the optimal policy even for non-linear utility functions. Non-linear utility functions pose a challenge for state-of-the-art approaches, both in terms of learning efficiency and in terms of the solution concept. A key insight is that a critic which learns a multivariate distribution over the returns, combined with the rewards accumulated so far, allows us to optimize the utility function directly, even when it is non-linear. This vastly increases the range of problems that can be solved compared to single-objective methods or multi-objective methods that require linear utility functions, while avoiding the need to learn the full Pareto front. We demonstrate our method on multiple multi-objective benchmarks, and show that it learns effectively where baseline approaches fail.
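The abstract describes optimizing under the expected scalarized return (ESR) criterion, i.e. maximizing E[u(sum_t gamma^t r_t)] for a vector-valued reward r_t and a possibly non-linear utility u, rather than applying u to the expected vector return. The following minimal NumPy sketch illustrates only the core evaluation step implied by that description; it is not the authors' implementation, and the utility function, the sampling scheme, and all names (expected_utility, acc_reward, return_samples) are illustrative assumptions. The idea is that if the critic provides samples of the multivariate return-to-go, they can be added to the rewards accumulated so far and passed through u directly:

import numpy as np

def expected_utility(acc_reward, return_samples, utility):
    # Monte-Carlo estimate of E[u(R_acc + Z)], where Z is the vector-valued
    # return-to-go drawn from the critic's multivariate return distribution.
    #   acc_reward     : (n_objectives,) rewards accumulated so far
    #   return_samples : (n_samples, n_objectives) samples from the critic
    #   utility        : scalar-valued, possibly non-linear, utility function
    totals = acc_reward + return_samples
    return float(np.mean([utility(v) for v in totals]))

# Hypothetical two-objective utility, non-linear (concave) in the first objective.
def utility(v):
    return np.sqrt(max(v[0], 0.0)) + 2.0 * max(v[1], 0.0)

rng = np.random.default_rng(0)
acc = np.array([2.0, 1.0])                             # rewards collected up to time t
samples = rng.normal([3.0, 2.0], 0.5, size=(512, 2))   # stand-in for critic samples
print(expected_utility(acc, samples, utility))

In an actor-critic loop of this kind, such an estimate would take the place of a linearly scalarized value: the actor's update is weighted by the utility of accumulated-plus-sampled returns, which is what makes direct optimization of a non-linear u possible.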
Pages: 30
Related papers
50 records in total
  • [31] Optimal Policy of Multiplayer Poker via Actor-Critic Reinforcement Learning
    Shi, Daming
    Guo, Xudong
    Liu, Yi
    Fan, Wenhui
    ENTROPY, 2022, 24 (06)
  • [32] Actor Critic-based Multi Objective Reinforcement Learning for Multi Access Edge Computing
    Khot, Vishal
    Vallisha, M.
    Pai, Sharan S.
    Shekar, Chandra R. K.
    Kayarvizhy, N.
    INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS, 2024, 15 (02) : 382 - 389
  • [33] Supervised actor-critic reinforcement learning with action feedback for algorithmic trading
    Sun, Qizhou
    Si, Yain-Whar
    APPLIED INTELLIGENCE, 2023, 53 (13) : 16875 - 16892
  • [34] A Novel Actor-Critic Motor Reinforcement Learning for Continuum Soft Robots
    Pantoja-Garcia, Luis
    Parra-Vega, Vicente
    Garcia-Rodriguez, Rodolfo
    Vazquez-Garcia, Carlos Ernesto
    ROBOTICS, 2023, 12 (05)
  • [35] Adaptive Assist-as-needed Control Based on Actor-Critic Reinforcement Learning
    Zhang, Yufeng
    Li, Shuai
    Nolan, Karen J.
    Zanotto, Damiano
    2019 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2019, : 4066 - 4071
  • [36] Dynamic spectrum access and sharing through actor-critic deep reinforcement learning
    Dong, Liang
    Qian, Yuchen
    Xing, Yuan
    EURASIP JOURNAL ON WIRELESS COMMUNICATIONS AND NETWORKING, 2022
  • [37] Scalable Online Disease Diagnosis via Multi-Model-Fused Actor-Critic Reinforcement Learning
    He, Weijie
    Chen, Ting
    PROCEEDINGS OF THE 28TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2022, 2022, : 4695 - 4703
  • [38] Entropy regularized actor-critic based multi-agent deep reinforcement learning for stochastic games
    Hao, Dong
    Zhang, Dongcheng
    Shi, Qi
    Li, Kai
    INFORMATION SCIENCES, 2022, 617 : 17 - 40
  • [39] SAC-FACT: Soft Actor-Critic Reinforcement Learning for Counterfactual Explanations
    Ezzeddine, Fatima
    Ayoub, Omran
    Andreoletti, Davide
    Giordano, Silvia
    EXPLAINABLE ARTIFICIAL INTELLIGENCE, XAI 2023, PT I, 2023, 1901 : 195 - 216
  • [40] IMPROVING ACTOR-CRITIC REINFORCEMENT LEARNING VIA HAMILTONIAN MONTE CARLO METHOD
    Xu, Duo
    Fekri, Faramarz
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 4018 - 4022