Actor-critic multi-objective reinforcement learning for non-linear utility functions

Cited by: 5
Authors
Reymond, Mathieu [1]
Hayes, Conor F. [2]
Steckelmacher, Denis [1]
Roijers, Diederik M. [1,3]
Nowé, Ann [1]
Affiliations
[1] Vrije Univ Brussel, Brussels, Belgium
[2] Univ Galway, Galway, Ireland
[3] HU Univ Appl Sci Utrecht, Utrecht, Netherlands
Keywords
Reinforcement learning; Multi-objective reinforcement learning; Non-linear utility functions; Expected scalarized return; SETS;
DOI
10.1007/s10458-023-09604-x
Chinese Library Classification (CLC)
TP [Automation technology; Computer technology]
Discipline classification code
0812
Abstract
We propose a novel multi-objective reinforcement learning algorithm that successfully learns the optimal policy even for non-linear utility functions. Non-linear utility functions pose a challenge for state-of-the-art approaches, both in terms of learning efficiency and in terms of the solution concept. A key insight is that, by proposing a critic that learns a multi-variate distribution over the returns, which is then combined with the accumulated rewards, we can directly optimize the utility function, even if it is non-linear. This vastly increases the range of problems that can be solved compared to single-objective methods or multi-objective methods that require linear utility functions, while avoiding the need to learn the full Pareto front. We demonstrate our method on multiple multi-objective benchmarks, and show that it learns effectively where baseline approaches fail.
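The following is a minimal, hypothetical sketch (in Python/NumPy, not the authors' implementation) of the mechanism described in the abstract: assuming the critic is represented by samples from its multi-variate distribution over future returns, each sample is added to the rewards accrued so far and the non-linear utility is applied before averaging. The names u and expected_utility, and the toy numbers, are illustrative assumptions.

# Hypothetical sketch, not the authors' code: a critic modelled as samples from a
# multi-variate distribution over future returns is combined with the reward
# accumulated so far, so a non-linear utility u can be applied to the total return
# before averaging (an expected-scalarized-return style estimate).
import numpy as np

def u(returns):
    # Example non-linear utility over 2-objective returns: product of objectives,
    # for which E[u(R)] generally differs from u(E[R]).
    return returns[..., 0] * returns[..., 1]

def expected_utility(accrued_reward, return_samples):
    # accrued_reward: (n_objectives,) rewards collected so far in the episode.
    # return_samples: (n_samples, n_objectives) draws from the critic's
    # distribution over future returns for the current state-action pair.
    total_returns = accrued_reward[None, :] + return_samples
    return u(total_returns).mean()

rng = np.random.default_rng(0)
accrued = np.array([1.0, 0.5])                            # toy accrued reward
samples = rng.normal([2.0, 3.0], 0.5, size=(1000, 2))     # toy critic samples
print(expected_utility(accrued, samples))                 # estimate of E[u(R)]

Because the utility is applied to each complete return before averaging, this estimate targets the expected utility of returns; with a linear utility, applying it to the mean return would give the same value, which is why methods restricted to linear utilities can rely on standard expected value functions.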
Pages: 30
Related papers
(50 in total)
  • [41] CONTROLLED SENSING AND ANOMALY DETECTION VIA SOFT ACTOR-CRITIC REINFORCEMENT LEARNING
    Zhong, Chen
    Gursoy, M. Cenk
    Velipasalar, Senem
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 4198 - 4202
  • [42] SOFT ACTOR-CRITIC REINFORCEMENT LEARNING FOR ROBOTIC MANIPULATOR WITH HINDSIGHT EXPERIENCE REPLAY
    Yan, Tao
    Zhang, Wenan
    Yang, Simon X.
    Yu, Li
    INTERNATIONAL JOURNAL OF ROBOTICS & AUTOMATION, 2019, 34 (05) : 536 - 543
  • [43] Automatic collective motion tuning using actor-critic deep reinforcement learning
    Abpeikar, Shadi
    Kasmarik, Kathryn
    Garratt, Matthew
    Hunjet, Robert
    Khan, Md Mohiuddin
    Qiu, Huanneng
    SWARM AND EVOLUTIONARY COMPUTATION, 2022, 72
  • [44] Anomaly Detection Under Controlled Sensing Using Actor-Critic Reinforcement Learning
    Joseph, Geethu
    Gursoy, M. Cenk
    Varshney, Pramod K.
    PROCEEDINGS OF THE 21ST IEEE INTERNATIONAL WORKSHOP ON SIGNAL PROCESSING ADVANCES IN WIRELESS COMMUNICATIONS (IEEE SPAWC2020), 2020,
  • [45] A bounded actor-critic reinforcement learning algorithm applied to airline revenue management
    Lawhead, Ryan J.
    Gosavi, Abhijit
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2019, 82 : 252 - 262
  • [46] Reinforcement learning for automatic quadrilateral mesh generation: A soft actor-critic approach
    Pan, Jie
    Huang, Jingwei
    Cheng, Gengdong
    Zeng, Yong
    NEURAL NETWORKS, 2023, 157 : 288 - 304
  • [47] An Actor-critic Reinforcement Learning Model for Optimal Bidding in Online Display Advertising
    Yuan, Congde
    Guo, Mengzhuo
    Xiang, Chaoneng
    Wang, Shuangyang
    Song, Guoqing
    Zhang, Qingpeng
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2022, 2022, : 3604 - 3613
  • [48] Combining backpropagation with Equilibrium Propagation to improve an Actor-Critic reinforcement learning framework
    Kubo, Yoshimasa
    Chalmers, Eric
    Luczak, Artur
    FRONTIERS IN COMPUTATIONAL NEUROSCIENCE, 2022, 16
  • [49] Dynamic spectrum access and sharing through actor-critic deep reinforcement learning
    Dong, Liang
    Qian, Yuchen
    Xing, Yuan
    EURASIP JOURNAL ON WIRELESS COMMUNICATIONS AND NETWORKING, 2022, 2022 (01)
  • [50] Multi-objective safe reinforcement learning: the relationship between multi-objective reinforcement learning and safe reinforcement learning
    Horie, Naoto
    Matsui, Tohgoroh
    Moriyama, Koichi
    Mutoh, Atsuko
    Inuzuka, Nobuhiro
    ARTIFICIAL LIFE AND ROBOTICS, 2019, 24 (03) : 352 - 359