Adversarial Constrained Bidding via Minimax Regret Optimization with Causality-Aware Reinforcement Learning

Cited by: 0
Authors
Wang, Haozhe [1 ]
Du, Chao [1 ]
Pang, Panyan [1 ]
He, Li [1 ]
Wang, Liang [1 ]
Zheng, Bo [1 ]
Affiliations
[1] Alibaba Grp, Beijing, Peoples R China
Source
PROCEEDINGS OF THE 29TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2023 | 2023
Keywords
Constrained Bidding; Reinforcement Learning; Causality; Auction
DOI
10.1145/3580305.3599254
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
The proliferation of the Internet has led to the emergence of online advertising, driven by the mechanics of online auctions. In these repeated auctions, software agents participate on behalf of aggregated advertisers to optimize for their long-term utility. To fulfill diverse demands, bidding strategies are employed to optimize advertising objectives subject to different spending constraints. Existing approaches to constrained bidding typically rely on i.i.d. training and test conditions, which contradicts the adversarial nature of online ad markets, where different parties hold potentially conflicting objectives. We therefore study constrained bidding in adversarial bidding environments, assuming no knowledge of the adversarial factors. Instead of relying on the i.i.d. assumption, our insight is to align the training distribution of environments with the potential test distribution while minimizing policy regret. Based on this insight, we propose a practical Minimax Regret Optimization (MiRO) approach that interleaves between a teacher, which finds adversarial environments for tutoring, and a learner, which meta-learns its policy over the given distribution of environments. In addition, we are the first to incorporate expert demonstrations for learning bidding strategies. Through a causality-aware policy design, we improve upon MiRO by distilling knowledge from the experts. Extensive experiments on both industrial and synthetic data show that our method, MiRO with Causality-aware reinforcement Learning (MiROCL), outperforms prior methods by over 30%.
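The teacher/learner interleaving described in the abstract can be illustrated with a toy sketch. This is not the paper's MiRO algorithm: here the environment is a single scalar `theta`, the policy a single scalar action `a`, and the reward `-(a - theta)**2` is chosen so that per-environment regret has the closed form `(a - theta)**2`; all names and numbers are illustrative assumptions.

```python
# Toy minimax-regret teacher/learner loop (illustrative sketch only,
# not the MiRO algorithm from the paper).
# Environment: scalar theta. Policy: scalar action a.
# Reward r(a, theta) = -(a - theta)^2, so the best reward in any
# environment is 0 (at a = theta) and regret is (a - theta)^2.

ENVS = (-1.0, -0.5, 0.0, 0.5, 1.0)  # candidate environment parameters

def regret(a, theta):
    # Best achievable reward in env theta minus the policy's reward.
    return (a - theta) ** 2

def train(a=0.8, lr=0.1, steps=100, envs=ENVS):
    for _ in range(steps):
        # Teacher: propose the environment where the current policy
        # incurs the largest regret (the "adversarial" environment).
        theta = max(envs, key=lambda t: regret(a, t))
        # Learner: gradient step on regret with respect to the action.
        a -= lr * 2.0 * (a - theta)
    return a

a_star = train()
worst = max(regret(a_star, t) for t in ENVS)
print(a_star, worst)
```

In this toy setting the minimax-optimal action is `a = 0`: the teacher alternates between the extreme environments `-1` and `1`, and the learner's action oscillates with shrinking magnitude toward zero, driving the worst-case regret down from its initial value.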
Pages: 2314-2325
Number of pages: 12
Related Papers
50 in total
  • [31] Market Making Strategy Optimization via Deep Reinforcement Learning
    Sun, Tianyuan
    Huang, Dechun
    Yu, Jie
    IEEE ACCESS, 2022, 10 : 9085 - 9093
  • [32] Dynamical Hyperparameter Optimization via Deep Reinforcement Learning in Tracking
    Dong, Xingping
    Shen, Jianbing
    Wang, Wenguan
    Shao, Ling
    Ling, Haibin
    Porikli, Fatih
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2021, 43 (05) : 1515 - 1529
  • [33] Chiron: A Robustness-Aware Incentive Scheme for Edge Learning via Hierarchical Reinforcement Learning
    Liu, Yi
    Guo, Song
    Zhan, Yufeng
    Wu, Leijie
    Hong, Zicong
    Zhou, Qihua
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2024, 23 (08) : 8508 - 8524
  • [34] Closing the Dynamics Gap via Adversarial and Reinforcement Learning for High-Speed Racing
    Niu, Jingyu
    Hu, Yu
    Li, Wei
    Huang, Guangyan
    Han, Yinhe
    Li, Xiaowei
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [35] RL-VAEGAN: Adversarial defense for reinforcement learning agents via style transfer
    Hu, Yueyue
    Sun, Shiliang
    KNOWLEDGE-BASED SYSTEMS, 2021, 221
  • [36] TOTAL: Topology Optimization of Operational Amplifier via Reinforcement Learning
    Chen, Zihao
    Meng, Songlei
    Yang, Fan
    Shang, Li
    Zeng, Xuan
    2023 24TH INTERNATIONAL SYMPOSIUM ON QUALITY ELECTRONIC DESIGN, ISQED, 2023, : 414 - 421
  • [37] NetRL: Task-Aware Network Denoising via Deep Reinforcement Learning
    Xu, Jiarong
    Yang, Yang
    Pu, Shiliang
    Fu, Yao
    Feng, Jun
    Jiang, Weihao
    Lu, Jiangang
    Wang, Chunping
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35 (01) : 810 - 823
  • [38] Static Neural Compiler Optimization via Deep Reinforcement Learning
    Mammadli, Rahim
    Jannesari, Ali
    Wolf, Felix
    PROCEEDINGS OF SIXTH WORKSHOP ON THE LLVM COMPILER INFRASTRUCTURE IN HPC AND WORKSHOP ON HIERARCHICAL PARALLELISM FOR EXASCALE COMPUTING (LLVM-HPC2020 AND HIPAR 2020), 2020, : 1 - 11
  • [39] SEQUENCE-TO-SEQUENCE ASR OPTIMIZATION VIA REINFORCEMENT LEARNING
    Tjandra, Andros
    Sakti, Sakriani
    Nakamura, Satoshi
    2018 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2018, : 5829 - 5833
  • [40] Optimization of anemia treatment in hemodialysis patients via reinforcement learning
    Escandell-Montero, Pablo
    Chermisi, Milena
    Martinez-Martinez, Jose M.
    Gomez-Sanchis, Juan
    Barbieri, Carlo
    Soria-Olivas, Emilio
    Mari, Flavio
    Vila-Frances, Joan
    Stopper, Andrea
    Gatti, Emanuele
    Martin-Guerrero, Jose D.
    ARTIFICIAL INTELLIGENCE IN MEDICINE, 2014, 62 (01) : 47 - 60