- [31] 2nd Workshop on Multi-Armed Bandits and Reinforcement Learning: Advancing Decision Making in E-Commerce and Beyond. Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2023), 2023: 5890.
- [36] Multi-Armed Bandits for Human-Machine Decision Making. 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2018: 6986-6990.
- [38] Regression Oracles and Exploration Strategies for Short-Horizon Multi-Armed Bandits. 2020 IEEE Conference on Games (IEEE CoG 2020), 2020: 312-319.
- [40] Exploiting History Data for Nonstationary Multi-armed Bandit. Machine Learning and Knowledge Discovery in Databases, 2021, 12975: 51-66.