Intrinsically motivated reinforcement learning based recommendation with counterfactual data augmentation

Cited by: 3
Authors
Chen, Xiaocong [1]
Wang, Siyu [1]
Qi, Lianyong [2]
Li, Yong [3]
Yao, Lina [1,4]
Affiliations
[1] Univ New South Wales, Sch Comp Sci & Engn, Sydney, NSW 2052, Australia
[2] China Univ Petr East China, Coll Comp Sci & Technol, Dongying, Peoples R China
[3] Tsinghua Univ, Dept Elect Engn, Beijing 100084, Peoples R China
[4] CSIRO, Data 61, Eveleigh, NSW 2015, Australia
Source
WORLD WIDE WEB-INTERNET AND WEB INFORMATION SYSTEMS | 2023, Vol. 26, No. 5
Keywords
Recommender systems; Deep reinforcement learning; Counterfactual reasoning; CAPACITY
DOI
10.1007/s11280-023-01187-7
CLC Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Deep reinforcement learning (DRL) has shown promising results in modeling dynamic user preferences in recommender systems (RS) in the recent literature. However, training a DRL agent in a sparse RS environment poses a significant challenge: the agent must balance exploring informative user-item interaction trajectories against exploiting existing trajectories for policy learning, the well-known exploration-exploitation trade-off. This trade-off strongly affects recommendation performance when the environment is sparse, and it is even harder to manage in DRL-based RS, where the agent needs to explore informative trajectories deeply and exploit them efficiently. To address this issue, we propose a novel intrinsically motivated reinforcement learning (IMRL) method that enhances the agent's capability to explore informative interaction trajectories in the sparse environment. We further enrich these trajectories via an adaptive counterfactual augmentation strategy with a customised threshold to improve their efficiency in exploitation. Extensive experiments on six offline datasets and three online simulation platforms demonstrate that IMRL outperforms existing state-of-the-art methods in terms of recommendation performance in sparse RS environments.
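The abstract names two mechanisms: an intrinsic exploration bonus added to the extrinsic reward, and counterfactual trajectory augmentation gated by a threshold. As a rough orientation only, here is a minimal Python sketch of both ideas under stated assumptions; the function names (intrinsic_bonus, counterfactual_augment), the nearest-neighbour novelty measure, the Gaussian state perturbation, and the fixed reward_threshold standing in for the paper's adaptive, customised threshold are all illustrative choices, not the authors' implementation.

```python
# Illustrative sketch of intrinsic-reward shaping and threshold-gated
# counterfactual augmentation; NOT the paper's actual implementation.
import numpy as np

def intrinsic_bonus(state, memory, beta=0.1):
    # Novelty-style intrinsic reward: distance from the current state
    # embedding to its nearest previously visited state. States far
    # from anything seen before earn a larger exploration bonus.
    if len(memory) == 0:
        return beta
    dists = np.linalg.norm(np.stack(memory) - state, axis=1)
    return beta * float(dists.min())

def shaped_reward(extrinsic, state, memory, beta=0.1):
    # Reward used for policy learning: the extrinsic signal
    # (e.g. click/rating) plus the intrinsic exploration bonus.
    return extrinsic + intrinsic_bonus(state, memory, beta)

def counterfactual_augment(trajectory, reward_threshold=0.5,
                           noise_scale=0.05, rng=None):
    # Counterfactual augmentation (assumed form): perturb each state
    # embedding to create a "slightly different user context" transition,
    # keeping only transitions whose observed reward clears the threshold,
    # so exploitation is fed informative trajectories rather than noise.
    rng = np.random.default_rng(0) if rng is None else rng
    augmented = []
    for state, action, reward in trajectory:
        if reward < reward_threshold:
            continue
        cf_state = state + rng.normal(scale=noise_scale, size=state.shape)
        augmented.append((cf_state, action, reward))
    return augmented

# Tiny usage example with random 8-dimensional embeddings.
rng = np.random.default_rng(42)
memory = [rng.normal(size=8) for _ in range(100)]
state = rng.normal(size=8)
r = shaped_reward(extrinsic=1.0, state=state, memory=memory)
trajectory = [(rng.normal(size=8), a, float(rng.uniform())) for a in range(5)]
extra = counterfactual_augment(trajectory, rng=rng)
```

In a full DRL recommender, shaped_reward would stand in for the raw feedback signal during policy updates, and the transitions returned by counterfactual_augment would be pushed into the replay buffer alongside real ones; the threshold controls how aggressively low-reward counterfactuals are discarded.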
Pages: 3253-3274
Page count: 22
Related Papers
50 records in total
[31] Automatic Data Augmentation via Deep Reinforcement Learning for Effective Kidney Tumor Segmentation. Qin, Tiexin; Wang, Ziyuan; He, Kelei; Shi, Yinghuan; Gao, Yang; Shen, Dinggang. 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, 2020: 1419-1423.
[32] GNN-based Deep Reinforcement Learning for MBD Product Model Recommendation. Hu, Yuying; Sheng, Zewen; Ye, Min; Zhang, Meiyu; Jian, Chengfeng. International Journal of Computer Integrated Manufacturing, 2024, 37(1-2): 183-197.
[33] CIPPO: Contrastive Imitation Proximal Policy Optimization for Recommendation Based on Reinforcement Learning. Chen, Weilong; Zhang, Shaoliang; Xie, Ruobing; Xia, Feng; Lin, Leyu; Zhang, Xinran; Wang, Yan; Zhang, Yanru. IEEE Transactions on Knowledge and Data Engineering, 2024, 36(11): 5753-5767.
[34] Towards a Reinforcement Learning-based Exploratory Search for Mashup Tag Recommendation. Anarfi, Richard; Kwapong, Benjamin; Fletcher, Kenneth K. 2021 IEEE International Conference on Smart Data Services (SMDS 2021), 2021: 8-17.
[35] Cooperation Skill Motivated Reinforcement Learning for Traffic Signal Control. Xin, Jie; Zeng, Jing; Cong, Ya; Jiang, Weihao; Pu, Shiliang. 2023 International Joint Conference on Neural Networks (IJCNN), 2023.
[36] Counterfactual Evolutionary Reasoning for Virtual Driver Reinforcement Learning in Safe Driving. Ye, Peijun; Qi, Hao; Zhu, Fenghua; Lv, Yisheng. IEEE Transactions on Intelligent Vehicles, 2023, 8(12): 4696-4705.
[37] Reinforcement Learning and Counterfactual Reasoning Explain Adaptive Behavior in a Changing Environment. Zhang, Yunfeng; Paik, Jaehyon; Pirolli, Peter. Topics in Cognitive Science, 2015, 7(2): 368-381.
[38] Cooperative Multi-Agent Deep Reinforcement Learning with Counterfactual Reward. Shao, Kun; Zhu, Yuanheng; Tang, Zhentao; Zhao, Dongbin. 2020 International Joint Conference on Neural Networks (IJCNN), 2020.
[39] Unified Intrinsically Motivated Exploration for Off-Policy Learning in Continuous Action Spaces. Saglam, Baturay; Mutlu, Furkan B.; Dalmaz, Onat; Kozat, Suleyman S. 2022 30th Signal Processing and Communications Applications Conference (SIU), 2022.
[40] Disentangled Variational Auto-encoder Enhanced by Counterfactual Data for Debiasing Recommendation. Guo, Yupu; Cai, Fei; Zheng, Jianming; Zhang, Xin; Chen, Honghui. Complex & Intelligent Systems, 2024, 10(2): 3119-3132.