Leveraging transfer learning in reinforcement learning to tackle competitive influence maximization

Cited by: 0
Authors
Khurshed Ali
Chih-Yu Wang
Yi-Shin Chen
Affiliations
[1] Taiwan International Graduate Program in Social Networks and Human-Centered Computing, Institute of Information Science, Academia Sinica
[2] Department of Computer Science, Sukkur IBA University
[3] Research Center for Information Technology Innovation, Academia Sinica
[4] Institute of Information Systems and Applications (ISA), National Tsing Hua University
Source
Knowledge and Information Systems | 2022, Vol. 64
Keywords
Influence maximization; Transfer learning; Reinforcement learning; Social networks; Q-learning
DOI: not available
Abstract
Competitive influence maximization (CIM) is a key problem that seeks highly influential users in order to maximize one party's reward over its competitor's. Heuristic and game-theory-based approaches have been proposed to tackle the CIM problem. However, these approaches select the key influential users in a single first round, after the competitor's seed nodes are already known. To overcome this first-round seed selection limitation, reinforcement learning (RL)-based models have been proposed that allow parties to select seed nodes over multiple rounds without explicitly knowing the competitor's decisions. Despite the successful application of RL-based models to CIM, these models require extensive training time to find an optimal strategy whenever the network or the agent's settings change. To address this efficiency issue, we apply transfer learning to RL-based methods in order to reduce training time and reuse the knowledge gained on a source network on a target network. Our objective is twofold: first, to design a state representation of the source and target networks that lets the knowledge gained on the source network be exploited efficiently on the target network; second, to find the transfer learning (TL) method within reinforcement learning that is best suited to the competitive influence maximization problem. We validate the proposed TL methods under two different agent settings. Experimental results demonstrate that our TL methods achieve better or similar performance compared with the baseline model while significantly reducing training time on the target networks.
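To make the high-level idea concrete, below is a minimal, hypothetical sketch of transfer learning in tabular Q-learning, in the spirit of the approach the abstract describes. It assumes states are network-agnostic feature encodings, so Q-values learned on a source network remain meaningful on a target network; the class and function names (QLearningAgent, transfer) and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import random
from collections import defaultdict

# Hypothetical sketch: tabular Q-learning with transfer for a CIM-style
# seed-selection task. States are assumed to be network-agnostic feature
# tuples (e.g., discretized remaining budget and fraction of activated
# nodes), which is what makes Q-values portable across networks.

class QLearningAgent:
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)   # Q[(state, action)] -> estimated value
        self.actions = actions        # candidate seed-selection actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        # Epsilon-greedy selection over the candidate actions.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning update.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])

def transfer(source_agent, target_agent):
    # Warm-start the target agent with the source agent's Q-values.
    # Because states are shared feature encodings rather than raw node IDs,
    # the copied values act as an informed initialization that the target
    # agent then fine-tunes, cutting training time on the target network.
    target_agent.q = defaultdict(float, source_agent.q)
```

In such a scheme the warm start helps most when the source and target networks have similar structural statistics; when they diverge, the copied Q-values serve only as a starting point that further training on the target network must correct.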
Pages: 2059–2090 (31 pages)