Transformer-Based Multi-task Learning for Queuing Time Aware Next POI Recommendation

Cited by: 16
Authors
Halder, Sajal [1 ]
Lim, Kwan Hui [2 ]
Chan, Jeffrey [1 ]
Zhang, Xiuzhen [1 ]
Affiliations
[1] RMIT Univ, Sch Comp Technol, Melbourne, Vic, Australia
[2] Singapore Univ Technol & Design, Singapore, Singapore
Source
ADVANCES IN KNOWLEDGE DISCOVERY AND DATA MINING, PAKDD 2021, PT II | 2021, Vol. 12713
Keywords
Points of Interest (POI); POI Recommendation; Transformer; Multi-tasking; Multi-head attention; Queuing time;
DOI
10.1007/978-3-030-75765-6_41
CLC number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104 ; 0812 ; 0835 ; 1405
Abstract
Next point-of-interest (POI) recommendation is an important and challenging problem due to varied contextual information and the wide variety of human mobility patterns. Most prior studies incorporated users' spatiotemporal and sequential travel patterns to recommend next POIs. However, few of these previous approaches considered the queuing time at POIs and its influence on user mobility. Queuing time plays a significant role in shaping user mobility behaviour, e.g., having to queue a long time to enter a POI might reduce a visitor's enjoyment. Recently, attention-based recurrent neural network approaches have shown promising performance in next POI recommendation, but they are limited to single-head attention, which can have difficulty capturing the complex connections between users, previous travel history and POI information. In this research, we present the problem of queuing time aware next POI recommendation and demonstrate why it is non-trivial to both recommend a next POI and simultaneously predict its queuing time. To solve this problem, we propose a multi-task, multi-head attention transformer model called TLR-M. The model recommends next POIs to the target users and simultaneously predicts the queuing time to access those POIs. By utilising multi-head attention, the TLR-M model can efficiently integrate long-range dependencies between any two POI visits and evaluate their contribution to selecting next POIs and predicting queuing time. Extensive experiments on eight real datasets show that the proposed model outperforms the state-of-the-art baseline approaches in terms of the precision, recall and F1-score evaluation metrics. The model also predicts and minimises queuing time effectively.
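The paper itself is not reproduced in this record; purely as an illustration of the architecture the abstract describes, the following pure-Python sketch shows a shared multi-head self-attention encoder over a POI visit sequence feeding two task-specific heads: a softmax head for next-POI recommendation and a scalar regression head for queuing-time prediction. All dimensions, weight initialisations and function names here are hypothetical and not taken from the TLR-M paper.

```python
import math
import random

random.seed(0)

def matmul(A, B):
    """(n x k) @ (k x m) -> (n x m), on plain lists of lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def rand_matrix(rows, cols):
    return [[random.uniform(-0.5, 0.5) for _ in range(cols)]
            for _ in range(rows)]

def attention_head(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention for one head."""
    Q, K, V = matmul(X, Wq), matmul(X, Wk), matmul(X, Wv)
    d = len(Q[0])
    scores = [[sum(q * k for q, k in zip(Q[i], K[j])) / math.sqrt(d)
               for j in range(len(K))] for i in range(len(Q))]
    weights = [softmax(row) for row in scores]
    return matmul(weights, V)

def multi_head_attention(X, heads):
    """Run each head, then concatenate head outputs feature-wise."""
    outs = [attention_head(X, *h) for h in heads]
    return [sum((o[i] for o in outs), []) for i in range(len(X))]

# Hypothetical sizes: 3 past POI visits embedded in 8 dims, 2 heads of 4 dims,
# 5 candidate next POIs.
d_model, d_head, n_heads, n_pois, seq_len = 8, 4, 2, 5, 3
X = rand_matrix(seq_len, d_model)                 # embedded visit sequence
heads = [(rand_matrix(d_model, d_head),
          rand_matrix(d_model, d_head),
          rand_matrix(d_model, d_head)) for _ in range(n_heads)]

H = multi_head_attention(X, heads)                # seq_len x (n_heads * d_head)
h_last = H[-1]                                    # representation of the latest visit

# Task 1: next-POI recommendation (distribution over candidate POIs)
W_poi = rand_matrix(n_heads * d_head, n_pois)
poi_probs = softmax(matmul([h_last], W_poi)[0])

# Task 2: queuing-time prediction (scalar regression head on the shared encoding)
W_queue = rand_matrix(n_heads * d_head, 1)
queue_time = matmul([h_last], W_queue)[0][0]
```

The point of the sketch is the multi-task wiring: both heads read the same attention-encoded history, so in training the two losses (classification and regression) would jointly shape the shared encoder.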
Pages: 510 - 523 (14 pages)
Related papers
50 records in total
  • [1] ImNext: Irregular Interval Attention and Multi-task Learning for Next POI Recommendation
    He, Xi
    He, Weikang
    Liu, Yilin
    Lu, Xingyu
    Xiao, Yunpeng
    Liu, Yanbing
    KNOWLEDGE-BASED SYSTEMS, 2024, 293
  • [2] Hierarchical Multi-Task Graph Recurrent Network for Next POI Recommendation
    Lim, Nicholas
    Hooi, Bryan
    Ng, See-Kiong
    Goh, Yong Liang
    Weng, Renrong
    Tan, Rui
    PROCEEDINGS OF THE 45TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL (SIGIR '22), 2022, : 1133 - 1143
  • [3] An Interactive Multi-Task Learning Framework for Next POI Recommendation with Uncertain Check-ins
    Zhang, Lu
    Sun, Zhu
    Zhang, Jie
    Lei, Yu
    Li, Chen
    Wu, Ziqing
    Kloeden, Horst
    Klanner, Felix
    PROCEEDINGS OF THE TWENTY-NINTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, : 3551 - 3557
  • [4] Predicting Outcomes for Cancer Patients with Transformer-Based Multi-task Learning
    Gerrard, Leah
    Peng, Xueping
    Clarke, Allison
    Schlegel, Clement
    Jiang, Jing
    AI 2021: ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, 13151 : 381 - 392
  • [5] Multi-Task Learning with Personalized Transformer for Review Recommendation
    Wang, Haiming
    Liu, Wei
    Yin, Jian
    WEB INFORMATION SYSTEMS ENGINEERING - WISE 2021, PT II, 2021, 13081 : 162 - 176
  • [6] A multi-task embedding based personalized POI recommendation method
    Chen, Ling
    Ying, Yuankai
    Lyu, Dandan
    Yu, Shanshan
    Chen, Gencai
    CCF TRANSACTIONS ON PERVASIVE COMPUTING AND INTERACTION, 2021, 3 (03) : 253 - 269
  • [7] Multi-task Active Learning for Pre-trained Transformer-based Models
    Rotman, Guy
    Reichart, Roi
    TRANSACTIONS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, 2022, 10 : 1209 - 1228
  • [8] HTML: Hierarchical Transformer-based Multi-task Learning for Volatility Prediction
    Yang, Linyi
    Ng, Tin Lok James
    Smyth, Barry
    Dong, Ruihai
    WEB CONFERENCE 2020: PROCEEDINGS OF THE WORLD WIDE WEB CONFERENCE (WWW 2020), 2020, : 441 - 451
  • [9] Transformer-based transfer learning and multi-task learning for improving the performance of speech emotion recognition
    Park, Sunchan
    Kim, Hyung Soon
    JOURNAL OF THE ACOUSTICAL SOCIETY OF KOREA, 2021, 40 (05): : 515 - 522