SanMove: next location recommendation via self-attention network

Cited by: 1
Authors
Wang, Bin [1 ]
Li, Huifeng [1 ]
Tong, Le [1 ]
Zhang, Qian [1 ]
Zhu, Sulei [1 ]
Yang, Tao [2 ]
Affiliations
[1] Shanghai Normal Univ, Shanghai, Peoples R China
[2] Shanghai Urban & Rural Construct & Traff Dev Res I, Shanghai, Peoples R China
Keywords
Next location prediction; Self-attention network; Auxiliary information; PREDICTION;
DOI
10.1108/DTA-03-2022-0093
Chinese Library Classification
TP [Automation technology, computer technology];
Discipline code
0812 ;
Abstract
Purpose: This paper aims to address the following issues: (1) most existing methods are based on recurrent networks, which are time-consuming to train on long sequences because they do not allow full parallelism; (2) personalized preferences are generally not considered reasonably; (3) existing methods rarely study systematically how to efficiently utilize the various auxiliary information (e.g. user ID and timestamp) in trajectory data and the spatiotemporal relations among nonconsecutive locations.
Design/methodology/approach: The authors propose a novel self-attention network-based model named SanMove that predicts the next location by capturing users' long- and short-term mobility patterns. Specifically, SanMove uses a self-attention module to capture each user's long-term preference, which represents her personalized location preference. Meanwhile, the authors use a spatial-temporal guided noninvasive self-attention (STNOVA) module to exploit auxiliary information in the trajectory data to learn the user's short-term preference.
Findings: The authors evaluate SanMove on two real-world datasets. The experimental results demonstrate that SanMove is not only faster to train than state-of-the-art recurrent neural network (RNN)-based prediction models but also outperforms the baselines on next location prediction.
Originality/value: The authors propose a self-attention-based sequential model named SanMove to predict users' trajectories, comprising long-term and short-term preference learning modules. SanMove allows fully parallel processing of trajectories, which improves processing efficiency. They propose an STNOVA module to capture the sequential transitions of current trajectories. Moreover, the self-attention module processes historical trajectory sequences to capture each user's personalized location preference. The authors conduct extensive experiments on two check-in datasets. The experimental results demonstrate that the model trains quickly and performs well compared with existing RNN-based methods for next location prediction.
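The abstract's core idea, contextualizing a check-in sequence with causal self-attention and scoring candidate next locations, can be illustrated with a minimal sketch. This is not the authors' implementation: the location IDs, embedding table, dimensions, and the single-head attention without learned projections are all invented for illustration, and the STNOVA auxiliary-information mechanism is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting (hypothetical): 10 candidate locations, 8-dim embeddings,
# and one user's check-in sequence of location IDs.
num_locations, d = 10, 8
location_emb = rng.normal(size=(num_locations, d))
visits = [3, 7, 2, 7, 5]

def self_attention(x):
    """Scaled dot-product self-attention (single head, no projections)."""
    scores = x @ x.T / np.sqrt(x.shape[1])
    # Causal mask: each step attends only to itself and earlier steps,
    # so the representation of step t never peeks at future check-ins.
    mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
    scores = np.where(mask, -np.inf, scores)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ x

x = location_emb[visits]         # (seq_len, d) trajectory representation
h = self_attention(x)            # contextualized step representations
logits = h[-1] @ location_emb.T  # score every candidate next location
next_loc = int(np.argmax(logits))
```

Because attention is a matrix product over the whole sequence, all steps are processed in parallel, which is the efficiency advantage over RNNs that the abstract highlights.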
Pages: 330-343
Page count: 14
Related Papers
50 records total
  • [31] Interpreting Trajectories from Multiple Views: A Hierarchical Self-Attention Network for Estimating the Time of Arrival
    Chen, Zebin
    Xiao, Xiaolin
    Gong, Yue-Jiao
    Fang, Jun
    Ma, Nan
    Chai, Hua
    Cao, Zhiguang
    PROCEEDINGS OF THE 28TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2022, 2022, : 2771 - 2779
  • [32] 3D Carbonate Digital Rock Reconstruction by Self-Attention Network and GAN Structure
    Wang, Bin
    Wang, Jiahao
    Liu, Ye
    APPLIED SCIENCES-BASEL, 2023, 13 (24):
  • [33] Industrial units modeling using self-attention network based on feature selection and pattern classification
    Wang, Luyao
    Long, Jian
    Li, Xiang Yang
    Peng, Haifei
    Ye, Zhen Cheng
    CHEMICAL ENGINEERING RESEARCH & DESIGN, 2023, 200 : 176 - 185
  • [34] Data Mining of Students' Consumption Behaviour Pattern Based on Self-Attention Graph Neural Network
    Xu, Fangyao
    Qu, Shaojie
    APPLIED SCIENCES-BASEL, 2021, 11 (22):
  • [35] Novel video anomaly detection method based on global-local self-attention network
    Yang J.
    Wu C.
    Zhou L.
    Tongxin Xuebao/Journal on Communications, 2023, 44 (08): : 241 - 250
  • [36] DBSA-Net: Dual Branch Self-Attention Network for Underwater Acoustic Signal Denoising
    Zhou, Aolong
    Zhang, Wen
    Xu, Guojun
    Li, Xiaoyong
    Deng, Kefeng
    Song, Junqiang
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2023, 31 : 1851 - 1865
  • [37] A semisupervised fault frequency analysis method for rotating machinery based on restricted self-attention network
    Zhang, Huaqin
    Hong, Jichao
    Yang, Haixu
    Zhang, Xinyang
    Liang, Fengwei
    Zhang, Chi
    Huang, Zhongguo
    STRUCTURAL HEALTH MONITORING-AN INTERNATIONAL JOURNAL, 2024,
  • [38] DeePhafier: a phage lifestyle classifier using a multilayer self-attention neural network combining protein information
    Miao, Yan
    Sun, Zhenyuan
    Lin, Chen
    Gu, Haoran
    Ma, Chenjing
    Liang, Yingjian
    Wang, Guohua
    BRIEFINGS IN BIOINFORMATICS, 2024, 25 (05)
  • [39] An integrated multi-head dual sparse self-attention network for remaining useful life prediction
    Zhang, Jiusi
    Li, Xiang
    Tian, Jilun
    Luo, Hao
    Yin, Shen
    RELIABILITY ENGINEERING & SYSTEM SAFETY, 2023, 233
  • [40] MSASGCN : Multi-Head Self-Attention Spatiotemporal Graph Convolutional Network for Traffic Flow Forecasting
    Cao, Yang
    Liu, Detian
    Yin, Qizheng
    Xue, Fei
    Tang, Hengliang
    JOURNAL OF ADVANCED TRANSPORTATION, 2022, 2022