SanMove: next location recommendation via self-attention network

Cited by: 1
Authors
Wang, Bin [1 ]
Li, Huifeng [1 ]
Tong, Le [1 ]
Zhang, Qian [1 ]
Zhu, Sulei [1 ]
Yang, Tao [2 ]
Affiliations
[1] Shanghai Normal Univ, Shanghai, Peoples R China
[2] Shanghai Urban & Rural Construct & Traff Dev Res I, Shanghai, Peoples R China
Keywords
Next location prediction; Self-attention network; Auxiliary information; Prediction
DOI
10.1108/DTA-03-2022-0093
Chinese Library Classification (CLC): TP [Automation Technology, Computer Technology]
Discipline Code: 0812
Abstract
Purpose - This paper aims to address the following issues: (1) most existing methods are based on recurrent networks, which are time-consuming to train on long sequences because they do not allow full parallelism; (2) personalized preferences are generally not modeled adequately; (3) existing methods rarely study systematically how to efficiently exploit the auxiliary information in trajectory data (e.g. user ID and timestamp) and the spatiotemporal relations among nonconsecutive locations.
Design/methodology/approach - The authors propose a novel self-attention network-based model named SanMove that predicts the next location by capturing users' long- and short-term mobility patterns. Specifically, SanMove uses a self-attention module to capture each user's long-term preference, which represents her personalized location preference. Meanwhile, the authors use a spatial-temporal guided noninvasive self-attention (STNOVA) module to exploit auxiliary information in the trajectory data and learn the user's short-term preference.
Findings - The authors evaluate SanMove on two real-world datasets. The experimental results demonstrate that SanMove is not only faster than state-of-the-art recurrent neural network (RNN)-based prediction models but also outperforms the baselines for next location prediction.
Originality/value - The authors propose a self-attention-based sequential model named SanMove to predict users' trajectories, comprising long-term and short-term preference learning modules. SanMove allows fully parallel processing of trajectories to improve processing efficiency. The authors propose an STNOVA module to capture the sequential transitions of current trajectories. Moreover, a self-attention module processes historical trajectory sequences to capture each user's personalized location preference. The authors conduct extensive experiments on two check-in datasets. The results demonstrate that the model trains quickly and performs well compared with existing RNN-based methods for next location prediction.
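For illustration, below is a minimal PyTorch sketch of the two-branch design described in the abstract: one self-attention pass over the historical trajectory to capture long-term preference, and one over the current trajectory to capture short-term preference, with a time-of-day embedding folded into the attention values as a rough stand-in for the STNOVA module's noninvasive use of auxiliary information. All class names, hyperparameters and the exact fusion scheme here are assumptions for the sketch, not the authors' implementation.

```python
# Hypothetical sketch of a SanMove-style next-location predictor.
# Names and hyperparameters are illustrative assumptions only.
import torch
import torch.nn as nn


class NextLocationModel(nn.Module):
    def __init__(self, num_locations, num_users, d_model=64, n_heads=4):
        super().__init__()
        self.loc_emb = nn.Embedding(num_locations, d_model, padding_idx=0)
        self.user_emb = nn.Embedding(num_users, d_model)
        self.hour_emb = nn.Embedding(24, d_model)  # time-of-day auxiliary signal

        # Long-term branch: self-attention over the user's historical trajectory.
        self.long_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Short-term branch: self-attention over the current trajectory; the
        # temporal embedding is added only to the value stream, a crude proxy
        # for "noninvasive" conditioning on auxiliary information.
        self.short_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

        self.out = nn.Linear(3 * d_model, num_locations)

    def forward(self, hist_locs, cur_locs, cur_hours, user_ids):
        # hist_locs: (B, Lh) historical check-in location ids
        # cur_locs:  (B, Lc) current-trajectory location ids
        # cur_hours: (B, Lc) hour-of-day of each current check-in
        # user_ids:  (B,)    user ids
        h = self.loc_emb(hist_locs)
        long_ctx, _ = self.long_attn(h, h, h)          # long-term preference
        long_vec = long_ctx.mean(dim=1)

        c = self.loc_emb(cur_locs)
        aux = self.hour_emb(cur_hours)
        short_ctx, _ = self.short_attn(c, c, c + aux)  # auxiliary info in values
        short_vec = short_ctx[:, -1, :]                # state at most recent check-in

        u = self.user_emb(user_ids)
        return self.out(torch.cat([long_vec, short_vec, u], dim=-1))


# Usage: score candidate next locations for a small batch of users.
model = NextLocationModel(num_locations=1000, num_users=200)
scores = model(
    hist_locs=torch.randint(1, 1000, (2, 30)),
    cur_locs=torch.randint(1, 1000, (2, 8)),
    cur_hours=torch.randint(0, 24, (2, 8)),
    user_ids=torch.tensor([3, 17]),
)
print(scores.shape)  # torch.Size([2, 1000])
```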
Pages: 330-343
Number of pages: 14
Related Papers
50 records in total
  • [41] Ultrasonic Rough Crack Characterization Using Time-of-Flight Diffraction With Self-Attention Neural Network
    Wang, Zhengjun
    Shi, Fan
    Ding, Junhao
    Song, Xu
    IEEE TRANSACTIONS ON ULTRASONICS FERROELECTRICS AND FREQUENCY CONTROL, 2024, 71 (10) : 1289 - 1301
  • [42] Self-Attention Deep Image Prior Network for Unsupervised 3-D Seismic Data Enhancement
    Saad, Omar M.
    Oboue, Yapo Abole Serge Innocent
    Bai, Min
    Samy, Lotfy
    Yang, Liuqing
    Chen, Yangkang
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2022, 60
  • [43] iAMP-CRA: Identifying Antimicrobial Peptides Using Convolutional Recurrent Neural Network with Self-Attention
    Lu, Jingyao
    He, Yang
    Han, Guosheng
    Zeng, Li
HEALTH INFORMATION SCIENCE AND SYSTEMS, 2025, 13 (01)
  • [44] Multi-head self-attention mechanism enabled individualized hemoglobin prediction and treatment recommendation systems in anemia management for hemodialysis patients
    Yang, Ju-Yeh
    Lee, Tsung-Chun
Liao, Wo-Ting
    Hsu, Chih-Chung
    HELIYON, 2023, 9 (02)
  • [45] Self-attention Enhanced Patient Journey Understanding in Healthcare System
    Peng, Xueping
    Long, Guodong
    Shen, Tao
    Wang, Sen
    Jiang, Jing
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2020, PT III, 2021, 12459 : 719 - 735
  • [46] A PM2.5 spatiotemporal prediction model based on mixed graph convolutional GRU and self-attention network
    Zhao, Guyu
    Yang, Xiaoyuan
    Shi, Jiansen
    He, Hongdou
    Wang, Qian
    ENVIRONMENTAL POLLUTION, 2025, 368
  • [47] Assessment of urban air quality from Twitter communication using self-attention network and a multilayer classification model
    Kumbalaparambi, Thushara Sudheish
    Menon, Ratish
    Radhakrishnan, Vishnu P.
    Nair, Vinod P.
    ENVIRONMENTAL SCIENCE AND POLLUTION RESEARCH, 2023, 30 (04) : 10414 - 10425
  • [48] Development and validation of a self-attention network-based algorithm to detect mediastinal lesions on computed tomography images
    Wu, Sizhu
    Liu, Shengyu
    Zhong, Ming
    Loos, Erik R. de
    Hartert, Marc
    Fuentes-Martin, Alvaro
    Lenzini, Alessandra
    Wang, Dejian
    Qian, Qing
    JOURNAL OF THORACIC DISEASE, 2024, 16 (05) : 3306 - 3316
  • [49] Machinery Prognostics and High-Dimensional Data Feature Extraction Based on a Transformer Self-Attention Transfer Network
    Sun, Shilong
    Peng, Tengyi
    Huang, Haodong
    SENSORS, 2023, 23 (22)
  • [50] SPEECH ENHANCEMENT USING SELF-ADAPTATION AND MULTI-HEAD SELF-ATTENTION
    Koizumi, Yuma
    Yatabe, Kohei
    Delcroix, Marc
    Masuyama, Yoshiki
    Takeuchi, Daiki
2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020: 181 - 185