Spatial-Temporal Cross-View Contrastive Pre-Training for Check-in Sequence Representation Learning

Cited by: 0
Authors
Gong, Letian [1 ,2 ]
Wan, Huaiyu [1 ,2 ]
Guo, Shengnan [1 ,2 ]
Li, Xiucheng [3 ]
Lin, Yan [1 ,2 ]
Zheng, Erwen [1 ,2 ]
Wang, Tianyi [1 ,2 ]
Zhou, Zeyu [1 ,2 ]
Lin, Youfang [1 ,2 ]
Affiliations
[1] Beijing Jiaotong Univ, Key Lab Big Data & Artificial Intelligence Transpo, Minist Educ, Beijing 100044, Peoples R China
[2] CAAC, Key Lab Intelligent Passenger Serv Civil Aviat, Beijing 101318, Peoples R China
[3] Harbin Inst Technol, Sch Comp Sci & Technol, Shenzhen 518055, Peoples R China
Funding
China Postdoctoral Science Foundation; National Natural Science Foundation of China;
Keywords
Semantics; Trajectory; Predictive models; Uncertainty; Task analysis; Noise; Data mining; Check-in sequence; contrastive cluster; representation learning; spatial-temporal cross-view;
DOI
10.1109/TKDE.2024.3434565
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
The rapid growth of location-based services (LBS) has yielded massive amounts of data on human mobility. Effectively extracting meaningful representations for user-generated check-in sequences is pivotal for facilitating various downstream services. However, the user-generated check-in data are simultaneously influenced by the surrounding objective circumstances and the user's subjective intention. Specifically, the temporal uncertainty and spatial diversity exhibited in check-in data make it difficult to capture the macroscopic spatial-temporal patterns of users and to understand the semantics of user mobility activities. Furthermore, the distinct characteristics of the temporal and spatial information in check-in sequences call for an effective fusion method to incorporate these two types of information. In this paper, we propose a novel Spatial-Temporal Cross-view Contrastive Representation (STCCR) framework for check-in sequence representation learning. Specifically, STCCR addresses the above challenges by employing self-supervision from "spatial topic" and "temporal intention" views, facilitating effective fusion of spatial and temporal information at the semantic level. Besides, STCCR leverages contrastive clustering to uncover users' shared spatial topics from diverse mobility activities, while employing angular momentum contrast to mitigate the impact of temporal uncertainty and noise. We extensively evaluate STCCR on three real-world datasets and demonstrate its superior performance across three downstream tasks.
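To make the cross-view idea in the abstract concrete, the sketch below shows a generic InfoNCE-style contrastive loss in which the "spatial topic" and "temporal intention" embeddings of the same check-in sequence form a positive pair and other sequences in the batch act as negatives. This is only a minimal illustration, not the authors' STCCR implementation (which uses contrastive clustering and angular momentum contrast); the function name cross_view_infonce, the tensor shapes, and the temperature value are assumptions made for this example.

```python
# Minimal sketch (assumed, not the STCCR paper's code): symmetric InfoNCE loss
# between spatial-view and temporal-view embeddings of the same check-in sequences.
import torch
import torch.nn.functional as F

def cross_view_infonce(z_spatial: torch.Tensor,
                       z_temporal: torch.Tensor,
                       temperature: float = 0.1) -> torch.Tensor:
    """z_spatial, z_temporal: (batch_size, dim) embeddings of the same sequences,
    produced by two hypothetical view-specific encoders."""
    z_s = F.normalize(z_spatial, dim=-1)          # unit-normalize each embedding
    z_t = F.normalize(z_temporal, dim=-1)
    logits = z_s @ z_t.T / temperature            # pairwise cosine similarities
    labels = torch.arange(z_s.size(0), device=z_s.device)  # i-th spatial matches i-th temporal
    # Symmetric cross-entropy over both matching directions.
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.T, labels))

if __name__ == "__main__":
    # Random embeddings stand in for encoder outputs in this toy usage example.
    z_s = torch.randn(32, 128)
    z_t = torch.randn(32, 128)
    print(cross_view_infonce(z_s, z_t).item())
```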
Pages: 9308-9321
Number of pages: 14
Related Papers
6 records in total
  • [1] Pre-Training Time-Aware Location Embeddings from Spatial-Temporal Trajectories
    Wan, Huaiyu
    Lin, Yan
    Guo, Shengnan
    Lin, Youfang
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2022, 34 (11) : 5510 - 5523
  • [2] Partially View-Aligned Representation Learning via Cross-View Graph Contrastive Network
    Wang, Yiming
    Chang, Dongxia
    Fu, Zhiqiang
    Wen, Jie
    Zhao, Yao
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (08) : 7272 - 7283
  • [3] Nonparametric Clustering-Guided Cross-View Contrastive Learning for Partially View-Aligned Representation Learning
    Qian, Shengsheng
    Xue, Dizhan
    Hu, Jun
    Zhang, Huaiwen
    Xu, Changsheng
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2024, 33 : 6158 - 6172
  • [4] Cross-view contrastive representation learning approach to predicting DTIs via integrating multi-source information
    He, Chengxin
    Qu, Yuening
    Yin, Jin
    Zhao, Zhenjiang
    Ma, Runze
    Duan, Lei
    METHODS, 2023, 218 : 176 - 188
  • [5] Learning Depth Representation From RGB-D Videos by Time-Aware Contrastive Pre-Training
    He, Zongtao
    Wang, Liuyi
    Dang, Ronghao
    Li, Shu
    Yan, Qingqing
    Liu, Chengju
    Chen, Qijun
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (06) : 4143 - 4158
  • [6] ClusterE-ZSL: A Novel Cluster-Based Embedding for Enhanced Zero-Shot Learning in Contrastive Pre-Training Cross-Modal Retrieval
    Tariq, Umair
    Hu, Zonghai
    Tasneem, Khawaja Tauseef
    Bin Heyat, Md Belal
    Iqbal, Muhammad Shahid
    Aziz, Kamran
    IEEE ACCESS, 2024, 12 : 162622 - 162637