Pre-Training General Trajectory Embeddings With Maximum Multi-View Entropy Coding

Cited by: 0
Authors
Lin, Yan [1 ,2 ]
Wan, Huaiyu [1 ,2 ]
Guo, Shengnan [1 ,2 ]
Hu, Jilin [3 ]
Jensen, Christian S.
Lin, Youfang [1 ,2 ]
Affiliations
[1] Beijing Jiaotong Univ, Sch Comp & Informat Technology, Beijing Key Lab Traff Data Anal & Min, Beijing 100044, Peoples R China
[2] CAAC, Key Lab Intelligent Passenger Serv Civil Aviat, Beijing 101318, Peoples R China
[3] Aalborg Univ, Dept Comp Sci, DK-9220 Aalborg, Denmark
Keywords
Trajectory; Task analysis; Roads; Semantics; Correlation; Data mining; Training; Maximum multi-view entropy; pre-training; self-supervised learning; spatio-temporal data mining; trajectory embedding; BROAD LEARNING-SYSTEM; ADAPTATION;
DOI
10.1109/TKDE.2023.3347513
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Spatio-temporal trajectories provide valuable information about movement and travel behavior, enabling various downstream tasks that in turn power real-world applications. Learning trajectory embeddings can improve task performance but may incur high computational costs and face limited training data availability. Pre-training learns generic embeddings by means of specially constructed pretext tasks that enable learning from unlabeled data. Existing pre-training methods face (i) difficulties in learning general embeddings due to biases towards certain downstream tasks incurred by the pretext tasks, (ii) limitations in capturing both travel semantics and spatio-temporal correlations, and (iii) the complexity of long, irregularly sampled trajectories. To tackle these challenges, we propose Maximum Multi-view Trajectory Entropy Coding (MMTEC) for learning general and comprehensive trajectory embeddings. We introduce a pretext task that reduces biases in pre-trained trajectory embeddings, yielding embeddings that are useful for a wide variety of downstream tasks. We also propose an attention-based discrete encoder and a NeuralCDE-based continuous encoder that extract and represent travel behavior and continuous spatio-temporal correlations from trajectories in embeddings, respectively. Extensive experiments on two real-world datasets and three downstream tasks offer insight into the design properties of our proposal and indicate that it is capable of outperforming existing trajectory embedding methods.
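The abstract's central pretext task, maximizing an entropy objective over embeddings so they stay informative without biasing toward any one downstream task, can be illustrated with a log-determinant coding-rate surrogate of the kind used in maximum-entropy-coding work. This is a minimal sketch under that assumption, not the MMTEC loss itself; the function name `coding_rate` and the `eps` parameter are illustrative choices.

```python
import numpy as np

def coding_rate(Z, eps=0.05):
    """Log-determinant coding-rate estimate for an (n x d) embedding matrix Z.

    R(Z) = 1/2 * logdet(I_d + d / (n * eps^2) * Z^T Z)
    is a standard surrogate for the entropy of an embedding distribution:
    maximizing it spreads embeddings apart and discourages collapse.
    """
    n, d = Z.shape
    gram = (d / (n * eps ** 2)) * (Z.T @ Z)
    # slogdet is numerically stabler than log(det(...)) for large matrices.
    _, logdet = np.linalg.slogdet(np.eye(d) + gram)
    return 0.5 * logdet

rng = np.random.default_rng(0)
spread = rng.normal(size=(128, 16))      # well-spread embeddings
collapsed = np.full((128, 16), 0.1)      # collapsed (rank-1) embeddings
# A well-spread batch has a much higher coding rate than a collapsed one.
```

In a pre-training loop, the negative of this quantity (summed over views of the same trajectory, with a cross-view agreement term) would serve as the loss; the sketch only shows the entropy-style term that penalizes representation collapse.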
Pages: 9037-9050
Page count: 14