Pre-Training General Trajectory Embeddings With Maximum Multi-View Entropy Coding

Cited: 0
Authors
Lin, Yan [1 ,2 ]
Wan, Huaiyu [1 ,2 ]
Guo, Shengnan [1 ,2 ]
Hu, Jilin [3 ]
Jensen, Christian S.
Lin, Youfang [1 ,2 ]
Affiliations
[1] Beijing Jiaotong Univ, Sch Comp & Informat Technology, Beijing Key Lab Traff Data Anal & Min, Beijing 100044, Peoples R China
[2] CAAC, Key Lab Intelligent Passenger Serv Civil Aviat, Beijing 101318, Peoples R China
[3] Aalborg Univ, Dept Comp Sci, DK-9220 Aalborg, Denmark
Keywords
Trajectory; Task analysis; Roads; Semantics; Correlation; Data mining; Training; Maximum multi-view entropy; pre-training; self-supervised learning; spatio-temporal data mining; trajectory embedding; BROAD LEARNING-SYSTEM; ADAPTATION;
DOI
10.1109/TKDE.2023.3347513
CLC classification
TP18 [Artificial intelligence theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Spatio-temporal trajectories provide valuable information about movement and travel behavior, enabling various downstream tasks that in turn power real-world applications. Learning trajectory embeddings can improve task performance but may incur high computational costs and face limited training data availability. Pre-training learns generic embeddings by means of specially constructed pretext tasks that enable learning from unlabeled data. Existing pre-training methods face (i) difficulties in learning general embeddings due to biases towards certain downstream tasks incurred by the pretext tasks, (ii) limitations in capturing both travel semantics and spatio-temporal correlations, and (iii) the complexity of long, irregularly sampled trajectories. To tackle these challenges, we propose Maximum Multi-view Trajectory Entropy Coding (MMTEC) for learning general and comprehensive trajectory embeddings. We introduce a pretext task that reduces biases in pre-trained trajectory embeddings, yielding embeddings that are useful for a wide variety of downstream tasks. We also propose an attention-based discrete encoder and a NeuralCDE-based continuous encoder that extract and represent travel behavior and continuous spatio-temporal correlations from trajectories in embeddings, respectively. Extensive experiments on two real-world datasets and three downstream tasks offer insight into the design properties of our proposal and indicate that it is capable of outperforming existing trajectory embedding methods.
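The abstract's pretext task is built on maximum entropy coding of trajectory embeddings. As a rough illustration only (not the paper's exact objective), maximum-entropy-coding approaches typically score a batch of embeddings by a log-determinant "coding rate", which is large when embeddings spread out over the representation space and small when they collapse; the function name, distortion parameter `eps`, and normalization below are assumptions for this sketch.

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """Log-determinant coding-rate estimate of embedding entropy.

    Z   : (n, d) array of L2-normalized embeddings (n samples, d dims).
    eps : assumed distortion tolerance of the coding scheme.

    Returns 0.5 * logdet(I + d / (n * eps**2) * Z^T Z), which grows as the
    embeddings fill more directions of the d-dimensional space.
    """
    n, d = Z.shape
    gram = (d / (n * eps ** 2)) * (Z.T @ Z)
    # slogdet is numerically safer than log(det(...)) for larger d.
    _, logdet = np.linalg.slogdet(np.eye(d) + gram)
    return 0.5 * logdet

# Diverse embeddings score higher than a fully collapsed batch.
rng = np.random.default_rng(0)
Z = rng.standard_normal((32, 8))
Z /= np.linalg.norm(Z, axis=1, keepdims=True)   # L2-normalize rows
Z_collapsed = np.tile(Z[0], (32, 1))            # every row identical
```

Maximizing such a quantity across views encourages embeddings that stay informative without being tied to one downstream task, which matches the abstract's stated goal of reducing pretext-task bias.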
Pages: 9037-9050 (14 pages)
Related papers
50 total
  • [11] Spatial-Temporal Cross-View Contrastive Pre-Training for Check-in Sequence Representation Learning
    Gong, Letian
    Wan, Huaiyu
    Guo, Shengnan
    Li, Xiucheng
    Lin, Yan
    Zheng, Erwen
    Wang, Tianyi
    Zhou, Zeyu
    Lin, Youfang
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2024, 36 (12) : 9308 - 9321
  • [12] PreTraM: Self-supervised Pre-training via Connecting Trajectory and Map
    Xu, Chenfeng
    Li, Tian
    Tang, Chen
    Sun, Lingfeng
    Keutzer, Kurt
    Tomizuka, Masayoshi
    Fathi, Alireza
    Zhan, Wei
    COMPUTER VISION, ECCV 2022, PT XXXIX, 2022, 13699 : 34 - 50
  • [13] Contrastive Cross-Modal Pre-Training: A General Strategy for Small Sample Medical Imaging
    Liang, Gongbo
    Greenwell, Connor
    Zhang, Yu
    Xing, Xin
    Wang, Xiaoqin
    Kavuluru, Ramakanth
    Jacobs, Nathan
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2022, 26 (04) : 1640 - 1649
  • [14] MVEB: Self-Supervised Learning With Multi-View Entropy Bottleneck
    Wen, Liangjian
    Wang, Xiasi
    Liu, Jianzhuang
    Xu, Zenglin
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (09) : 6097 - 6108
  • [15] Uni4Eye++: A General Masked Image Modeling Multi-Modal Pre-Training Framework for Ophthalmic Image Classification and Segmentation
    Cai, Zhiyuan
    Lin, Li
    He, Huaqing
    Cheng, Pujin
    Tang, Xiaoying
    IEEE TRANSACTIONS ON MEDICAL IMAGING, 2024, 43 (12) : 4419 - 4429
  • [16] Missing Road Condition Imputation Using a Multi-View Heterogeneous Graph Network From GPS Trajectory
    Zhang, Zhiwen
    Wang, Hongjun
    Fan, Zipei
    Song, Xuan
    Shibasaki, Ryosuke
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2023, 24 (05) : 4917 - 4931
  • [17] GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training
    Qiu, Jiezhong
    Chen, Qibin
    Dong, Yuxiao
    Zhang, Jing
    Yang, Hongxia
    Ding, Ming
    Wang, Kuansan
    Tang, Jie
    KDD '20: PROCEEDINGS OF THE 26TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2020, : 1150 - 1160
  • [18] SEEP: Semantic-enhanced question embeddings pre-training for improving knowledge tracing
    Wang, Wentao
    Ma, Huifang
    Zhao, Yan
    Yang, Fanyi
    Chang, Liang
    INFORMATION SCIENCES, 2022, 614 : 153 - 169
  • [19] Trajectory-BERT: Trajectory Estimation Based on BERT Trajectory Pre-Training Model and Particle Filter Algorithm
    Wu, You
    Yu, Hongyi
    Du, Jianping
    Ge, Chenglong
    SENSORS, 2023, 23 (22)