Pre-Training General Trajectory Embeddings With Maximum Multi-View Entropy Coding

Cited by: 0
Authors
Lin, Yan [1 ,2 ]
Wan, Huaiyu [1 ,2 ]
Guo, Shengnan [1 ,2 ]
Hu, Jilin [3 ]
Jensen, Christian S.
Lin, Youfang [1 ,2 ]
Affiliations
[1] Beijing Jiaotong Univ, Sch Comp & Informat Technol, Beijing Key Lab Traff Data Anal & Min, Beijing 100044, Peoples R China
[2] CAAC, Key Lab Intelligent Passenger Serv Civil Aviat, Beijing 101318, Peoples R China
[3] Aalborg Univ, Dept Comp Sci, DK-9220 Aalborg, Denmark
Keywords
Trajectory; Task analysis; Roads; Semantics; Correlation; Data mining; Training; Maximum multi-view entropy; pre-training; self-supervised learning; spatio-temporal data mining; trajectory embedding; BROAD LEARNING-SYSTEM; ADAPTATION;
DOI
10.1109/TKDE.2023.3347513
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Spatio-temporal trajectories provide valuable information about movement and travel behavior, enabling various downstream tasks that in turn power real-world applications. Learning trajectory embeddings can improve task performance but may incur high computational costs and face limited training data availability. Pre-training learns generic embeddings by means of specially constructed pretext tasks that enable learning from unlabeled data. Existing pre-training methods face (i) difficulties in learning general embeddings due to biases towards certain downstream tasks incurred by the pretext tasks, (ii) limitations in capturing both travel semantics and spatio-temporal correlations, and (iii) the complexity of long, irregularly sampled trajectories. To tackle these challenges, we propose Maximum Multi-view Trajectory Entropy Coding (MMTEC) for learning general and comprehensive trajectory embeddings. We introduce a pretext task that reduces biases in pre-trained trajectory embeddings, yielding embeddings that are useful for a wide variety of downstream tasks. We also propose an attention-based discrete encoder and a NeuralCDE-based continuous encoder that extract and represent travel behavior and continuous spatio-temporal correlations from trajectories in embeddings, respectively. Extensive experiments on two real-world datasets and three downstream tasks offer insight into the design properties of our proposal and indicate that it is capable of outperforming existing trajectory embedding methods.
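The abstract gives no formal definitions, so the following is only a rough, assumption-based PyTorch sketch of what a maximum-entropy-coding-style pre-training objective over two trajectory embedding views could look like. The function names, the log-determinant entropy surrogate, the averaging of views, and the cosine alignment term are illustrative choices, not the formulation used in the MMTEC paper.

  import torch
  import torch.nn.functional as F

  def coding_rate(z: torch.Tensor, eps: float = 0.05) -> torch.Tensor:
      # Log-determinant coding-rate surrogate for the entropy of a batch of
      # embeddings z with shape (batch, dim); larger values indicate that the
      # batch spreads over more directions of the embedding space.
      n, d = z.shape
      z = F.normalize(z, dim=1)                      # unit-norm embeddings
      cov = z.T @ z / n                              # (dim, dim) sample covariance
      scale = d / (n * eps ** 2)
      return 0.5 * torch.logdet(torch.eye(d, device=z.device) + scale * cov)

  def multi_view_entropy_loss(z_discrete: torch.Tensor,
                              z_continuous: torch.Tensor) -> torch.Tensor:
      # Hypothetical objective: maximize the coding rate (entropy surrogate) of
      # the aggregated embedding while keeping the two views aligned. This is a
      # stand-in for illustration, not the loss defined in the paper.
      z = 0.5 * (z_discrete + z_continuous)          # simple view aggregation
      entropy = coding_rate(z)
      alignment = F.cosine_similarity(z_discrete, z_continuous, dim=1).mean()
      return -(entropy + alignment)                  # minimize the negative objective

  # Toy usage: 32 trajectories, each embedded into 128 dimensions by two
  # hypothetical encoders (e.g., a discrete and a continuous view).
  z_view_a = torch.randn(32, 128, requires_grad=True)
  z_view_b = torch.randn(32, 128, requires_grad=True)
  loss = multi_view_entropy_loss(z_view_a, z_view_b)
  loss.backward()

In this sketch the entropy term discourages collapsed embeddings while the alignment term ties the two views of the same trajectory together; the actual trade-off and encoder outputs in MMTEC may differ.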
Pages: 9037-9050
Number of pages: 14
Related Papers
50 records in total
  • [21] Multi-View Masked Autoencoder for General Image Representation
    Ji, Seungbin
    Han, Sangkwon
    Rhee, Jongtae
    APPLIED SCIENCES-BASEL, 2023, 13 (22):
  • [22] Impact of Pre-training Datasets on Human Activity Recognition with Contrastive Predictive Coding
    da Silva, Betania E. R.
    Napoli, Otavio O.
    Delgado, J. V.
    Rocha, Anderson R.
    Boccato, Levy
    Borin, Edson
    INTELLIGENT SYSTEMS, BRACIS 2024, PT III, 2025, 15414 : 306 - 320
  • [23] An Online Reinforcement Learning Method for Multi-Zone Ventilation Control With Pre-Training
    Cui, Can
    Li, Chunxiao
    Li, Ming
    IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, 2023, 70 (07) : 7163 - 7172
  • [24] A Multi-strategy-based Pre-training Method for Cold-start Recommendation
    Hao, Bowen
    Yin, Hongzhi
    Zhang, Jing
    Li, Cuiping
    Chen, Hong
    ACM TRANSACTIONS ON INFORMATION SYSTEMS, 2023, 41 (02)
  • [25] Multi-Faceted Knowledge-Driven Pre-Training for Product Representation Learning
    Zhang, Denghui
    Liu, Yanchi
    Yuan, Zixuan
    Fu, Yanjie
    Chen, Haifeng
    Xiong, Hui
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35 (07) : 7239 - 7250
  • [26] Trajectory-based fish event classification through pre-training with diffusion models
    Canovi, Noemi
    Ellis, Benjamin A.
    Sordalen, Tonje K.
    Allken, Vaneeda
    Halvorsen, Kim T.
    Malde, Ketil
    Beyan, Cigdem
    ECOLOGICAL INFORMATICS, 2024, 82
  • [27] Information theory-guided heuristic progressive multi-view coding
    Li, Jiangmeng
    Gao, Hang
    Qiang, Wenwen
    Zheng, Changwen
    NEURAL NETWORKS, 2023, 167 : 415 - 432
  • [28] Multi-Level Pre-Training for Encrypted Network Traffic Classification
    Park, Jee-Tae
    Choi, Yang-Seo
    Cho, Bu-Seung
    Kim, Seung-Hae
    Kim, Myung-Sup
    IEEE ACCESS, 2025, 13 : 68643 - 68659
  • [29] Multi-View Evolutionary Training for Unsupervised Domain Adaptive Person Re-Identification
    Gu, Jianyang
    Chen, Weihua
    Luo, Hao
    Wang, Fan
    Li, Hao
    Jiang, Wei
    Mao, Weijie
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2022, 17 : 344 - 356
  • [30] CvFormer: Cross-view transFormers with pre-training for fMRI analysis of human brain
    Meng, Xiangzhu
    Wei, Wei
    Liu, Qiang
    Wang, Yu
    Li, Min
    Wang, Liang
    PATTERN RECOGNITION LETTERS, 2024, 186 : 85 - 90