Boosting urban prediction tasks with domain-sharing knowledge via meta-learning

Cited by: 5
Authors
Wang, Dongkun [1 ]
Peng, Jieyang [1 ]
Tao, Xiaoming [1 ]
Duan, Yiping [1 ]
Affiliation
[1] Tsinghua Univ, Dept Elect Engn, Beijing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Data mining; Traffic prediction; Meta-learning; Graph neural network; Air quality
DOI
10.1016/j.inffus.2024.102324
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Code
081104; 0812; 0835; 1405;
Abstract
Urban prediction tasks refer to predicting urban indicators (e.g., traffic, temperature) from urban big data, which is crucial for understanding urban patterns and further benefits urban public administration. An empirical study indicates that there are correlated patterns among urban prediction tasks from various domains, which suggests the existence of domain-sharing knowledge. Aggregating such domain-sharing knowledge would significantly benefit urban prediction tasks. However, existing meta-learning methods, a widely used paradigm for knowledge aggregation, and gradient-based methods in particular, only work for single-domain tasks. To solve this problem, we propose Cross-Domain Meta-Learning (CDML), a flexible framework for aggregating domain-sharing knowledge from cross-domain urban prediction tasks. Specifically, the core architecture of CDML is the model fusion block, which includes (1) a meta-model, shared by cross-domain tasks to capture domain-sharing knowledge; (2) a domain-specific model, shared only by same-domain tasks to preserve domain-specific knowledge; and (3) a knowledge fusion unit that combines the domain-sharing and domain-specific knowledge for good generalization. Moreover, we develop asynchronous meta-training and adaptation strategies to further guarantee cross-domain generalization. Extensive experimental results validate the effectiveness of the proposed framework, demonstrating a superior ability to boost existing urban prediction models, quick adaptation, and the potential to simplify models.
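The abstract describes the model fusion block only at a high level. Below is a minimal PyTorch sketch of how such a block could be organized: a meta-model shared across all domains, one domain-specific model per domain, and a fusion unit combining the two feature streams. The class names, gated fusion design, and MLP encoders are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of a CDML-style model fusion block (assumed design,
# not the paper's official code).
import torch
import torch.nn as nn


class KnowledgeFusionUnit(nn.Module):
    """Gated combination of domain-sharing and domain-specific features (assumed design)."""

    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, shared, specific):
        # Per-feature gate in [0, 1] decides how much shared vs. specific knowledge to use.
        g = self.gate(torch.cat([shared, specific], dim=-1))
        return g * shared + (1.0 - g) * specific


class ModelFusionBlock(nn.Module):
    """Meta-model shared by all domains + one domain-specific model per domain."""

    def __init__(self, in_dim, hidden, out_dim, domains):
        super().__init__()
        self.meta_model = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.domain_models = nn.ModuleDict(
            {d: nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU()) for d in domains}
        )
        self.fusion = KnowledgeFusionUnit(hidden)
        self.head = nn.Linear(hidden, out_dim)

    def forward(self, x, domain):
        shared = self.meta_model(x)               # domain-sharing knowledge
        specific = self.domain_models[domain](x)  # domain-specific knowledge
        return self.head(self.fusion(shared, specific))


if __name__ == "__main__":
    block = ModelFusionBlock(in_dim=16, hidden=32, out_dim=1,
                             domains=["traffic", "air_quality"])
    x = torch.randn(8, 16)
    print(block(x, "traffic").shape)  # torch.Size([8, 1])
```

In this sketch, cross-domain meta-training would update `meta_model` from tasks of all domains while each entry of `domain_models` is updated only from its own domain's tasks; the actual asynchronous training and adaptation strategies are specified in the paper itself.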
Pages: 12
Related Papers
50 records in total
  • [21] Contextualizing Meta-Learning via Learning to Decompose
    Ye, Han-Jia
    Zhou, Da-Wei
    Hong, Lanqing
    Li, Zhenguo
    Wei, Xiu-Shen
    Zhan, De-Chuan
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (01) : 117 - 133
  • [22] Knowledge-embedded meta-learning model for lift coefficient prediction of airfoils
    Xie, Hairun
    Wang, Jing
    Zhang, Miao
    EXPERT SYSTEMS WITH APPLICATIONS, 2023, 233
  • [23] Towards generalization on real domain for single image dehazing via meta-learning
    Ren, Wenqi
    Sun, Qiyu
    Zhao, Chaoqiang
    Tang, Yang
    CONTROL ENGINEERING PRACTICE, 2023, 133
  • [24] Meta-learning the invariant representation for domain generalization
    Jia, Chen
    Zhang, Yue
    MACHINE LEARNING, 2024, 113 (04) : 1661 - 1681
  • [26] Domain generalization through meta-learning: a survey
    Khoee, Arsham Gholamzadeh
    Yu, Yinan
    Feldt, Robert
    ARTIFICIAL INTELLIGENCE REVIEW, 2024, 57 (10)
  • [27] Boosting Meta-Learning Cold-Start Recommendation with Graph Neural Network
    Liu, Han
    Lin, Hongxiang
    Zhang, Xiaotong
    Ma, Fenglong
    Chen, Hongyang
    Wang, Lei
    PROCEEDINGS OF THE 32ND ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2023, 2023, : 4105 - 4109
  • [28] Meta-learning for efficient unsupervised domain adaptation
    Vettoruzzo, Anna
    Bouguelia, Mohamed-Rafik
    Roegnvaldsson, Thorsteinn
    NEUROCOMPUTING, 2024, 574
  • [29] Learning to Balance Local Losses via Meta-Learning
    Yoa, Seungdong
    Jeon, Minkyu
    Oh, Youngjin
    Kim, Hyunwoo J.
    IEEE ACCESS, 2021, 9 : 130834 - 130844
  • [30] Meta-Learning Related Tasks with Recurrent Networks: Optimization and Generalization
    Nguyen, Thy
    Younger, A. Steven
    Redd, Emmett
    Obafemi-Ajayi, Tayo
    2018 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2018,