Multi-objective meta-learning
Cited by: 1
Authors:
Ye, Feiyang [1,2]; Lin, Baijiong [3]; Yue, Zhixiong [1,2]; Zhang, Yu [1,6]; Tsang, Ivor W. [2,4,5]
Affiliations:
[1] Southern Univ Sci & Technol, Dept Comp Sci & Engn, Shenzhen, Peoples R China
[2] Univ Technol Sydney, Australian Artificial Intelligence Inst, Sydney, Australia
[3] Hong Kong Univ Sci & Technol Guangzhou, Guangzhou, Peoples R China
[4] Agcy Sci Technol & Res, Ctr Frontier AI Res, Singapore, Singapore
[5] Agcy Sci Technol & Res, Inst High Performance Comp, Singapore, Singapore
[6] Shanghai Artificial Intelligence Lab, Shanghai, Peoples R China
Keywords:
Meta learning;
Multi-objective optimization;
Multi-task learning;
Gradient descent;
Optimization;
DOI:
10.1016/j.artint.2024.104184
CLC number:
TP18 [Theory of Artificial Intelligence];
Subject classification codes:
081104; 0812; 0835; 1405
Abstract:
Meta-learning has arisen as a powerful tool for many machine learning problems. Since multiple factors must be considered when designing learning models for real-world applications, meta-learning with multiple objectives has recently attracted much attention. However, existing works either linearly combine multiple objectives into one objective or adopt evolutionary algorithms: the former incurs a high computational cost to tune the combination coefficients, while the latter is computationally heavy and cannot be integrated into gradient-based optimization. To alleviate those limitations, in this paper we propose a generic gradient-based Multi-Objective Meta-Learning (MOML) framework with applications to many machine learning problems. Specifically, the MOML framework formulates the objective function of meta-learning with multiple objectives as a Multi-Objective Bi-Level optimization Problem (MOBLP), in which the upper-level subproblem solves several possibly conflicting objectives for the meta-learner. Unlike existing works, we devise the first gradient-based algorithm for the MOBLP, which alternately solves the lower-level and upper-level subproblems via gradient descent and a gradient-based multi-objective optimization method, respectively. Theoretically, we prove the convergence property and provide a non-asymptotic analysis of the proposed algorithm. Empirically, extensive experiments justify our theoretical results and demonstrate the superiority of the proposed MOML framework on different learning problems, including few-shot learning, domain adaptation, multi-task learning, neural architecture search, and reinforcement learning. The source code of MOML is available at https://github.com/Baijiong-Lin/MOML.
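The alternating scheme the abstract describes can be illustrated with a minimal sketch. The Python code below assumes toy quadratic objectives, two upper-level goals, and a first-order hypergradient approximation (treating w*(theta) as approximately theta after the inner loop); the targets A and B, step sizes, and iteration counts are all hypothetical, and the closed-form two-objective min-norm solver stands in for a general gradient-based multi-objective method such as MGDA. This is not the authors' implementation; their official code is at the URL above.

import numpy as np

# Minimal MOML-style sketch: alternating bi-level optimization with a
# multi-objective upper level. Toy quadratic objectives; hypothetical
# targets A, B and step sizes; NOT the authors' implementation.

def lower_level(theta, steps=20, lr=0.2):
    # Gradient descent on the lower-level loss f(w, theta) = ||w - theta||^2.
    w = np.zeros_like(theta)
    for _ in range(steps):
        w -= lr * 2.0 * (w - theta)  # grad_w f(w, theta)
    return w

A = np.array([1.0, 0.0])  # target of upper-level objective F1 (hypothetical)
B = np.array([0.0, 1.0])  # target of upper-level objective F2 (hypothetical)

def upper_grads(w):
    # Gradients of two possibly conflicting upper-level objectives,
    # F1(w) = ||w - A||^2 and F2(w) = ||w - B||^2, with respect to w.
    return 2.0 * (w - A), 2.0 * (w - B)

def min_norm_2(g1, g2):
    # Closed-form min-norm (MGDA-style) combination for two gradients:
    # choose alpha in [0, 1] minimizing ||alpha*g1 + (1 - alpha)*g2||.
    diff = g1 - g2
    denom = diff @ diff
    alpha = 0.5 if denom == 0.0 else float(np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0))
    return alpha * g1 + (1.0 - alpha) * g2

theta = np.array([2.0, -1.0])
for _ in range(100):
    w = lower_level(theta)   # inner loop: solve the lower-level subproblem
    g1, g2 = upper_grads(w)
    d = min_norm_2(g1, g2)   # common descent direction for the upper level
    # First-order approximation: treat dw*/dtheta as identity after the inner loop.
    theta -= 0.1 * d
print("theta:", theta, "w*(theta):", lower_level(theta))

The min-norm direction is a descent direction for both upper-level objectives whenever one exists, which is what lets the outer loop make simultaneous progress without hand-tuned combination weights.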
Pages: 24