Towards Sharper Risk Bounds for Agnostic Multi-objective Learning

Cited: 0
Authors
Wei, Bojian [1 ]
Li, Jian [1 ]
Wang, Weiping [1 ]
Affiliations
[1] Chinese Acad Sci, Inst Informat Engn, Beijing, Peoples R China
Source
2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN | 2023
Funding
Beijing Natural Science Foundation; National Natural Science Foundation of China;
Keywords
Excess risk bound; agnostic learning; multiobjective; generalization; ERROR;
DOI
10.1109/IJCNN54540.2023.10191519
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Many real-world machine learning tasks, such as multi-object detection and product recommendation, involve multiple objectives that cannot be optimized directly through a single objective function. Fortunately, multi-objective learning can solve this problem efficiently via vector-valued algorithms. Recently, researchers have found that the performance of multi-objective learning degrades when the mixture weights are unknown, since a fixed algorithm then struggles to select the optimal model from the hypothesis space. Agnostic multi-objective learning was therefore proposed, providing an effective approach to simultaneously optimizing multiple objectives with unknown mixture weights: by improving the worst case over the hypothesis space, it allows a proper model to be selected. However, the current generalization error bounds for agnostic multi-objective learning cannot converge faster than O(1/√n), which limits the generalization guarantee. In this paper, we provide a sharper excess risk bound for agnostic multi-objective learning with a convergence rate of O(1/n), which is much faster than the existing results and matches the best theoretical results of centralized learning. Based on our theory, we then propose a novel algorithm to improve the generalization performance of agnostic multi-objective learning.
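The agnostic formulation described in the abstract minimizes the worst case over unknown mixture weights. A minimal sketch of this idea, using two hypothetical toy quadratic objectives (not the paper's algorithm or data): for a linear mixture over the simplex, the inner maximum is attained at a vertex, so each step descends the currently worst objective.

```python
import numpy as np

def losses(w):
    """Two toy quadratic objectives with conflicting minima at +1 and -1."""
    return np.array([(w - 1.0) ** 2, (w + 1.0) ** 2])

def grads(w):
    """Gradients of the two toy objectives with respect to the scalar model w."""
    return np.array([2.0 * (w - 1.0), 2.0 * (w + 1.0)])

def agnostic_subgradient_descent(w0, steps=200):
    """Subgradient descent on max_i L_i(w), i.e. the worst mixture,
    with a diminishing step size."""
    w = w0
    for k in range(steps):
        i = int(np.argmax(losses(w)))      # worst objective = inner maximizer
        w -= 0.5 / (k + 1) * grads(w)[i]   # descend only that objective
    return w

w = agnostic_subgradient_descent(2.0)
# The minimax solution balances both objectives near w = 0,
# where both losses equal 1.
```

This illustrates only the worst-case principle; the paper's actual algorithm and its O(1/n) excess risk analysis are not reproduced here.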
Pages: 6