A Meta-Learning Approach to Predicting Performance and Data Requirements

Cited by: 5
Authors
Jain, Achin [1]
Swaminathan, Gurumurthy [1]
Favaro, Paolo [1]
Yang, Hao [1]
Ravichandran, Avinash [1]
Harutyunyan, Hrayr [1,2]
Achille, Alessandro [1]
Dabeer, Onkar [1]
Schiele, Bernt [1]
Swaminathan, Ashwin [1]
Soatto, Stefano [1]
Affiliations
[1] AWS AI Labs, Seattle, WA, USA
[2] University of Southern California, Los Angeles, CA, USA
Source
2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023
Keywords
BENCHMARK;
DOI
10.1109/CVPR52729.2023.00353
CLC Classification Number
TP18 [Theory of Artificial Intelligence]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
We propose an approach to estimate the number of samples required for a model to reach a target performance. We find that the power law, the de facto principle used to estimate model performance, leads to a large error when extrapolating from a small dataset (e.g., 5 samples per class). This is because the log-performance error against the log-dataset size follows a nonlinear progression in the few-shot regime followed by a linear progression in the high-shot regime. We introduce a novel piecewise power law (PPL) that handles the two data regimes differently. To estimate the parameters of the PPL, we introduce a random forest regressor trained via meta-learning that generalizes across classification/detection tasks, ResNet/ViT-based architectures, and random/pre-trained initializations. The PPL improves the performance estimation on average by 37% across 16 classification and 33% across 10 detection datasets, compared to the power law. We further extend the PPL to provide a confidence bound and use it to limit the prediction horizon, which reduces over-estimation of data by 76% on classification and 91% on detection datasets.
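As context for the abstract, the sketch below illustrates the standard power-law extrapolation the paper uses as its baseline: fit err(n) ≈ a · n^b to a few pilot measurements by linear regression in log-log space, then invert the fit to estimate how many samples are needed for a target error. The synthetic measurements and helper names are illustrative assumptions, not the paper's piecewise power law or its meta-learned estimator.

```python
# Minimal sketch of the power-law baseline for predicting data requirements:
# fit err(n) ~ a * n**b to a few pilot runs, then invert it for a target error.
# The measurements below are synthetic and the helper names are illustrative.
import numpy as np

# Pilot measurements: (training-set size, validation error).
n_obs = np.array([50.0, 100.0, 200.0, 400.0])
err_obs = np.array([0.42, 0.33, 0.26, 0.21])

# log err = log a + b * log n, so a degree-1 fit in log-log space gives (b, log a).
b, log_a = np.polyfit(np.log(n_obs), np.log(err_obs), deg=1)
a = np.exp(log_a)

def predicted_error(n):
    """Error extrapolated to dataset size n under the fitted power law."""
    return a * n ** b

def samples_for_target(target_err):
    """Dataset size at which the fitted power law reaches target_err."""
    return (target_err / a) ** (1.0 / b)

print(f"fit: err(n) = {a:.3f} * n^({b:.3f})")
print(f"predicted error at n = 2000: {predicted_error(2000):.3f}")
print(f"estimated samples for 10% error: {samples_for_target(0.10):,.0f}")
```

The paper's piecewise power law replaces this single linear fit in log-log space with a form that treats the few-shot and high-shot regimes differently, with its parameters predicted by a meta-learned random forest regressor.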
Pages: 3623-3632
Number of pages: 10