Graph-based fine-grained model selection for multi-source domain

Cited by: 2
Authors
Hu, Zhigang [1 ]
Huang, Yuhang [1 ]
Zheng, Hao [1 ]
Zheng, Meiguang [1 ]
Liu, JianJun [1 ]
Affiliations
[1] Cent South Univ, Sch Comp Sci & Engn, Changsha 410083, Hunan, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Model selection; Transfer learning; Graph neural networks; Image classification;
DOI
10.1007/s10044-023-01176-6
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The prosperity of datasets and model architectures has led to the development of pretrained source models, which have simplified the learning process in multi-domain transfer learning. However, challenges such as data complexity, domain shifts, and performance limitations make it difficult to determine which source model to transfer. To address these challenges, source model selection has emerged as a promising approach for choosing the best model for a given target domain. Most of the literature combines transferability estimation with statistical methods to derive a model selection probability, a coarse-grained approach that selects a single model and offers limited accuracy and applicability in multi-source domains. To break through this limitation, we propose a graph-based fine-grained multi-source model selection method (GFMS) that adaptively selects the best source model for each individual target-domain sample. Specifically, the proposed method comprises three main components: building a source model library through cross-training; generating the selection strategy with graph neural networks that explore the similarities among data features and the associations between features and models; and blending the selected models with a weighted scheme to obtain the best model adaptively. Experimental results demonstrate that the proposed adaptive method achieves higher accuracy in both model selection and image classification than current state-of-the-art methods on the compared datasets.
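The abstract describes per-sample selection and weighted blending of source models, but this record contains no code. The sketch below is a minimal illustration, assuming a graph-based selector has already produced per-sample affinity scores for each source model; the function names, array shapes, and the softmax weighting are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(scores, axis=-1):
    # Numerically stable softmax over the given axis.
    z = scores - scores.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def blend_predictions(model_probs, selection_scores):
    """Blend source-model predictions per target sample.

    model_probs:      (n_samples, n_models, n_classes) class probabilities
                      predicted by each pretrained source model.
    selection_scores: (n_samples, n_models) affinity scores for each
                      sample-model pair (e.g., from a graph-based selector).
    Returns:          (n_samples, n_classes) blended class probabilities.
    """
    weights = softmax(selection_scores, axis=1)            # per-sample model weights
    return np.einsum("nm,nmc->nc", weights, model_probs)   # weighted sum over models

# Illustrative usage with random placeholders: 4 samples, 3 source models, 5 classes.
rng = np.random.default_rng(0)
probs = softmax(rng.normal(size=(4, 3, 5)), axis=-1)
scores = rng.normal(size=(4, 3))
blended = blend_predictions(probs, scores)
print(blended.shape, blended.sum(axis=1))  # (4, 5); each row sums to ~1
```

The weighted sum is one plausible reading of the "blending the selected models using a weighted approach" step; the paper itself should be consulted for how the selection weights are actually computed and normalized.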
Pages: 1481-1492
Page count: 12
Related Papers
50 records in total
  • [41] Multi-source to multi-target domain adaptation method based on similarity measurement
    Wu, Lan
    Wang, Han
    Yao, Yuan
    IET IMAGE PROCESSING, 2024, 18 (01) : 34 - 46
  • [42] GBP: Graph convolutional network embedded in bilinear pooling for fine-grained encoding
    Du, Yinan
    Tang, Jian
    Rui, Ting
    Li, Xinxin
    Yang, Chengsong
    COMPUTERS & ELECTRICAL ENGINEERING, 2024, 116
  • [43] Text-Based Fine-Grained Emotion Prediction
    Singh, Gargi
    Brahma, Dhanajit
    Rai, Piyush
    Modi, Ashutosh
    IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, 2024, 15 (02) : 405 - 416
  • [44] GKA: Graph-guided knowledge association for fine-grained visual categorization
    Wang, Yuetian
    Ye, Shuo
    Hou, Wenjin
    Xu, Duanquan
    You, Xinge
    NEUROCOMPUTING, 2025, 634
  • [45] Fine-Grained Visual Computing Based on Deep Learning
    Lv, Zhihan
    Qiao, Liang
    Singh, Amit Kumar
    Wang, Qingjun
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2021, 17 (01)
  • [46] Few-shot image classification using graph neural network with fine-grained feature descriptors
    Ganesan, Priyanka
    Jagatheesaperumal, Senthil Kumar
    Hassan, Mohammad Mehedi
    Pupo, Francesco
    Fortino, Giancarlo
    NEUROCOMPUTING, 2024, 610
  • [47] Press-Plate State Recognition Based on Improved Bilinear Fine-Grained Model
    Yang Qianwen
    Zhou Ke
    LASER & OPTOELECTRONICS PROGRESS, 2021, 58 (20)
  • [48] ConvNeXt-Based Fine-Grained Image Classification and Bilinear Attention Mechanism Model
    Li, Zhiheng
    Gu, Tongcheng
    Li, Bing
    Xu, Wubin
    He, Xin
    Hui, Xiangyu
    APPLIED SCIENCES-BASEL, 2022, 12 (18)
  • [49] Efficient multi-granularity network for fine-grained image classification
    Jiabao Wang
    Yang Li
    Hang Li
    Xun Zhao
    Rui Zhang
    Zhuang Miao
    Journal of Real-Time Image Processing, 2022, 19 : 853 - 866
  • [50] Efficient multi-granularity network for fine-grained image classification
    Wang, Jiabao
    Li, Yang
    Li, Hang
    Zhao, Xun
    Zhang, Rui
    Miao, Zhuang
    JOURNAL OF REAL-TIME IMAGE PROCESSING, 2022, 19 (05) : 853 - 866