Learning to Generate Parameters of ConvNets for Unseen Image Data

Cited by: 0
Authors
Wang, Shiye [1 ]
Feng, Kaituo [1 ]
Li, Changsheng [1 ]
Yuan, Ye [1 ]
Wang, Guoren [1 ]
Affiliations
[1] Beijing Inst Technol, Sch Comp Sci & Technol, Beijing 100081, Peoples R China
Keywords
Training; Task analysis; Correlation; Metalearning; Graphics processing units; Vectors; Adaptive systems; Parameter generation; hypernetwork; adaptive hyper-recurrent units
DOI
10.1109/TIP.2024.3445731
CLC number
TP18 [Artificial intelligence theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Typical Convolutional Neural Networks (ConvNets) depend heavily on large amounts of image data and resort to an iterative optimization algorithm (e.g., SGD or Adam) to learn network parameters, which makes training very time- and resource-intensive. In this paper, we propose a new training paradigm and formulate the parameter learning of ConvNets as a prediction task: given that correlations exist between image datasets and the corresponding optimal network parameters of a given ConvNet, we explore whether we can learn a hyper-mapping between them to capture these relations, so that we can directly predict the parameters of the network for an image dataset never seen during the training phase. To this end, we put forward a new hypernetwork-based model, called PudNet, which learns a mapping between datasets and their corresponding network parameters and then predicts parameters for unseen data with only a single forward propagation. Moreover, our model benefits from a series of adaptive hyper-recurrent units that share weights to capture the dependencies of parameters among different network layers. Extensive experiments demonstrate that our proposed method achieves good efficacy for unseen image datasets in two settings: intra-dataset prediction and inter-dataset prediction. PudNet also scales well to large datasets, e.g., ImageNet-1K. Training ResNet-18 on ImageNet-1K from scratch using GC takes 8,967 GPU seconds and obtains a top-5 accuracy of 44.65%, whereas PudNet costs only 3.89 GPU seconds to predict the network parameters of ResNet-18 while achieving comparable performance (44.92%), more than 2,300 times faster than the traditional training paradigm.
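To make the core idea concrete, below is a minimal, hypothetical sketch of the weight-shared hyper-recurrent generation scheme the abstract describes: a single recurrent cell, reused across all target layers, consumes a dataset context embedding and emits each layer's parameters in one forward pass. All dimensions, names, and the plain-tanh cell are illustrative assumptions, not PudNet's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not the paper's real dimensions).
ctx_dim = 16                                 # dataset context embedding size
hid_dim = 32                                 # hidden state of the shared recurrent cell
layer_sizes = [(8, 16), (16, 16), (16, 4)]   # toy target-layer weight shapes

# One recurrent cell whose weights are SHARED across every target layer,
# mirroring the weight-sharing of the adaptive hyper-recurrent units.
W_h = rng.normal(scale=0.1, size=(hid_dim, hid_dim))   # hidden-to-hidden
W_c = rng.normal(scale=0.1, size=(hid_dim, ctx_dim))   # context-to-hidden
# Per-layer output heads map the hidden state to that layer's flat parameters.
heads = [rng.normal(scale=0.1, size=(out * inp, hid_dim))
         for out, inp in layer_sizes]

def predict_parameters(ctx):
    """Single forward propagation: dataset context -> list of layer weights."""
    h = np.zeros(hid_dim)
    params = []
    for (out, inp), head in zip(layer_sizes, heads):
        # The recurrence carries dependencies between consecutive layers'
        # parameters, so layer k's weights are conditioned on layers 1..k-1.
        h = np.tanh(W_h @ h + W_c @ ctx)
        params.append((head @ h).reshape(out, inp))
    return params

ctx = rng.normal(size=ctx_dim)        # stand-in for a learned dataset embedding
params = predict_parameters(ctx)
print([p.shape for p in params])      # [(8, 16), (16, 16), (16, 4)]
```

In a full system, `ctx` would come from a learned dataset encoder and the cell and heads would be trained against the optimal parameters of many source datasets; the sketch only shows why inference reduces to one cheap forward pass rather than iterative SGD.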
Pages: 5577-5592
Page count: 16