Preserving text space integrity for robust compositional zero-shot learning via mixture of pretrained experts

Cited by: 1
Authors
Hao, Zehua
Liu, Fang [1 ]
Jiao, Licheng
Du, Yaoyang
Li, Shuo
Wang, Hao
Li, Pengfang
Liu, Xu
Chen, Puhua
Affiliations
[1] Xidian Univ, Sch Artificial Intelligent, 2 Taibai South Rd, Xian 710071, Shaanxi, Peoples R China
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation;
Keywords
Compositional zero-shot learning; Mixture of pretrained expert; Deep learning; IMAGE; RECOGNITION; FUSION; VIDEO; MODEL;
DOI
10.1016/j.neucom.2024.128773
CLC number
TP18 [Artificial intelligence theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In the current landscape of Compositional Zero-Shot Learning (CZSL) methods that leverage CLIP, the predominant approach is based on prompt-learning paradigms. These methods incur significant computational complexity when dealing with a large number of categories. Moreover, when confronted with new classification tasks, the prompts must be learned again, which is both time-consuming and resource-intensive. To address these challenges, we present a new methodology, named Mixture of Pretrained Experts (MoPE), for enhancing Compositional Zero-Shot Learning through a logit-level multi-expert fusion module. MoPE blends the benefits of large pretrained models such as BERT, GPT-3, and Word2Vec to tackle Compositional Zero-Shot Learning effectively. First, we extract the text label space of each language model individually, then map the visual feature vectors into their respective text spaces; this preserves the integrity and structure of each original text space. Throughout this process, the pretrained expert parameters are kept frozen, while the mappings from visual features to the corresponding text spaces are learned and can be regarded as multiple learnable visual experts. In the model-fusion phase, we propose a new fusion strategy featuring a gating mechanism that dynamically adjusts the contribution of each expert, enabling the approach to adapt more effectively to a range of tasks and datasets. The method's robustness is demonstrated by the fact that the language models are not tailored to specific downstream datasets or losses, which preserves each large model's topology and broadens its potential applications. Preliminary experiments on the UT-Zappos, AO-Clevr, and C-GQA datasets indicate that MoPE performs competitively with existing techniques.
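The logit-level fusion described above can be illustrated with a minimal sketch. This is not the paper's implementation: the gate scores and per-expert logits below are fixed toy numbers standing in for the learned gating network and the frozen experts' class scores, and the three experts loosely play the roles of the BERT-, GPT-3-, and Word2Vec-based text spaces.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def fuse_expert_logits(expert_logits, gate_scores):
    """Logit-level fusion: normalize the gate scores with a softmax,
    then sum each frozen expert's class logits weighted by its gate.

    expert_logits: one logit vector per expert (same class count each)
    gate_scores:   one raw score per expert (learned in MoPE; fixed
                   illustrative values here)
    """
    weights = softmax(gate_scores)
    n_classes = len(expert_logits[0])
    fused = [0.0] * n_classes
    for w, logits in zip(weights, expert_logits):
        for i, logit in enumerate(logits):
            fused[i] += w * logit
    return fused

# Toy example: three "experts" scoring four composition classes.
bert_logits = [2.0, 0.5, -1.0, 0.1]
gpt3_logits = [1.5, 1.0, -0.5, 0.0]
w2v_logits  = [0.8, 0.2, -0.2, 0.4]

fused = fuse_expert_logits([bert_logits, gpt3_logits, w2v_logits],
                           gate_scores=[1.0, 0.5, 0.0])
predicted_class = max(range(len(fused)), key=fused.__getitem__)  # argmax
```

Because the gate weights are softmax-normalized, the fused output stays on the same scale as the individual experts' logits, and an expert whose gate score is driven low contributes little to the final prediction.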
Pages: 12