EFCMF: A Multimodal Robustness Enhancement Framework for Fine-Grained Recognition

Cited by: 0
Authors
Zou, Rongping [1 ,2 ,3 ]
Zhu, Bin [1 ,2 ,3 ]
Chen, Yi [1 ,2 ,3 ]
Xie, Bo [1 ,2 ,3 ]
Shao, Bin [1 ,2 ,3 ]
Affiliations
[1] Natl Univ Def Technol, Coll Elect Engn, Hefei 230037, Peoples R China
[2] State Key Lab Pulsed Power Laser Technol, Hefei 230037, Peoples R China
[3] Key Lab Infrared & Low Temp Plasma Anhui Prov, Hefei 230037, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2023, Vol. 13, Issue 03
Funding
National Science Foundation (USA);
Keywords
fine-grained recognition; multimodal; modal missing; adversarial examples;
DOI
10.3390/app13031640
Chinese Library Classification (CLC)
O6 [Chemistry];
Discipline Classification Code
0703;
Abstract
Fine-grained recognition, which aims to identify targets at the subcategory level, has applications in many fields. It is a highly challenging task because the differences between subcategories are subtle. Fine-grained recognition tasks based on multimodal data are also prone to missing modalities and adversarial example attacks, both of which can easily cause the model to fail. An Enhanced Framework for the Complementarity of Multimodal Features (EFCMF) is proposed in this study to address this problem. The model's learning of multimodal complementarity is enhanced by randomly deactivating modal features in the constructed multimodal fine-grained recognition model. The results show that the model gains the ability to handle missing modalities without additional training and achieves 91.14% and 99.31% accuracy on the Birds and Flowers datasets, respectively. When facing four adversarial example attacks, namely FGSM, BIM, PGD and C&W, the average accuracy of EFCMF on the two datasets is 52.85%, which is 27.13% higher than that of Bi-modal PMA. In the case of missing modalities, the average accuracy of EFCMF over the two datasets is 76.33%, which is 32.63% higher than that of Bi-modal PMA. Compared with existing methods, EFCMF is robust to missing modalities and adversarial example attacks in multimodal fine-grained recognition tasks. The source code is available at https://github.com/RPZ97/EFCMF (accessed on 8 January 2023).
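The abstract's core mechanism, randomly deactivating modal features so that the fused model learns complementary rather than co-dependent representations, can be illustrated with a short sketch. The module below is a minimal, hypothetical PyTorch illustration of this modality-dropout idea; the class name `ModalityDropoutFusion`, the `p_drop` parameter, and the simple concatenation fusion are assumptions for illustration, not the authors' EFCMF implementation (which is available at the GitHub link above).

```python
import random

import torch
import torch.nn as nn


class ModalityDropoutFusion(nn.Module):
    """Late-fusion classifier that randomly deactivates one modality during training."""

    def __init__(self, feat_dim: int, num_classes: int, p_drop: float = 0.3):
        super().__init__()
        self.p_drop = p_drop  # probability of zeroing out each modality's features
        self.classifier = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, img_feat: torch.Tensor, txt_feat: torch.Tensor) -> torch.Tensor:
        if self.training:
            # Independently decide whether to deactivate each modality, but never
            # both at once, so the classifier always receives some signal and is
            # forced to exploit whichever modality survives.
            drop_img = random.random() < self.p_drop
            drop_txt = random.random() < self.p_drop
            if drop_img and drop_txt:
                drop_txt = False
            if drop_img:
                img_feat = torch.zeros_like(img_feat)
            if drop_txt:
                txt_feat = torch.zeros_like(txt_feat)
        # Simple fusion by concatenation; a modality missing at test time can
        # likewise be represented by a zero vector without retraining.
        fused = torch.cat([img_feat, txt_feat], dim=-1)
        return self.classifier(fused)


# Usage with placeholder features standing in for image/text encoder outputs.
model = ModalityDropoutFusion(feat_dim=512, num_classes=200)
logits = model(torch.randn(8, 512), torch.randn(8, 512))  # -> shape (8, 200)
```

Training with such random deactivation is what allows a missing modality to be handled at inference time without retraining: the classifier has already seen zeroed inputs for each modality during training.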
Pages: 15
References
42 records in total
  • [1] Carion, N. (2020). Computer Vision – ECCV 2020: 16th European Conference Proceedings, LNCS 12346, p. 213. DOI: 10.1007/978-3-030-58452-8_13
  • [2] Carlini, N.; Wagner, D. Towards Evaluating the Robustness of Neural Networks. 2017 IEEE Symposium on Security and Privacy (SP), 2017, pp. 39–57.
  • [3] Chen, T. (2018). arXiv preprint.
  • [4] Cui, Y.; Song, Y.; Sun, C.; Howard, A.; Belongie, S. Large Scale Fine-Grained Categorization and Domain-Specific Transfer Learning. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 4109–4118.
  • [5] Deng, J. (2009). IEEE Conference on Computer Vision and Pattern Recognition, p. 248. DOI: 10.1109/CVPR.2009.5206848
  • [6] He, X.; Peng, Y. Fine-Grained Image Classification via Combining Vision and Language. 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), 2017, pp. 7332–7340.
  • [7] He, Y.; Tian, L.; Zhang, L.; Zeng, X. Knowledge Graph Representation Fusion Framework for Fine-Grained Object Recognition in Smart Cities. Complexity, 2021, Vol. 2021.
  • [8] Hou, S.; Feng, Y.; Wang, Z. VegFru: A Domain-Specific Dataset for Fine-Grained Visual Categorization. 2017 IEEE International Conference on Computer Vision (ICCV), 2017, pp. 541–549.
  • [9] Huang, G.; Liu, Z.; van der Maaten, L.; Weinberger, K. Q. Densely Connected Convolutional Networks. 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), 2017, pp. 2261–2269.
  • [10] Huang, S.; Xu, Z.; Tao, D.; Zhang, Y. Part-Stacked CNN for Fine-Grained Visual Categorization. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 1173–1182.