On the Imaginary Wings: Text-Assisted Complex-Valued Fusion Network for Fine-Grained Visual Classification

Cited: 10
Authors
Guan, Xiang [1 ]
Yang, Yang [1 ]
Li, Jingjing [1 ]
Zhu, Xiaofeng [1 ]
Song, Jingkuan [1 ]
Shen, Heng Tao [1 ,2 ]
Affiliations
[1] Univ Elect Sci & Technol China, Ctr Future Media, Chengdu 611731, Peoples R China
[2] Peng Cheng Lab, Shenzhen 518066, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Complex values; fine-grained visual classification (FGVC); graph convolutional networks (GCNs); multimodal;
DOI
10.1109/TNNLS.2021.3126046
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Fine-grained visual classification (FGVC) is challenging due to the interclass similarity and intraclass variation in datasets. In this work, we exploit two ideas to tackle these two challenges simultaneously: complex values, whose imaginary part is well suited to modeling data uncertainty (e.g., different points on the complex plane can describe the same state), and graph convolutional networks (GCNs), which learn the interdependencies among classes. To this end, we propose a novel approach, termed text-assisted complex-valued fusion network (TA-CFN). Specifically, we expand each feature from a 1-D real value to a 2-D complex value by disassembling feature maps, thereby enabling the extension of traditional deep convolutional neural networks over the complex domain. Then, we fuse the real and imaginary parts of the complex features through a complex projection and a modulus operation. Finally, we build an undirected graph over the object labels with the assistance of a text corpus, and a GCN is learned to map this graph into a set of classifiers. The benefits are twofold: 1) complex features allow for a richer algebraic structure that better models the large variation within the same category and 2) the interclass dependencies captured by the GCN help identify the key factors behind the slight variation among different categories. We conduct extensive experiments to verify that our proposed model achieves state-of-the-art performance on two widely used FGVC datasets.
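As a rough illustration of the two core operations the abstract describes, the sketch below is a minimal NumPy mock-up, not the authors' implementation: all function names are hypothetical, and the per-feature details of TA-CFN's complex projection and label graph are not reproduced. It shows a real feature vector split into real and imaginary halves, fused via the complex modulus, and a single standard GCN propagation step of the kind used to map a label graph into classifiers.

```python
import numpy as np

def complex_fusion(features):
    """Hypothetical sketch: disassemble a 1-D real feature vector into
    real/imaginary halves and fuse them with the complex modulus |a+bi|."""
    d = features.shape[-1] // 2
    real, imag = features[..., :d], features[..., d:2 * d]
    return np.sqrt(real ** 2 + imag ** 2)  # modulus of each complex entry

def gcn_layer(A, X, W):
    """One standard GCN propagation step (Kipf-Welling style):
    symmetrically normalized adjacency, linear transform, ReLU."""
    A_hat = A + np.eye(A.shape[0])                       # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W, 0.0)

# Toy usage: an 8-D real feature becomes a 4-D complex-fused feature,
# and a 2-node label graph with 3-D word embeddings yields 5-D classifiers.
fused = complex_fusion(np.ones(8))                        # shape (4,)
A = np.array([[0.0, 1.0], [1.0, 0.0]])                    # label adjacency
classifiers = gcn_layer(A, np.ones((2, 3)), np.ones((3, 5)))  # shape (2, 5)
```

In the paper's pipeline the GCN input would be text-corpus label embeddings and its output would be dotted with the fused image features to score each class; the toy arrays above stand in only to make the shapes concrete.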
Pages: 5112-5121 (10 pages)
Related Papers
50 items in total
  • [21] Lin T.-Y., 2017, BMVC.
  • [22] Lin, Tsung-Yu; RoyChowdhury, Aruni; Maji, Subhransu. Bilinear CNN Models for Fine-grained Visual Recognition. IEEE International Conference on Computer Vision (ICCV), 2015: 1449-1457.
  • [23] Monti F., 2018, arXiv, DOI: arXiv:1806.00770.
  • [24] Peng, Yuxin; He, Xiangteng; Zhao, Junjie. Object-Part Attention Model for Fine-Grained Image Classification. IEEE Transactions on Image Processing, 2018, 27(3): 1487-1500.
  • [25] Pfeifenberger L., 2019, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP): 2902. DOI: 10.1109/ICASSP.2019.8683517.
  • [26] Reed, Scott; Akata, Zeynep; Lee, Honglak; Schiele, Bernt. Learning Deep Representations of Fine-Grained Visual Descriptions. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016: 49-58.
  • [27] Reichert D. P., 2014, Proceedings of the International Conference on Learning Representations (ICLR).
  • [28] Scardapane, Simone; Van Vaerenbergh, Steven; Hussain, Amir; Uncini, Aurelio. Complex-Valued Neural Networks With Nonparametric Activation Functions. IEEE Transactions on Emerging Topics in Computational Intelligence, 2020, 4(2): 140-150.
  • [29] Shen K., 2020, Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence (IJCAI): 941.
  • [30] Shu, Xiangbo; Qi, Guo-Jun; Tang, Jinhui; Wang, Jingdong. Weakly-Shared Deep Transfer Networks for Heterogeneous-Domain Knowledge Propagation. MM'15: Proceedings of the 2015 ACM Multimedia Conference, 2015: 35-44.