On the Imaginary Wings: Text-Assisted Complex-Valued Fusion Network for Fine-Grained Visual Classification

Cited by: 10
Authors
Guan, Xiang [1 ]
Yang, Yang [1 ]
Li, Jingjing [1 ]
Zhu, Xiaofeng [1 ]
Song, Jingkuan [1 ]
Shen, Heng Tao [1 ,2 ]
Affiliations
[1] Univ Elect Sci & Technol China, Ctr Future Media, Chengdu 611731, Peoples R China
[2] Peng Cheng Lab, Shenzhen 518066, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Complex values; fine-grained visual classification (FGVC); graph convolutional networks (GCNs); multimodal;
DOI
10.1109/TNNLS.2021.3126046
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Fine-grained visual classification (FGVC) is challenging due to the interclass similarity and intraclass variation in datasets. In this work, we explore the great merit of complex values, whose imaginary part helps model data uncertainty (e.g., different points on the complex plane can describe the same state), and of graph convolutional networks (GCNs), which learn interdependencies among classes, to simultaneously tackle these two major challenges. To this end, we propose a novel approach, termed the text-assisted complex-valued fusion network (TA-CFN). Specifically, we expand each feature from a 1-D real value to a 2-D complex value by disassembling feature maps, thereby enabling the extension of traditional deep convolutional neural networks over the complex domain. Then, we fuse the real and imaginary parts of the complex features through a complex projection and a modulus operation. Finally, we build an undirected graph over the object labels with the assistance of a text corpus, and a GCN is learned to map this graph into a set of classifiers. The benefits are twofold: 1) complex features allow for a richer algebraic structure that better models the large variation within the same category and 2) the interclass dependencies captured by the GCN help identify the key factors behind the slight variations among different categories. We conduct extensive experiments verifying that the proposed model achieves state-of-the-art performance on two widely used FGVC datasets.
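The abstract outlines a three-stage pipeline (complex-valued feature disassembly, real/imaginary fusion via a modulus, and a label GCN that emits classifiers). The following is a minimal PyTorch sketch of one plausible reading of that pipeline, not the authors' released implementation: the channel-split "disassembly", the learned complex projection, the two-layer GCN, and all module and parameter names are assumptions introduced here for illustration, and the adjacency matrix is a random stand-in for the corpus-derived label graph.

```python
# Hypothetical sketch of a TA-CFN-style pipeline; names and design choices are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ComplexFusion(nn.Module):
    """Split a real feature map into real/imaginary halves and fuse them by modulus."""

    def __init__(self, channels: int, out_dim: int):
        super().__init__()
        assert channels % 2 == 0
        half = channels // 2
        # Learned "complex projection" applied to the (real, imaginary) pair.
        self.proj_real = nn.Linear(half, out_dim)
        self.proj_imag = nn.Linear(half, out_dim)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) from a CNN backbone; global-average-pool to (B, C).
        pooled = feat.mean(dim=(2, 3))
        real, imag = pooled.chunk(2, dim=1)           # disassemble into 2-D complex values
        zr, zi = self.proj_real(real), self.proj_imag(imag)
        return torch.sqrt(zr ** 2 + zi ** 2 + 1e-8)   # modulus fuses real and imaginary parts


class LabelGCN(nn.Module):
    """Two-layer GCN mapping label-word embeddings to per-class classifier weights."""

    def __init__(self, word_dim: int, hidden: int, out_dim: int):
        super().__init__()
        self.w1 = nn.Linear(word_dim, hidden)
        self.w2 = nn.Linear(hidden, out_dim)

    def forward(self, label_emb: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # adj: (K, K) normalized adjacency of the label graph built from a text corpus.
        h = F.relu(adj @ self.w1(label_emb))
        return adj @ self.w2(h)                       # (K, out_dim) classifier weights


# Usage: class scores are inner products of fused image features and GCN-generated classifiers.
B, C, K, word_dim, out_dim = 4, 2048, 200, 300, 512
fusion, gcn = ComplexFusion(C, out_dim), LabelGCN(word_dim, 1024, out_dim)
feat = torch.randn(B, C, 7, 7)                        # backbone feature map (e.g., ResNet-50)
label_emb = torch.randn(K, word_dim)                  # label word embeddings
adj = torch.softmax(torch.randn(K, K), dim=1)         # stand-in for the corpus-derived graph
logits = fusion(feat) @ gcn(label_emb, adj).t()       # (B, K) class scores
```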
Pages: 5112-5121
Number of pages: 10