From coarse to fine: multi-level feature fusion network for fine-grained image retrieval

Citations: 3
Authors
Wang, Shijie [1 ]
Wang, Zhihui [1 ,2 ]
Wang, Ning [1 ]
Wang, Hong [1 ]
Li, Haojie [1 ,2 ]
Affiliations
[1] Dalian Univ Technol, Int Sch Informat Sci & Engn, Dalian, Peoples R China
[2] Key Lab Ubiquitous Network & Serv Software Liaoni, Dalian, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Convolutional neural network; Multi-level feature fusion; Fine-grained image retrieval;
DOI
10.1007/s00530-022-00899-6
CLC number
TP [Automation Technology, Computer Technology];
Discipline code
0812;
Abstract
Fine-grained image retrieval (FGIR) has received extensive attention in academia and industry. Despite tremendous progress, the problem of large intra-class differences and small inter-class differences remains open. Existing fine-grained image classification works, which face the same challenge as FGIR, focus on learning discriminative local features to address it. Based on this observation, it is unreasonable to use only global features (i.e., object features or image features) and ignore discriminative local features (i.e., patch features) for FGIR. In this paper, we propose a novel coarse-to-fine multi-level feature fusion network (MFFN) that tackles this problem by extracting and fusing multi-level features. MFFN first adopts object-level features for coarse retrieval, a step that narrows the retrieval scope. In the fine retrieval stage, the fused multi-level features deeply mine the intrinsic correlation and complementary information between patch-level and image-level features through a deep belief network (DBN). In addition, for patch-level features, we design a new constraint to select discriminative patches and propose a weighted max-pooling method to aggregate these distinguishing patches. The proposed framework achieves new state-of-the-art performance on widely used benchmarks, including the CUB-200-2011 and Oxford-Flower-102 datasets.
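The abstract does not specify the exact form of the weighted max-pooling aggregation. A minimal sketch of one plausible reading (scale each patch feature by a per-patch importance weight, then take the element-wise maximum across patches) is shown below; the function name, weights, and toy values are illustrative assumptions, not the authors' implementation.

```python
def weighted_max_pooling(patch_features, weights):
    """Aggregate per-patch feature vectors into a single descriptor.

    patch_features: list of N feature vectors (each of dimension D).
    weights: list of N per-patch importance scores (illustrative;
             in the paper these would come from patch selection).
    """
    # Scale each patch's feature vector by its importance weight.
    scaled = [[w * x for x in feat] for feat, w in zip(patch_features, weights)]
    # Element-wise maximum across all patches yields the pooled descriptor.
    return [max(col) for col in zip(*scaled)]

# Toy example: 3 patches with 4-dimensional features (values are made up).
feats = [[1.0, 0.0, 2.0, 1.0],
         [0.5, 3.0, 1.0, 0.0],
         [2.0, 1.0, 0.0, 1.0]]
w = [0.5, 1.0, 0.2]
descriptor = weighted_max_pooling(feats, w)  # [0.5, 3.0, 1.0, 0.5]
```

Compared with plain max-pooling, the per-patch weight lets less discriminative patches contribute less to every dimension of the pooled descriptor.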
Pages: 1515-1528
Number of pages: 14
Related papers
50 records in total
  • [21] Complemental Attention Multi-Feature Fusion Network for Fine-Grained Classification
    Miao, Zhuang
    Zhao, Xun
    Wang, Jiabao
    Li, Yang
    Li, Hang
    IEEE SIGNAL PROCESSING LETTERS, 2021, 28 : 1983 - 1987
  • [22] Multi-Level Fine-Grained Interactions for Collaborative Filtering
    Feng, Xingjie
    Zeng, Yunze
    IEEE ACCESS, 2019, 7 : 143169 - 143184
  • [23] Lifelong Fine-Grained Image Retrieval
    Chen, Wei
    Xu, Haoyang
    Pu, Nan
    Liu, Yu
    Lao, Mingrui
    Wang, Weiping
    Liu, Li
    Lew, Michael S.
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 7533 - 7544
  • [24] Coordinate feature fusion networks for fine-grained image classification
    Liao, Kaiyang
    Huang, Gang
    Zheng, Yuanlin
    Lin, Guangfeng
    Cao, Congjun
    SIGNAL IMAGE AND VIDEO PROCESSING, 2023, 17 (03) : 807 - 815
  • [26] Multi Fine-Grained Fusion Network for Depression Detection
    Zhou, Li
    Liu, Zhenyu
    Li, Yutong
    Duan, Yuchi
    Yu, Huimin
    Hu, Bin
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2024, 20 (08)
  • [27] Multi-FusNet: fusion mapping of features for fine-grained image retrieval networks
    Cui, Xiaohui
    Li, Huan
    Liu, Lei
    Wang, Sheng
    Xu, Fu
    PEERJ COMPUTER SCIENCE, 2024, 10
  • [28] Fine-grained Global Perception Multi-focus Image Fusion Network
    Wu K.
    Mei Y.
    Hunan Daxue Xuebao/Journal of Hunan University Natural Sciences, 2023, 50 (12) : 10 - 18
  • [29] Multi-Grained Selection and Fusion for Fine-Grained Image Representation
    Jiang, Jianrong
    Wang, Hongxing
    2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020,
  • [30] Adaptive Feature Fusion Embedding Network for Few Shot Fine-Grained Image Classification
    Xie, Yaohua
    Zhang, Weichuan
    Ren, Jie
    Jing, Junfeng
    Computer Engineering and Applications, 2024, 59 (03) : 184 - 192