MVItem: A Benchmark for Multi-View Cross-Modal Item Retrieval

Cited by: 0
Authors
Li, Bo [1 ]
Zhu, Jiansheng [2 ]
Dai, Linlin [3 ]
Jing, Hui [3 ]
Huang, Zhizheng [3 ]
Sui, Yuteng [1 ]
Affiliations
[1] China Acad Railway Sci, Postgrad Dept, Beijing 100081, Peoples R China
[2] China Railway, Dept Sci Technol & Informat, Beijing 100844, Peoples R China
[3] China Acad Railway Sci Corp Ltd, Inst Comp Technol, Beijing 100081, Peoples R China
Source
IEEE ACCESS | 2024, Vol. 12
Keywords
Annotations; Benchmark testing; Text to image; Deep learning; Contrastive learning; Open source software; Cross-modal retrieval; deep learning; item retrieval; contrastive text-image pre-training model; multi-view
DOI
10.1109/ACCESS.2024.3447872
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Subject Classification Code
0812
Abstract
Existing text-image pre-training models have demonstrated strong generalization capabilities; however, their item-retrieval performance in real-world scenarios still falls short of expectations. To optimize text-image pre-training models for item retrieval in real scenarios, we present MVItem, a benchmark for exploring multi-view item retrieval built on the open-source dataset MVImgNet. First, we evenly sample items in MVImgNet to obtain 5 images from different views and automatically annotate these images with MiniGPT-4. Then, through manual cleaning and comparison, we provide a high-quality textual description for each sample. Next, to investigate the spatial-misalignment problem of item retrieval in real-world scenarios and mitigate its impact, we devise a multi-view feature fusion strategy and propose a cosine-distance balancing method based on Sequential Least Squares Programming (SLSQP), termed balancing cosine distance (BCD), to fuse multiple view vectors. On this basis, we select representative state-of-the-art text-image pre-training retrieval models as baselines and establish multiple test groups to examine how multi-view information aids item retrieval and eases potential spatial misalignment. The experimental results show that retrieval with fused multi-view features is generally better than the baselines, indicating that multi-view feature fusion helps alleviate the impact of spatial misalignment on item retrieval. Moreover, the proposed fusion, balancing cosine distance (BCD), generally outperforms feature averaging, denoted balancing euclidean distance (BED) in this work.
From the results, we find that fusing multiple images with different views is more helpful for text-to-image (T2I) retrieval, while fusing a small number of images with large view differences is more helpful for image-to-image (I2I) retrieval.
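The abstract describes fusing multiple view vectors via an SLSQP-based cosine-distance balancing. The paper does not publish the objective here, so the following is a minimal sketch of one plausible reading: find convex combination weights for the view embeddings such that the fused vector's cosine distances to all views are as balanced (equal) as possible, with plain averaging shown as the BED baseline. The function names `fuse_bcd` and `fuse_bed` and the variance-of-distances objective are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np
from scipy.optimize import minimize


def cosine_distance(a, b):
    # 1 - cosine similarity between two vectors
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))


def fuse_bcd(views):
    """Fuse view embeddings so the fused vector is (approximately)
    equidistant, in cosine distance, from every view (hypothetical
    reading of the paper's 'balancing cosine distance')."""
    views = np.asarray(views, dtype=float)  # shape (n_views, dim)
    n = len(views)

    def objective(w):
        fused = w @ views
        dists = np.array([cosine_distance(fused, v) for v in views])
        # Balance the distances: minimize their spread across views
        return np.var(dists)

    w0 = np.full(n, 1.0 / n)  # start from uniform weights
    res = minimize(
        objective, w0, method="SLSQP",
        bounds=[(0.0, 1.0)] * n,
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
    )
    return res.x @ views


def fuse_bed(views):
    # Plain feature averaging (the BED baseline in the paper)
    return np.asarray(views, dtype=float).mean(axis=0)
```

At query time the fused vector would replace the single-view embedding when ranking gallery items by cosine similarity; the SLSQP solver simply searches the weight simplex for the most distance-balanced combination.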
Pages: 119563 - 119576
Page count: 14
Related Papers
50 in total
  • [41] A Graph Model for Cross-modal Retrieval
    Wang, Shixun
    Pan, Peng
    Lu, Yansheng
    PROCEEDINGS OF 3RD INTERNATIONAL CONFERENCE ON MULTIMEDIA TECHNOLOGY (ICMT-13), 2013, 84 : 1090 - 1097
  • [42] Semantics Disentangling for Cross-Modal Retrieval
    Wang, Zheng
    Xu, Xing
    Wei, Jiwei
    Xie, Ning
    Yang, Yang
    Shen, Heng Tao
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2024, 33 : 2226 - 2237
  • [43] Continual learning in cross-modal retrieval
    Wang, Kai
    Herranz, Luis
    van de Weijer, Joost
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2021, 2021, : 3623 - 3633
  • [44] Deep Supervised Cross-modal Retrieval
    Zhen, Liangli
    Hu, Peng
    Wang, Xu
    Peng, Dezhong
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 10386 - 10395
  • [45] Cross-modal retrieval with dual optimization
    Xu, Qingzhen
    Liu, Shuang
    Qiao, Han
    Li, Miao
    Multimedia Tools and Applications, 2023, 82 : 7141 - 7157
  • [46] Learning DALTS for cross-modal retrieval
    Yu, Zheng
    Wang, Wenmin
    CAAI TRANSACTIONS ON INTELLIGENCE TECHNOLOGY, 2019, 4 (01) : 9 - 16
  • [47] Sequential Learning for Cross-modal Retrieval
    Song, Ge
    Tan, Xiaoyang
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW), 2019, : 4531 - 4539
  • [48] Correspondence Autoencoders for Cross-Modal Retrieval
    Feng, Fangxiang
    Wang, Xiaojie
    Li, Ruifan
    Ahmad, Ibrar
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2015, 12 (01)
  • [49] Cross-modal Retrieval with Label Completion
    Xu, Xing
    Shen, Fumin
    Yang, Yang
    Shen, Heng Tao
    He, Li
    Song, Jingkuan
    MM'16: PROCEEDINGS OF THE 2016 ACM MULTIMEDIA CONFERENCE, 2016, : 302 - 306
  • [50] FedCMR: Federated Cross-Modal Retrieval
    Zong, Linlin
    Xie, Qiujie
    Zhou, Jiahui
    Wu, Peiran
    Zhang, Xianchao
    Xu, Bo
    SIGIR '21 - PROCEEDINGS OF THE 44TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, 2021, : 1672 - 1676