Multi-View Graph Matching for 3D Model Retrieval

Cited by: 4
Authors
Su, Yu-Ting [1 ]
Li, Wen-Hui [1 ]
Nie, Wei-Zhi [1 ]
Liu, An-An [1 ]
Affiliation
[1] Tianjin Univ, 92 Weijin Rd, Tianjin 300072, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
3D model retrieval; graph matching; unsupervised learning; OBJECT RETRIEVAL; RECOGNITION; CLASSIFICATION; SEARCH;
DOI
10.1145/3387920
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
3D model retrieval has been widely utilized in numerous domains, such as computer-aided design, digital entertainment, and virtual reality. Recently, many graph-based methods have been proposed to address this task by using multi-view information of 3D models. However, these methods are always constrained by many-to-many graph matching for the similarity measure between pairwise models. In this article, we propose a multi-view graph matching method (MVGM) for 3D model retrieval. The proposed method can decompose the complicated multi-view graph-based similarity measure into multiple single-view graph-based similarity measures and fusion. First, we present the method for single-view graph generation, and we further propose a novel method for the similarity measure in a single-view graph by leveraging both node-wise context and model-wise context. Then, we propose multi-view fusion with diffusion, which can collaboratively integrate multiple single-view similarities w.r.t. different viewpoints and adaptively learn their weights, to compute the multi-view similarity between pairwise models. In this way, the proposed method can avoid the difficulty in the definition and computation of the traditional high-order graph. Moreover, this method is unsupervised and does not require a large-scale 3D dataset for model learning. We conduct evaluations on four popular and challenging datasets. The extensive experiments demonstrate the superiority and effectiveness of the proposed method compared against the state of the art. In particular, this unsupervised method can achieve competitive performance against the most recent supervised and deep learning methods.
Pages: 20
Related Papers
50 total
[41]   The i3DPost multi-view and 3D human action/interaction database [J].
Gkalelis, Nikolaos ;
Kim, Hansung ;
Hilton, Adrian ;
Nikolaidis, Nikos ;
Pitas, Ioannis .
2009 CONFERENCE FOR VISUAL MEDIA PRODUCTION: CVMP 2009, 2009, :159-168
[42]   Latent Heterogeneous Graph Network for Incomplete Multi-View Learning [J].
Zhu, Pengfei ;
Yao, Xinjie ;
Wang, Yu ;
Cao, Meng ;
Hui, Binyuan ;
Zhao, Shuai ;
Hu, Qinghua .
IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 :3033-3045
[43]   Multi-View Object Retrieval via Multi-Scale Topic Models [J].
Hong, Richang ;
Hu, Zhenzhen ;
Wang, Ruxin ;
Wang, Meng ;
Tao, Dacheng .
IEEE TRANSACTIONS ON IMAGE PROCESSING, 2016, 25 (12) :5814-5827
[44]   Multi-View Saliency Guided Deep Neural Network for 3-D Object Retrieval and Classification [J].
Zhou, He-Yu ;
Liu, An-An ;
Nie, Wei-Zhi ;
Nie, Jie .
IEEE TRANSACTIONS ON MULTIMEDIA, 2020, 22 (06) :1496-1506
[45]   Multi-view clustering with filtered bipartite graph [J].
Ji, Jintian ;
Peng, Hailei ;
Feng, Songhe .
APPLIED INTELLIGENCE, 2025, 55 (07)
[46]   MD-Mamba: Feature extractor on 3D representation with multi-view depth [J].
Li, Qihui ;
Li, Zongtan ;
Tian, Lianfang ;
Du, Qiliang ;
Lu, Guoyu .
IMAGE AND VISION COMPUTING, 2025, 154
[47]   Multi-View Silhouette and Depth Decomposition for High Resolution 3D Object Representation [J].
Smith, Edward ;
Fujimoto, Scott ;
Meger, David .
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
[48]   3D Point Cloud Object Detection with Multi-View Convolutional Neural Network [J].
Pang, Guan ;
Neumann, Ulrich .
2016 23RD INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2016, :585-590
[49]   3D Point Cloud Recognition Based on a Multi-View Convolutional Neural Network [J].
Zhang, Le ;
Sun, Jian ;
Zheng, Qiang .
SENSORS, 2018, 18 (11)
[50]   Classification of 3D Archaeological Objects Using Multi-View Curvature Structure Signatures [J].
Canul-Ku, Mario ;
Hasimoto-Beltran, Rogelio ;
Jimenez-Badillo, Diego ;
Ruiz-Correa, Salvador ;
Roman-Rangel, Edgar .
IEEE ACCESS, 2019, 7 :3298-3313