Cross-Domain Correspondence for Sketch-Based 3D Model Retrieval Using Convolutional Neural Network and Manifold Ranking

Cited by: 2
Authors
Jiao, Shichao [1 ]
Han, Xie [1 ]
Xiong, Fengguang [1 ]
Sun, Fusheng [1 ]
Zhao, Rong [1 ]
Kuang, Liqun [1 ]
Affiliation
[1] North Univ China, Sch Data Sci & Technol, Taiyuan 030051, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Sketch; 3D model retrieval; deep learning; semantic labels; manifold ranking; convolutional neural network; SHAPE RETRIEVAL; FEATURES;
DOI
10.1109/ACCESS.2020.3006585
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812 ;
Abstract
Due to the large difference in representation between sketches and 3D models, sketch-based 3D model retrieval is a challenging problem in graphics and computer vision. Some state-of-the-art approaches extract features from 2D sketches, produce multiple projection views of 3D models, and then select one view of each 3D model to match the sketch. However, it is hard to find "the best view", and views from different perspectives of the same 3D model may be completely different. Other methods apply learned features to retrieve 3D models from a 2D sketch; yet sketches are abstract images, usually drawn subjectively, and are therefore difficult to learn accurately. To address these problems, we propose a cross-domain correspondence method for sketch-based 3D model retrieval based on manifold ranking. Specifically, we first extract learned features of sketches and 3D models with a two-part CNN structure. Subsequently, we generate cross-domain undirected graphs using the learned features and semantic labels to establish correspondence between sketches and 3D models. Finally, the retrieval results are computed by manifold ranking. Experimental results on the SHREC 13 and SHREC 14 datasets show superior performance on all seven standard metrics compared to the state-of-the-art approaches.
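The final ranking step in the abstract refers to manifold ranking over a cross-domain graph. The snippet below is a generic illustration of the standard closed-form manifold-ranking formulation (Zhou et al.-style) in Python/NumPy, not the paper's implementation: the affinity matrix `W`, the query indicator `y`, and the regularization weight `alpha` are assumptions standing in for the paper's cross-domain graph of sketch and 3D-model nodes.

```python
import numpy as np

def manifold_ranking(W, y, alpha=0.99):
    """Closed-form manifold ranking on an affinity matrix W.

    W     : (n, n) symmetric non-negative affinity matrix over graph
            nodes (here: sketch and 3D-model nodes taken together).
    y     : (n,) query indicator vector (1 at the query node, else 0).
    alpha : weight balancing graph smoothness against fitting y.
    Returns ranking scores f; a higher score means more relevant.
    """
    d = W.sum(axis=1)
    # Symmetric normalization S = D^{-1/2} W D^{-1/2}.
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ W @ D_inv_sqrt
    n = W.shape[0]
    # Closed-form solution f* = (1 - alpha) (I - alpha S)^{-1} y.
    return np.linalg.solve(np.eye(n) - alpha * S, (1 - alpha) * y)
```

In practice, scores spread along strongly connected paths of the graph, so 3D-model nodes linked to the query sketch (directly, or through nodes sharing its semantic label) rank above weakly connected ones.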
Pages: 121584-121595
Page count: 12