Cross-Domain Correspondence for Sketch-Based 3D Model Retrieval Using Convolutional Neural Network and Manifold Ranking

Cited by: 2
Authors
Jiao, Shichao [1 ]
Han, Xie [1 ]
Xiong, Fengguang [1 ]
Sun, Fusheng [1 ]
Zhao, Rong [1 ]
Kuang, Liqun [1 ]
Affiliations
[1] North Univ China, Sch Data Sci & Technol, Taiyuan 030051, Peoples R China
Source
IEEE ACCESS | 2020, Vol. 8, Issue 08
Funding
National Natural Science Foundation of China;
Keywords
Sketch; 3D model retrieval; deep learning; semantic labels; manifold ranking; convolutional neural network; SHAPE RETRIEVAL; FEATURES;
DOI
10.1109/ACCESS.2020.3006585
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Due to the large difference in representation between sketches and 3D models, sketch-based 3D model retrieval is a challenging problem in graphics and computer vision. Some state-of-the-art approaches extract features from 2D sketches, render multiple projection views of each 3D model, and then select one view to match the sketch. However, "the best view" is hard to determine, and views of a 3D model from different perspectives may be completely different. Other methods apply learned features to retrieve 3D models from 2D sketches; yet sketches are abstract images, usually drawn subjectively, and are therefore difficult to learn accurately. To address these problems, we propose a cross-domain correspondence method for sketch-based 3D model retrieval based on manifold ranking. Specifically, we first extract learned features of sketches and 3D models with a two-part CNN structure. We then build cross-domain undirected graphs from the learned features and semantic labels to establish correspondence between sketches and 3D models. Finally, the retrieval results are computed by manifold ranking. Experimental results on the SHREC 13 and SHREC 14 datasets show superior performance on all seven standard metrics compared with state-of-the-art approaches.
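The final retrieval step described above, manifold ranking, can be sketched as follows. This is a minimal, generic implementation of manifold ranking on a fully connected Gaussian-affinity graph (in the style of Zhou et al.'s ranking-on-data-manifolds formulation), not the paper's exact cross-domain graph built from CNN features and semantic labels; the function name and the `alpha`/`sigma` parameters are illustrative assumptions:

```python
import numpy as np

def manifold_ranking(features, query_index, alpha=0.9, sigma=1.0):
    """Rank all items against one query by diffusing relevance
    over a Gaussian-affinity graph (generic manifold ranking)."""
    n = features.shape[0]
    # Pairwise squared distances -> Gaussian affinities, zero diagonal.
    d2 = np.sum((features[:, None, :] - features[None, :, :]) ** 2, axis=-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Symmetric normalization S = D^{-1/2} W D^{-1/2}.
    deg = np.maximum(W.sum(axis=1), 1e-12)
    inv_sqrt = 1.0 / np.sqrt(deg)
    S = W * inv_sqrt[:, None] * inv_sqrt[None, :]
    # Closed-form fixed point of f = alpha*S*f + (1-alpha)*y;
    # the (1-alpha) scale factor does not change the ranking order.
    y = np.zeros(n)
    y[query_index] = 1.0
    f = np.linalg.solve(np.eye(n) - alpha * S, y)
    return np.argsort(-f)  # item indices, most relevant first
```

In a retrieval setting, `features` would hold the stacked sketch and 3D-model descriptors, and the returned order restricted to model indices gives the ranked list; the closed-form solve is equivalent to iterating the diffusion to convergence.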
Pages: 121584-121595
Page count: 12