Multi-View Saliency Guided Deep Neural Network for 3-D Object Retrieval and Classification

Cited by: 56
Authors
Zhou, He-Yu [1 ]
Liu, An-An [1 ]
Nie, Wei-Zhi [1 ]
Nie, Jie [2 ]
Affiliations
[1] Tianjin Univ, Sch Elect & Informat Engn, Tianjin 300072, Peoples R China
[2] Ocean Univ China, Coll Informat Sci & Engn, Qingdao 266100, Peoples R China
Keywords
Three-dimensional displays; Solid modeling; Visualization; Cameras; Feature extraction; Computational modeling; Shape; 3D object retrieval; 3D object classification; multi-view learning; saliency analysis; 3D MODEL RETRIEVAL; DESCRIPTORS; RECOGNITION;
DOI
10.1109/TMM.2019.2943740
Chinese Library Classification
TP [automation technology; computer technology];
Discipline Code
0812;
Abstract
In this paper, we propose the multi-view saliency guided deep neural network (MVSG-DNN) for 3D object retrieval and classification. The method consists of three key modules. First, the model projection rendering module captures multiple views of a 3D object. Second, the visual context learning module applies a basic convolutional neural network to extract visual features from individual views and then employs a saliency LSTM to adaptively select representative views based on the multi-view context. Finally, with this information, the multi-view representation learning module compiles 3D object descriptors with the designed classification LSTM for 3D object retrieval and classification. The proposed MVSG-DNN makes two main contributions: 1) it jointly realizes the selection of representative views and the similarity measure by fully exploiting multi-view context; 2) it discovers the discriminative structure of a multi-view sequence without the constraints of specific camera settings. Consequently, it supports flexible 3D object retrieval and classification in real applications by avoiding required camera settings. Extensive comparison experiments on ModelNet10, ModelNet40, and ShapeNetCore55 demonstrate the superiority of MVSG-DNN over state-of-the-art methods.
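The three-module pipeline described above can be sketched schematically. This is a minimal illustrative sketch, not the authors' implementation: the per-view CNN, saliency LSTM, and classification LSTM are replaced by stand-in linear maps, a softmax saliency score, and weighted pooling, and all weights and dimensions below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_view_features(views, W_cnn):
    # Stand-in for the per-view CNN: a linear map followed by ReLU.
    return np.maximum(views @ W_cnn, 0.0)

def saliency_select(feats, w_sal, top_k):
    # Stand-in for the saliency LSTM: score each view, softmax-normalize,
    # and keep the top_k most salient views.
    scores = feats @ w_sal
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    idx = np.argsort(weights)[::-1][:top_k]
    return feats[idx], weights[idx]

def aggregate(sel_feats, sel_weights):
    # Stand-in for the classification LSTM: saliency-weighted pooling of the
    # selected view features into a single object descriptor.
    return (sel_weights[:, None] * sel_feats).sum(axis=0) / sel_weights.sum()

# Hypothetical sizes: 12 rendered views per object, 32-D raw views, 16-D features.
n_views, view_dim, feat_dim = 12, 32, 16
views = rng.normal(size=(n_views, view_dim))   # module 1: projection rendering
W_cnn = rng.normal(size=(view_dim, feat_dim))
w_sal = rng.normal(size=feat_dim)

feats = extract_view_features(views, W_cnn)               # module 2: visual features
sel_feats, sel_weights = saliency_select(feats, w_sal, 4) # module 2: view selection
descriptor = aggregate(sel_feats, sel_weights)            # module 3: object descriptor
print(descriptor.shape)  # (16,)
```

The resulting descriptor could then be fed to a classifier or compared by cosine distance for retrieval; the key point is that view selection depends on the features of all views jointly, not on any fixed camera configuration.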
Pages: 1496 - 1506
Number of pages: 11