3D2SeqViews: Aggregating Sequential Views for 3D Global Feature Learning by CNN With Hierarchical Attention Aggregation

Cited by: 145
Authors
Han, Zhizhong [1 ,2 ]
Lu, Honglei [1 ]
Liu, Zhenbao [3 ]
Vong, Chi-Man [4 ]
Liu, Yu-Shen [1 ,5 ]
Zwicker, Matthias [6 ]
Han, Junwei [7 ]
Chen, C. L. Philip [8 ]
Affiliations
[1] Tsinghua Univ, Sch Software, Beijing, Peoples R China
[2] Univ Maryland, Dept Comp Sci, College Pk, MD 20737 USA
[3] Northwestern Polytech Univ, Sch Aeronaut, Xian 710072, Shaanxi, Peoples R China
[4] Univ Macau, Dept Comp & Informat Sci, Macau 99999, Peoples R China
[5] Beijing Natl Res Ctr Informat Sci & Technol, Beijing, Peoples R China
[6] Univ Maryland, College Pk, MD 20737 USA
[7] Northwestern Polytech Univ, Sch Automat, Xian 710072, Shaanxi, Peoples R China
[8] Univ Macau, Fac Sci & Technol, Macau 99999, Peoples R China
Funding
US National Science Foundation; National Natural Science Foundation of China; National Key Research and Development Program of China;
Keywords
3D global feature learning; view aggregation; sequential views; hierarchical attention aggregation; CNN; NETWORK;
DOI
10.1109/TIP.2019.2904460
CLC number
TP18 [Theory of Artificial Intelligence];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Learning 3D global features by aggregating multiple views is important for 3D shape analysis. Pooling is widely used to aggregate views in deep learning models; however, pooling disregards much of the content information within views and the spatial relationship among them, which limits the discriminability of the learned features. To resolve this issue, 3D to Sequential Views (3D2SeqViews) is proposed to aggregate sequential views more effectively using convolutional neural networks with a novel hierarchical attention aggregation. Specifically, the content information within each view is first encoded. Then, the encoded view content and the sequential spatiality among the views are aggregated simultaneously by the hierarchical attention aggregation, where view-level attention and class-level attention hierarchically weight the sequential views and the shape classes. View-level attention is learned to indicate how much attention each shape class pays to each view, and it weights the sequential views through a novel recursive view integration. Recursive view integration learns the semantic meaning of the view sequence and is robust to the position of the first view. Furthermore, class-level attention is introduced to describe how much attention is paid to each shape class, which exploits the discriminative ability of the fine-tuned network. 3D2SeqViews learns more discriminative features than state-of-the-art methods, which leads to superior results in shape classification and retrieval on three large-scale benchmarks.
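The hierarchical weighting described in the abstract can be illustrated with a short sketch. The PyTorch-style code below is a minimal, simplified illustration and not the authors' implementation: the paper's recursive view integration is replaced by a class-conditioned weighted sum over per-view features, and all names (HierarchicalAttention, view_feats, feat_dim, num_views, num_classes) are hypothetical.

```python
# Minimal sketch of hierarchical attention aggregation over sequential views.
# Assumption: each of the V views of a shape has already been encoded by a CNN
# into a D-dimensional feature. This simplifies the paper's recursive view
# integration to a class-conditioned weighted sum; it is illustrative only.
import torch
import torch.nn as nn


class HierarchicalAttention(nn.Module):
    def __init__(self, feat_dim: int, num_views: int, num_classes: int):
        super().__init__()
        # View-level attention: how much attention each shape class pays to each view.
        self.view_attention = nn.Parameter(torch.zeros(num_classes, num_views))
        # Class-level attention: how much attention is paid to each shape class.
        self.class_attention = nn.Parameter(torch.zeros(num_classes))
        # Classifier scoring the per-class aggregated global feature.
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, view_feats: torch.Tensor) -> torch.Tensor:
        # view_feats: (batch, num_views, feat_dim), one encoded feature per view.
        view_w = torch.softmax(self.view_attention, dim=1)                # (C, V)
        # Class-conditioned aggregation: one global feature per class hypothesis.
        per_class_feat = torch.einsum('cv,bvd->bcd', view_w, view_feats)  # (B, C, D)
        scores = self.classifier(per_class_feat)                          # (B, C, C)
        # Keep the score of class c computed from the feature aggregated for class c.
        logits = torch.diagonal(scores, dim1=1, dim2=2)                   # (B, C)
        # Class-level attention re-weights the per-class scores.
        return logits * torch.softmax(self.class_attention, dim=0)


# Usage sketch: 12 views per shape, 512-D view features, 40 classes (e.g. ModelNet40).
model = HierarchicalAttention(feat_dim=512, num_views=12, num_classes=40)
scores = model(torch.randn(2, 12, 512))
print(scores.shape)  # torch.Size([2, 40]) attention-weighted class scores
```

The weighted sum stands in for the paper's recursive view integration, which additionally encodes the order of the sequential views; the two attention levels (per-view per-class, then per-class) mirror the hierarchical aggregation described above.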
Pages: 3986-3999
Number of pages: 14