3D mesh transformer: A hierarchical neural network with local shape tokens

Cited by: 3
Authors
Chen, Yu [1 ]
Zhao, Jieyu [1 ]
Huang, Lingfeng [1 ]
Chen, Hao [1 ]
Affiliation
[1] Ningbo Univ, Fac Elect Engn & Comp Sci, Ningbo 315000, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
self-attention networks; 3D mesh Transformer; polynomial fitting; surface subdivision; multilayer Transformer;
DOI
10.1016/j.neucom.2022.09.138
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Self-attention networks have revolutionized Natural Language Processing (NLP) and are making impressive strides in image analysis tasks such as image classification and object detection. Inspired by this success, we design a novel self-attention mechanism between local shapes and build a shape Transformer. We split the 3D mesh model into shape patches, which we call shape tokens, and provide polynomial fitting representations of these patches as input to the shape Transformer. Each shape token encodes local geometric information and plays a role analogous to a token (word) in NLP. Simplification of the mesh model provides a hierarchical multiresolution structure, which allows us to realize feature learning with a multilayer Transformer. We treat the high-level features formed by the shape Transformer as visual tokens and propose a vector-type self-attention mechanism to construct a 3D visual Transformer. Finally, we realize a hierarchical network structure based on local shape tokens and high-level visual tokens. Experiments show that our fusion network, combining a 3D shape Transformer with explicit local shape context augmentation and a 3D visual Transformer with multi-level structural feature learning, achieves excellent performance on shape classification and part segmentation tasks. (c) 2022 Elsevier B.V. All rights reserved.
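The abstract names two concrete ingredients: local patches summarized by polynomial fitting (the "shape tokens") and a vector-type self-attention over tokens. The sketch below is a minimal illustration of both ideas, not the authors' released code; PyTorch is assumed, the quadratic height-field fit is one common choice for local surface fitting, and all names (fit_patch_polynomial, VectorSelfAttention) are hypothetical. The vector attention follows the general point-cloud style of per-channel weights from query-key differences rather than scalar dot products.

```python
import torch
import torch.nn as nn

def fit_patch_polynomial(points):
    """Least-squares fit z = f(x, y) over one local mesh patch; the
    coefficient vector serves as that patch's 'shape token'.
    (Assumption: bivariate quadratic fit in a local frame.)"""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # Design matrix for a bivariate quadratic: [1, x, y, x^2, xy, y^2]
    A = torch.stack([torch.ones_like(x), x, y, x * x, x * y, y * y], dim=1)
    coeffs = torch.linalg.lstsq(A, z.unsqueeze(1)).solution  # (6, 1)
    return coeffs.squeeze(1)  # 6-dim shape token

class VectorSelfAttention(nn.Module):
    """Vector-type self-attention: attention weights are computed
    per channel from query-key differences via a small MLP, instead
    of a single scalar dot-product score per token pair."""
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.weight_mlp = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, tokens):  # tokens: (N, dim)
        q, k, v = self.q(tokens), self.k(tokens), self.v(tokens)
        rel = q.unsqueeze(1) - k.unsqueeze(0)        # (N, N, dim) pairwise diffs
        attn = torch.softmax(self.weight_mlp(rel), dim=1)  # per-channel weights
        return (attn * v.unsqueeze(0)).sum(dim=1)    # (N, dim) refined tokens

# Usage: tokens from 100 patches, each a 6-dim polynomial coefficient vector.
tokens = torch.stack([fit_patch_polynomial(torch.randn(30, 3)) for _ in range(100)])
out = VectorSelfAttention(6)(tokens)  # (100, 6)
```

In the paper's hierarchy, features of this kind are then aggregated across mesh-simplification levels; the sketch covers only the token construction and the attention primitive.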
Pages: 328-340
Number of pages: 13