Learning Geometric Information via Transformer Network for Key-Points Based Motion Segmentation

Times Cited: 0
Authors
Li, Qiming [1 ]
Cheng, Jinghang [1 ]
Gao, Yin [1 ]
Li, Jun [1 ]
Affiliations
[1] Chinese Acad Sci, Haixi Inst, Quanzhou Inst Equipment Mfg, Lab Robot & Intelligent Syst, Quanzhou 362216, Fujian, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Geometric information embedding; transformer; self-attention; motion segmentation; VIDEO OBJECT SEGMENTATION; MULTIPLE-STRUCTURE DATA; CONSENSUS; TRACKING; GRAPHS;
DOI
10.1109/TCSVT.2024.3382363
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic and Communication Technology]
Discipline Codes
0808; 0809
Abstract
With the emergence of Vision Transformers, attention-based modules have demonstrated comparable or superior performance to CNNs on various vision tasks. However, limited research has explored the potential of the self-attention module for learning the global and local geometric information needed in key-points based motion segmentation. This paper therefore presents a new method, named GIET, that exploits geometric information in a Transformer network for key-points based motion segmentation. Specifically, two novel local geometric information embedding modules are developed in GIET. Unlike traditional convolution operators, which model the local geometric information of key-points within a fixed-size spatial neighborhood, we develop a Neighbor Embedding Module (NEM) that aggregates the feature maps of the k-Nearest Neighbors (k-NN) of each point according to the semantic similarity between the input key-points. NEM not only strengthens the network's ability to extract local features from the points' neighborhoods, but also characterizes the semantic affinities between points belonging to the same moving object. Furthermore, to investigate the geometric relationships between the points and each motion, a Centroid Embedding Module (CEM) is devised to aggregate the feature maps of the cluster centroids that correspond to the moving objects. CEM can effectively capture the semantic similarity between points and these centroids. Subsequently, the multi-head self-attention mechanism is exploited to learn the global geometric information of all the key-points from the aggregated feature maps produced by the two embedding modules. Compared with convolution operators or the self-attention mechanism alone, the proposed simple Transformer-like architecture makes better use of both the local and global geometric properties of the input sparse key-points. Finally, the motion segmentation task is formulated as a subspace clustering problem within the Transformer architecture. Experimental results on three motion segmentation datasets, KT3DMoSeg, AdelaideRMF, and FBMS, demonstrate that GIET achieves state-of-the-art performance.
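The abstract describes a pipeline of two local embedding steps (NEM and CEM) followed by global multi-head self-attention over the key-point features. The PyTorch sketch below illustrates that pipeline at a high level only; it is not the authors' implementation, and every class name, tensor shape, and hyper-parameter (feature dimension, k, number of motions, attention heads) is an assumption made purely for illustration.

```python
# Illustrative sketch of NEM-style k-NN aggregation, CEM-style centroid aggregation,
# and global multi-head self-attention. Shapes and hyper-parameters are assumptions.
import torch
import torch.nn as nn


class NeighborEmbedding(nn.Module):
    """NEM-style step: summarize each key-point's k nearest neighbours in feature space."""

    def __init__(self, dim: int, k: int = 8):
        super().__init__()
        self.k = k
        self.proj = nn.Linear(2 * dim, dim)  # fuse point feature with neighbourhood summary

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, N, D) features of N key-points
        dist = torch.cdist(x, x)                                      # (B, N, N) pairwise distances
        idx = dist.topk(self.k + 1, largest=False).indices[..., 1:]   # (B, N, k), drop self-match
        batch = torch.arange(x.size(0), device=x.device)[:, None, None]
        neighbours = x[batch, idx]                                    # (B, N, k, D) neighbour features
        agg = neighbours.mean(dim=2)                                  # (B, N, D) neighbourhood summary
        return self.proj(torch.cat([x, agg], dim=-1))


class CentroidEmbedding(nn.Module):
    """CEM-style step: soft-assign points to learned centroids (one per putative motion)."""

    def __init__(self, dim: int, num_motions: int = 4):
        super().__init__()
        self.centroids = nn.Parameter(torch.randn(num_motions, dim))
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # similarity of every point to every centroid, used as soft assignment weights
        attn = torch.softmax(x @ self.centroids.t(), dim=-1)          # (B, N, M)
        ctx = attn @ self.centroids                                   # (B, N, D) centroid context
        return self.proj(torch.cat([x, ctx], dim=-1))


class GIETBlock(nn.Module):
    """Local geometric embeddings followed by global multi-head self-attention."""

    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.nem = NeighborEmbedding(dim)
        self.cem = CentroidEmbedding(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.cem(self.nem(x))          # local geometric information
        out, _ = self.attn(x, x, x)        # global relations among all key-points
        return out


# Example usage with random key-point features: 2 sequences, 128 key-points, 64-dim features.
feats = torch.randn(2, 128, 64)
out = GIETBlock()(feats)                   # (2, 128, 64) locally and globally contextualized features
```

In the paper these contextualized features would feed a subspace clustering head that assigns key-points to motions; that stage is omitted here since the abstract does not specify its form.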
Pages: 7856-7869
Number of pages: 14