Learning Geometric Information via Transformer Network for Key-Points Based Motion Segmentation

Times Cited: 1
Authors
Li, Qiming [1 ]
Cheng, Jinghang [1 ]
Gao, Yin [1 ]
Li, Jun [1 ]
Affiliations
[1] Chinese Acad Sci, Haixi Inst, Quanzhou Inst Equipment Mfg, Lab Robot & Intelligent Syst, Quanzhou 362216, Fujian, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Geometric information embedding; transformer; self-attention; motion segmentation; VIDEO OBJECT SEGMENTATION; MULTIPLE-STRUCTURE DATA; CONSENSUS; TRACKING; GRAPHS;
DOI
10.1109/TCSVT.2024.3382363
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics & Communication Technology];
Discipline Codes
0808; 0809;
Abstract
With the emergence of Vision Transformers, attention-based modules have demonstrated performance comparable or superior to CNNs on various vision tasks. However, limited research has explored the potential of the self-attention module for learning the global and local geometric information needed in key-points based motion segmentation. This paper thus presents a new method, named GIET, that utilizes geometric information in the Transformer network for key-points based motion segmentation. Specifically, two novel local geometric information embedding modules are developed in GIET. Unlike traditional convolution operators, which model the local geometric information of key-points within a fixed-size spatial neighborhood, we develop a Neighbor Embedding Module (NEM) that aggregates the feature maps of the k-Nearest Neighbors (k-NN) of each point according to the semantic similarity between the input key-points. NEM not only strengthens the network's ability to extract local features from each point's neighborhood, but also characterizes the semantic affinities between points belonging to the same moving object. Furthermore, to investigate the geometric relationships between the points and each motion, a Centroid Embedding Module (CEM) is devised to aggregate the feature maps of the cluster centroids that correspond to the moving objects. CEM effectively captures the semantic similarity between points and these centroids. Subsequently, the multi-head self-attention mechanism is exploited to learn the global geometric information of all the key-points from the aggregated feature maps produced by the two embedding modules. Compared to convolution operators or the self-attention mechanism alone, the proposed simple Transformer-like architecture can optimally exploit both the local and global geometric properties of the input sparse key-points.
Finally, the motion segmentation task is formulated as a subspace clustering problem using the Transformer architecture. The experimental results on three motion segmentation datasets, including KT3DMoSeg, AdelaideRMF, and FBMS, demonstrate that GIET achieves state-of-the-art performance.
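The two embedding ideas described in the abstract can be illustrated with a minimal NumPy sketch. This is an illustrative assumption, not the authors' implementation: the function names, the choice of cosine similarity for the semantic-affinity measure, and the simple averaging/concatenation aggregations are all placeholders standing in for the learned feature maps in GIET.

```python
import numpy as np

def knn_neighbor_embedding(feats: np.ndarray, k: int = 4) -> np.ndarray:
    """NEM-style aggregation: average each point's feature with those of
    its k nearest neighbors, where neighbors are selected by cosine
    similarity in feature space (semantic similarity) rather than by a
    fixed-size spatial neighborhood."""
    # Normalize rows so the dot product gives cosine similarity.
    norm = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = norm @ norm.T                      # (N, N) similarity matrix
    np.fill_diagonal(sim, -np.inf)           # exclude each point itself
    idx = np.argsort(-sim, axis=1)[:, :k]    # k most similar points per row
    # feats[idx] has shape (N, k, D); average point + neighbors.
    return (feats + feats[idx].sum(axis=1)) / (k + 1)

def centroid_embedding(feats: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """CEM-style aggregation: append to each point's feature the mean
    feature (centroid) of its putative motion cluster."""
    uniq, inv = np.unique(labels, return_inverse=True)
    centroids = np.stack([feats[labels == c].mean(axis=0) for c in uniq])
    return np.concatenate([feats, centroids[inv]], axis=1)
```

In GIET itself, the outputs of these two modules feed multi-head self-attention over all key-points, and the segmentation is then solved as subspace clustering; this sketch only mirrors the neighbor/centroid aggregation step.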
Pages: 7856-7869 (14 pages)