Single-Stage Multi-Human Parsing via Point Sets and Center-Based Offsets

Cited by: 6
Authors
Chu, Jiaming [1 ]
Jin, Lei [1 ]
Fan, Xiaojin [2 ]
Teng, Yinglei [1 ]
Wei, Yunchao [3 ]
Fang, Yuqiang [4 ]
Xing, Junliang [5 ]
Zhao, Jian [6 ,7 ,8 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Beijing, Peoples R China
[2] Beijing Inst Technol, Beijing, Peoples R China
[3] Beijing Jiaotong Univ, Beijing, Peoples R China
[4] Space Engn Univ, Beijing, Peoples R China
[5] Tsinghua Univ, Beijing, Peoples R China
[6] Inst North Elect Equipment, Beijing, Peoples R China
[7] Intelligent Game & Decis Lab, Beijing, Peoples R China
[8] Peng Cheng Lab, Shenzhen, Peoples R China
Source
PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023 | 2023
Funding
National Key R&D Program of China;
Keywords
neural networks; multi-human parsing; point sets; offsets;
DOI
10.1145/3581783.3611993
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This work studies the multi-human parsing problem. Existing methods, following either top-down or bottom-up two-stage paradigms, usually incur expensive computational costs. We instead present a high-performance Single-stage Multi-human Parsing (SMP) deep architecture that decouples multi-human parsing into two fine-grained sub-problems, i.e., locating the human body and its parts. SMP leverages point features at barycenter positions to obtain their segmentation and then generates a series of offsets from the barycenter of the human body to the barycenters of the parts, thus matching human bodies and parts without a grouping process. Within the SMP architecture, we propose a Refined Feature Retain module to extract the global features of instances through generated mask attention, and a Mask of Interest Reclassify module, a trainable plug-in, to refine the classification results with the predicted segmentation. Extensive experiments on the MHPv2.0 dataset demonstrate the effectiveness and efficiency of the proposed method, which surpasses the state-of-the-art method by 2.1% in AP^p_50, 1.0% in AP^p_vol, and 1.2% in PCP_50. Moreover, SMP also achieves superior performance on DensePose-COCO, verifying the generalization of the model. In particular, the proposed method requires fewer training epochs and a less complex model architecture. Our code is released at https://github.com/cjm-sfw/SMP.
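The abstract describes matching human bodies to their parts by adding predicted offsets to each body barycenter instead of running a separate grouping step. The Python sketch below only illustrates that idea; the array shapes, function name, and nearest-center assignment rule are assumptions made for illustration and are not taken from the SMP implementation.

# Illustrative sketch (not the authors' code): assigning detected part
# barycenters to human instances via center-based offsets. All names,
# shapes, and the nearest-center rule are assumptions for illustration.
import numpy as np

def match_parts_to_bodies(body_centers, body_offsets, part_centers):
    """
    body_centers: (N, 2) predicted human-body barycenters (x, y).
    body_offsets: (N, P, 2) predicted offsets from each body barycenter
                  to the barycenters of its P part categories.
    part_centers: (M, 2) detected part barycenters (x, y).

    Returns a list of (body_index, part_index) pairs: each detected part
    is assigned to the body whose offset-predicted part location lies
    closest, so no bottom-up grouping step is needed.
    """
    # Locations where each body expects its parts to be: (N, P, 2).
    expected = body_centers[:, None, :] + body_offsets

    matches = []
    for m, part in enumerate(part_centers):
        # Distance from this part barycenter to every expected location.
        dists = np.linalg.norm(expected - part, axis=-1)  # (N, P)
        n, _ = np.unravel_index(np.argmin(dists), dists.shape)
        matches.append((int(n), m))
    return matches

# Toy example: two people, each predicting offsets to two part categories.
body_centers = np.array([[50.0, 100.0], [200.0, 110.0]])
body_offsets = np.array([[[0.0, -40.0], [0.0, 30.0]],
                         [[0.0, -42.0], [0.0, 28.0]]])
part_centers = np.array([[51.0, 61.0], [199.0, 137.0]])
print(match_parts_to_bodies(body_centers, body_offsets, part_centers))
# [(0, 0), (1, 1)] -> first part goes to person 0, second to person 1.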
Pages: 1863-1873
Number of pages: 11