UViT: Efficient and lightweight U-shaped hybrid vision transformer for human pose estimation

Cited by: 0
Authors
Li B. [1 ,2 ]
Tang S. [1 ]
Li W. [1 ,2 ]
Affiliations
[1] School of Information and Control Engineering, China University of Mining and Technology, Xuzhou
[2] School of Mechanical and Electronic Engineering, Suzhou University, Suzhou
Keywords
attention mechanism; context enhancement; lightweight network; multi-branch structure; pose estimation
DOI: 10.3233/JIFS-231440
Abstract
Pose estimation plays a crucial role in human-centered vision applications and has advanced significantly in recent years. However, prevailing approaches rely on extremely complex structural designs to obtain high scores on benchmark datasets, which hampers deployment on edge devices. This study investigates efficient and lightweight human pose estimation. The context enhancement module of the U-shaped structure is improved to strengthen multi-scale local modeling capability, and a lightweight transformer block is designed to enhance local feature extraction and global modeling ability. Finally, a lightweight pose estimation network, the U-shaped Hybrid Vision Transformer (UViT), is developed. The minimal network, UViT-T, achieves a 3.9% improvement in AP score on the COCO validation set with fewer model parameters and lower computational complexity than the best-performing V2 version of the MobileNet series. Specifically, with an input size of 384×288, UViT-T achieves an AP score of 70.2 on the COCO test-dev set with only 1.52 M parameters and 2.32 GFLOPs, and its inference speed is approximately twice that of general-purpose networks. This study provides an efficient and lightweight design idea and method for the human pose estimation task and offers theoretical support for its deployment on edge devices. © 2024 IOS Press. All rights reserved.
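The abstract describes a hybrid block that combines local feature extraction with transformer-style global modeling. The paper's actual architecture is not given here, so the following is only an illustrative NumPy sketch of that general idea: a local branch (a simple moving average standing in for a depthwise convolution) fused with a single-head self-attention branch over the token sequence. All shapes, the kernel size `k`, and the random projection weights are hypothetical placeholders, not the UViT design.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def local_branch(x, k=3):
    # moving average over the token axis: a crude stand-in for a
    # depthwise convolution capturing local context
    n, _ = x.shape
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)), mode="edge")
    return np.stack([xp[i:i + k].mean(axis=0) for i in range(n)])

def global_branch(x, seed=0):
    # single-head self-attention: every token attends to every other
    # token (global modeling); weights are untrained random placeholders
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    attn = softmax(q @ k.T / np.sqrt(d))  # rows sum to 1
    return attn @ v

def hybrid_block(x):
    # fuse the local and global paths with a residual connection
    return x + local_branch(x) + global_branch(x)

tokens = np.ones((6, 8))      # 6 tokens with an 8-dim embedding
out = hybrid_block(tokens)
print(out.shape)              # shape is preserved: (6, 8)
```

The residual fusion keeps the block's input and output shapes identical, which is what lets such blocks be stacked inside a U-shaped encoder-decoder without extra projection layers.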
Pages: 8345-8359 (14 pages)
Related papers (50 in total)
[31]   FALNet: flow-based attention lightweight network for human pose estimation [J].
Xiao, Degui ;
Liu, Jiahui ;
Li, Jiazhi .
JOURNAL OF ELECTRONIC IMAGING, 2023, 32 (05)
[32]   Dual-Path Transformer for 3D Human Pose Estimation [J].
Zhou, Lu ;
Chen, Yingying ;
Wang, Jinqiao .
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (05) :3260-3270
[33]   Hybrid Refinement-Correction Heatmaps for Human Pose Estimation [J].
Kamel, Aouaidjia ;
Sheng, Bin ;
Li, Ping ;
Kim, Jinman ;
Feng, David Dagan .
IEEE TRANSACTIONS ON MULTIMEDIA, 2021, 23 :1330-1342
[34]   Knowledge-Embedded Transformer for 3-D Human Pose Estimation [J].
Chen, Shu ;
He, Ying .
IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2025, 74
[35]   EfficientPose: Efficient human pose estimation with neural architecture search [J].
Zhang, Wenqiang ;
Fang, Jiemin ;
Wang, Xinggang ;
Liu, Wenyu .
COMPUTATIONAL VISUAL MEDIA, 2021, 7 (03) :335-347
[37]   VHR-BirdPose: Vision Transformer-Based HRNet for Bird Pose Estimation with Attention Mechanism [J].
He, Runang ;
Wang, Xiaomin ;
Chen, Huazhen ;
Liu, Chang .
ELECTRONICS, 2023, 12 (17)
[38]   HP-YOLO: A Lightweight Real-Time Human Pose Estimation Method [J].
Tu, Haiyan ;
Qiu, Zhengkun ;
Yang, Kang ;
Tan, Xiaoyue ;
Zheng, Xiujuan .
APPLIED SCIENCES-BASEL, 2025, 15 (06)
[39]   Lightweight Human Pose Estimation Based on Densely Guided Self-Knowledge Distillation [J].
Wu, Mingyue ;
Zhao, Zhong-Qiu ;
Li, Jiajun ;
Tian, Weidong .
ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT II, 2023, 14255 :421-433
[40]   Pruning-guided feature distillation for an efficient transformer-based pose estimation model [J].
Kim, Dong-hwi ;
Lee, Dong-hun ;
Kim, Aro ;
Jeong, Jinwoo ;
Lee, Jong Taek ;
Kim, Sungjei ;
Park, Sang-hyo .
IET COMPUTER VISION, 2024, 18 (06) :745-758