SACANet: end-to-end self-attention-based network for 3D clothing animation

Times Cited: 0
Authors
Chen, Yunxi [1]
Cao, Yuanjie [1]
Fang, Fei [1,2]
Huang, Jin [1,2]
Hu, Xinrong [1]
He, Ruhan [1,2]
Zhang, Junjie [1]
Affiliations
[1] Wuhan Textile University, School of Computer Science and Artificial Intelligence, Wuhan, China
[2] Wuhan Textile University, Hubei Provincial Engineering Research Center of Intelligent Textile and Fashion, Wuhan, China
Keywords
3D clothing animation; Clothing wrinkles; Body-garment interpenetrations; Self-attention; Physics-constrained loss function
DOI
10.1007/s00371-024-03633-7
CLC Number
TP31 [Computer Software]
Subject Classification Code
081202; 0835
Abstract
In real-time applications such as virtual try-on and video games, achieving realistic character clothing animation is an active research area. Existing learning-based methods aim to simulate realistic clothing effects while meeting real-time requirements. However, clothing deformation has diverse sources: human body shape, pose, and clothing style all influence how realistic the deformation looks. As a result, existing clothing animation methods often suffer from body-garment interpenetrations and insufficient clothing wrinkles. In this paper, we propose SACANet, a new end-to-end self-attention-based two-level network architecture for clothing animation that addresses these issues. Specifically, we employ a physics-constrained loss function that enables our model to resolve body-garment interpenetrations automatically, without any post-processing step. In addition, within the two-level network we use an effective self-attention mechanism, cross-covariance attention, to strengthen the model's ability to learn geometric features and capture realistic clothing deformations. The base deformation network learns the global deformation of the clothing by extracting global geometric features from the input data; the detail deformation network learns local wrinkle details by fusing pose-specific features related to body shape and clothing style. A series of experiments comprehensively evaluates and compares the results and demonstrates the superiority of our method in simulating realistic clothing animation. Code is available at https://github.com/JUNOHINATA/SACANet.
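The cross-covariance attention named in the abstract is the channel-wise self-attention introduced by XCiT: the softmax attention map is computed between feature channels rather than between tokens, so its cost scales linearly with the number of tokens (here, mesh vertices). Below is a minimal PyTorch sketch of one such layer; it is not the SACANet implementation, and the module name, head count, and tensor layout are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossCovarianceAttention(nn.Module):
    """Minimal cross-covariance attention (XCA) sketch.

    Unlike standard self-attention, the softmax map is taken over feature
    channels (d_h x d_h) instead of tokens (N x N), so the cost grows
    linearly with the number of vertices/tokens.
    """

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # Learnable per-head temperature, as in XCiT.
        self.temperature = nn.Parameter(torch.ones(num_heads, 1, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, N, d) -- e.g., N garment vertices with d-dim features.
        B, N, d = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, d // self.num_heads)
        q, k, v = qkv.permute(2, 0, 3, 4, 1).unbind(0)  # each (B, h, d_h, N)

        # L2-normalize each channel across tokens; the attention map then
        # holds cosine similarities between channels.
        q = F.normalize(q, dim=-1)
        k = F.normalize(k, dim=-1)

        attn = (q @ k.transpose(-2, -1)) * self.temperature  # (B, h, d_h, d_h)
        attn = attn.softmax(dim=-1)

        out = (attn @ v).permute(0, 3, 1, 2).reshape(B, N, d)  # back to (B, N, d)
        return self.proj(out)
```

A layer like this can provide global context to per-vertex features in both the base and detail networks, which is what makes channel attention attractive at garment-mesh vertex counts.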
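The physics-constrained loss is described only at a high level here, so the sketch below shows one common way a collision term is formulated in learning-based cloth models: a margin penalty on garment vertices that fall inside the body along the nearest body normal. The function name, arguments, and margin value are assumptions for illustration, not SACANet's actual loss.

```python
import torch

def collision_loss(garment_verts: torch.Tensor,
                   body_verts: torch.Tensor,
                   body_normals: torch.Tensor,
                   eps: float = 2e-3) -> torch.Tensor:
    """Hypothetical body-garment interpenetration penalty.

    garment_verts: (B, Ng, 3) predicted garment vertex positions
    body_verts:    (B, Nb, 3) posed body vertex positions
    body_normals:  (B, Nb, 3) unit outward normals at body vertices
    eps: safety margin the garment should keep from the skin
    """
    # Nearest body vertex for every garment vertex (brute force for clarity).
    dists = torch.cdist(garment_verts, body_verts)           # (B, Ng, Nb)
    idx = dists.argmin(dim=-1)                                # (B, Ng)
    nearest = torch.gather(body_verts, 1,
                           idx.unsqueeze(-1).expand(-1, -1, 3))
    normals = torch.gather(body_normals, 1,
                           idx.unsqueeze(-1).expand(-1, -1, 3))

    # Signed distance along the body normal; negative means penetration.
    signed = ((garment_verts - nearest) * normals).sum(dim=-1)

    # Penalize vertices closer than eps (inside or nearly inside the body).
    return torch.relu(eps - signed).mean()
```

Weighted into the total training objective, a term like this lets the network itself learn to keep the garment outside the body, which is what removes the need for a post-processing collision-resolution step.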
Pages: 3829-3842
Number of Pages: 14