SACANet: end-to-end self-attention-based network for 3D clothing animation

Cited by: 0
Authors
Chen, Yunxi [1]
Cao, Yuanjie [1]
Fang, Fei [1,2]
Huang, Jin [1,2]
Hu, Xinrong [1]
He, Ruhan [1,2]
Zhang, Junjie [1]
Affiliations
[1] Wuhan Text Univ, Sch Comp Sci & Artificial Intelligence, Wuhan, Peoples R China
[2] Wuhan Text Univ, Hubei Prov Engn Res Ctr Intelligent Text & Fash, Wuhan, Peoples R China
Keywords
3D clothing animation; Clothing wrinkles; Body-garment interpenetrations; Self-attention; Physics-constrained loss function; ALGORITHM
DOI
10.1007/s00371-024-03633-7
CLC number
TP31 [Computer Software]
Subject classification codes
081202; 0835
Abstract
In real-time applications such as virtual try-on and video games, achieving realistic character clothing animation is an active research area. Existing learning-based methods aim to simulate realistic clothing effects while meeting real-time requirements. However, clothing deformation has diverse sources: human body shape, pose, and clothing style all influence its realism, and existing clothing animation methods often suffer from body-garment interpenetrations and insufficient clothing wrinkles. In this paper, we propose SACANet, a new end-to-end self-attention-based two-level network architecture for clothing animation that addresses these issues. Specifically, we employ a physics-constrained loss function that enables our model to automatically resolve body-garment interpenetrations without post-processing steps. In addition, within our two-level network we use an effective self-attention mechanism, cross-covariance attention, to strengthen the model's ability to learn geometric features and to capture more realistic clothing deformations. The base deformation network learns the global deformation of the clothing by extracting global geometric features from the input data, while the detail deformation network learns local wrinkle details by fusing features extracted by pose-specific models related to body shape and clothing style. A series of experiments comprehensively evaluates and compares the results, demonstrating the superiority of our method in simulating realistic clothing animation. Code: https://github.com/JUNOHINATA/SACANet.
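The cross-covariance attention named in the abstract computes attention across feature channels rather than across tokens, so its cost grows linearly with the number of garment vertices. Below is a minimal PyTorch sketch of such a block in the spirit of XCiT-style cross-covariance attention; the head count, layer names, and hyperparameters are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossCovarianceAttention(nn.Module):
    """Sketch of cross-covariance attention: queries and keys are
    L2-normalized along the token axis and attention is taken over
    channel pairs, giving cost linear in the token (vertex) count."""

    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.num_heads = num_heads
        # learnable temperature per head, as in XCiT-style blocks
        self.temperature = nn.Parameter(torch.ones(num_heads, 1, 1))
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                        # x: (B, N, C) vertex features
        B, N, C = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads)
        q, k, v = qkv.permute(2, 0, 3, 4, 1)     # each: (B, heads, C_head, N)
        q = F.normalize(q, dim=-1)               # L2-normalize along tokens
        k = F.normalize(k, dim=-1)
        # channel-by-channel attention map: (B, heads, C_head, C_head)
        attn = (q @ k.transpose(-2, -1)) * self.temperature
        attn = attn.softmax(dim=-1)
        out = (attn @ v).permute(0, 3, 1, 2).reshape(B, N, C)
        return self.proj(out)
```

Because the attention matrix is C_head x C_head rather than N x N, the block stays cheap even for dense garment meshes with many thousands of vertices.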
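The general idea behind a physics-constrained interpenetration penalty can be illustrated with a simple term that pushes each garment vertex to stay at least a small margin outside the body surface along the local normal. This is a hedged sketch of the concept only, not the paper's exact formulation; the `eps` margin and the nearest-vertex matching are assumptions.

```python
import torch

def collision_loss(garment_verts, body_verts, body_normals, eps=2e-3):
    """Penalize garment vertices that lie behind (or within eps of) the
    body surface, approximated by the nearest body vertex and its normal.

    garment_verts: (N, 3), body_verts: (M, 3), body_normals: (M, 3) unit normals.
    """
    d = torch.cdist(garment_verts, body_verts)    # (N, M) pairwise distances
    nearest = d.argmin(dim=1)                     # closest body vertex per garment vertex
    offset = garment_verts - body_verts[nearest]  # (N, 3)
    # signed distance along the body normal; values below eps mean (near-)penetration
    sdist = (offset * body_normals[nearest]).sum(dim=1)
    return torch.relu(eps - sdist).mean()
```

Adding such a term to the training loss lets the network learn to keep the garment outside the body, which is what allows interpenetrations to be resolved without a post-processing pass.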
Pages: 3829-3842
Number of pages: 14
Related papers
50 in total
  • [1] Hash Self-Attention End-to-End Network for Sketch-Based 3D Shape Retrieval
    Zhao X.; Pan X.; Liu F.; Zhang S.
    Jisuanji Fuzhu Sheji Yu Tuxingxue Xuebao/Journal of Computer-Aided Design and Computer Graphics, 2021, 33(5): 798-805
  • [2] On the localness modeling for the self-attention based end-to-end speech synthesis
    Yang, Shan; Lu, Heng; Kang, Shiyin; Xue, Liumeng; Xiao, Jinba; Su, Dan; Xie, Lei; Yu, Dong
    NEURAL NETWORKS, 2020, 125: 121-130
  • [3] END-TO-END NEURAL SPEAKER DIARIZATION WITH SELF-ATTENTION
    Fujita, Yusuke; Kanda, Naoyuki; Horiguchi, Shota; Xue, Yawen; Nagamatsu, Kenji; Watanabe, Shinji
    2019 IEEE AUTOMATIC SPEECH RECOGNITION AND UNDERSTANDING WORKSHOP (ASRU 2019), 2019: 296-303
  • [4] An End-to-end Topic-Enhanced Self-Attention Network for Social Emotion Classification
    Wang, Chang; Wang, Bang
    WEB CONFERENCE 2020: PROCEEDINGS OF THE WORLD WIDE WEB CONFERENCE (WWW 2020), 2020: 2210-2219
  • [5] End-to-End ASR with Adaptive Span Self-Attention
    Chang, Xuankai; Subramanian, Aswin Shanmugam; Guo, Pengcheng; Watanabe, Shinji; Fujita, Yuya; Omachi, Motoi
    INTERSPEECH 2020, 2020: 3595-3599
  • [6] DensSiam: End-to-End Densely-Siamese Network with Self-Attention Model for Object Tracking
    Abdelpakey, Mohamed H.; Shehata, Mohamed S.; Mohamed, Mostafa M.
    ADVANCES IN VISUAL COMPUTING, ISVC 2018, 2018, 11241: 463-473
  • [7] An End-to-End Blind Image Quality Assessment Method Using a Recurrent Network and Self-Attention
    Zhou, Mingliang; Lan, Xuting; Wei, Xuekai; Liao, Xingran; Mao, Qin; Li, Yutong; Wu, Chao; Xiang, Tao; Fang, Bin
    IEEE TRANSACTIONS ON BROADCASTING, 2023, 69(2): 369-377
  • [8] Efficient decoding self-attention for end-to-end speech synthesis
    Zhao, Wei; Xu, Li
    FRONTIERS OF INFORMATION TECHNOLOGY & ELECTRONIC ENGINEERING, 2022, 23(7): 1127-1138
  • [9] Image Restoration Based on End-to-End Unrolled Network
    Tao, Xiaoping; Zhou, Hao; Chen, Yueting
    PHOTONICS, 2021, 8(9)
  • [10] End-to-end infrared radiation sensing technique based on holography-guided visual attention network
    Zhai, Yingying; Huang, Haochong; Sun, Dexin; Panezai, Spozmai; Li, Zijian; Qiu, Kunfeng; Li, Mingxia; Zheng, Zhiyuan; Zhang, Zili
    OPTICS AND LASERS IN ENGINEERING, 2024, 178