Dynamic Style Generation of Clothing Based on Reinforcement Learning

Cited by: 0
Authors
Jiang Z. [1 ]
Qian J. [2 ]
Affiliations
[1] Fashion Institute, Shaanxi Fashion Engineering University, Xi'an, Shaanxi
[2] College of Textile and Clothing, Xinjiang University, Urumqi
Keywords
Clothing Design; Computer-Aided Design; Dynamic Style Generation of Clothing; Reinforcement Learning
DOI
10.14733/cadaps.2024.S23.159-174
Abstract
This article aims to establish and validate a dynamic model for clothing style generation that supports rapid style innovation and personalized customization in the clothing design sector. The study builds a clothing model grounded in CAD (Computer-Aided Design) technology and pairs it with an RL (Reinforcement Learning) algorithm for style generation. A comprehensive dataset of clothing CAD information and style reference samples is compiled and analyzed to construct a simulation environment for model training and evaluation. The findings reveal that, compared with conventional CAD design techniques and rule-based style generation methods, the dynamic clothing style generation model presented in this study exhibits superior style consistency, originality, and aesthetic appeal. The model can produce tailored clothing designs from specified design elements and style references, demonstrating a high degree of flexibility and adaptability. In conclusion, this research introduces an innovative design tool and model for the clothing industry, poised to streamline design processes, reduce costs, and foster sustainable industry growth. © 2024 U-turn Press LLC.
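
The abstract does not specify the RL algorithm, the reward, or the style parameterization, so the following is only a minimal illustrative sketch of the kind of loop it describes: a Gaussian policy proposes a vector of clothing style parameters (standing in for CAD-derived design features) and is updated with a REINFORCE-style rule toward designs that score well against style reference samples. The parameter names, the reward combining style consistency with a small novelty bonus, and the policy form are all assumptions for illustration, not the authors' implementation.

# Hypothetical sketch (assumed setup, not the paper's method): REINFORCE-style
# update of a Gaussian policy over clothing style parameters, scored against
# style reference samples.
import numpy as np

rng = np.random.default_rng(0)

N_PARAMS = 8  # e.g. collar, sleeve, hem, silhouette codes (hypothetical encoding)
# Stand-in for the style reference samples mentioned in the abstract.
reference_styles = rng.uniform(0.0, 1.0, size=(5, N_PARAMS))

def reward(style):
    """Assumed reward: closeness to the nearest reference style (consistency)
    plus a small bonus for not collapsing onto every reference (novelty)."""
    dists = np.linalg.norm(reference_styles - style, axis=1)
    return -dists.min() + 0.1 * dists.mean()

mean = np.full(N_PARAMS, 0.5)   # policy mean over style parameters
sigma = 0.1                     # fixed exploration noise
lr = 0.05
baseline = 0.0                  # running reward baseline to stabilize updates

for episode in range(2000):
    style = np.clip(mean + sigma * rng.standard_normal(N_PARAMS), 0.0, 1.0)
    r = reward(style)
    advantage = r - baseline
    baseline = 0.95 * baseline + 0.05 * r
    # REINFORCE gradient for a fixed-variance Gaussian policy: (a - mu) / sigma^2
    mean = np.clip(mean + lr * advantage * (style - mean) / sigma**2, 0.0, 1.0)

print("generated style parameters:", np.round(mean, 3))
print("reward of generated style:", round(float(reward(mean)), 3))

In this toy setting the policy mean drifts toward a region near the reference styles while the novelty term keeps it from copying any single sample; a full system would replace the flat parameter vector with the paper's CAD-based clothing model and the hand-written reward with evaluations from the simulation environment.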
Pages: 159-174
Number of pages: 15