HAIGEN: Towards Human-AI Collaboration for Facilitating Creativity and Style Generation in Fashion Design

Cited by: 4
Authors
Jiang, Jianan [1 ]
Wu, Di [1 ]
Deng, Hanhui [1 ]
Long, Yidan [2 ]
Tang, Wenyi [2 ]
Li, Xiang [2 ]
Liu, Can [3 ]
Jin, Zhanpeng [4 ]
Zhang, Wenlei
Qi, Tangquan [5 ]
Affiliations
[1] Hunan Univ, Data Intelligence & Serv Collaborat DISCO Lab, Changsha, Peoples R China
[2] Hunan Normal Univ, Coll Engn & Design, Changsha, Peoples R China
[3] City Univ Hong Kong, Sch Creat Media, Hong Kong, Peoples R China
[4] South China Univ Technol, Sch Future Technol, Guangzhou, Peoples R China
[5] Wondershare Technol, Shenzhen, Peoples R China
Source
PROCEEDINGS OF THE ACM ON INTERACTIVE MOBILE WEARABLE AND UBIQUITOUS TECHNOLOGIES-IMWUT | 2024 / Vol. 8 / Issue 3
Funding
National Natural Science Foundation of China;
Keywords
Human-AI Collaboration; Generative Artificial Intelligence; Personalized Fashion Design;
DOI
10.1145/3678518
Chinese Library Classification (CLC) Code
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812 ;
Abstract
The process of fashion design usually involves sketching, refining, and coloring, with designers drawing inspiration from various images to fuel their creative endeavors. However, conventional image search methods often yield irrelevant results, impeding the design process. Moreover, creating and coloring sketches can be time-consuming and demanding, acting as a bottleneck in the design workflow. In this work, we introduce HAIGEN (Human-AI collaboration for GENeration), an efficient fashion design system for human-AI collaboration developed to aid designers. Specifically, HAIGEN consists of four modules. The T2IM, located in the cloud, generates reference inspiration images directly from text prompts. With the three other modules situated locally, the I2SM converts an image material library in batch into a sketch material library rendered in a specific designer's style. The SRM recommends similar sketches from the generated library to designers for further refinement, and the STM colors the refined sketch according to the styles of the inspiration images. Through our system, any designer can perform local personalized fine-tuning while leveraging the powerful generation capabilities of large models in the cloud, streamlining the entire design development process. Because our approach integrates both cloud and local model deployment, it effectively safeguards design privacy by avoiding the need to upload designers' personalized local data. We validated the effectiveness of each module through extensive qualitative and quantitative experiments. User surveys also confirmed that HAIGEN offers significant advantages in design efficiency, positioning it as a new generation of aid tool for designers.
Pages: 27