CTR-Driven Advertising Image Generation with Multimodal Large Language Models

Cited by: 0
Authors
Chen, Xingye [1 ]
Feng, Wei [2 ]
Du, Zhenbang [1 ]
Wang, Weizhen [2 ]
Chen, Yanyin [2 ]
Wang, Haohan [2 ]
Liu, Linkai [3 ]
Li, Yaoyu [2 ]
Zhao, Jinyuan [2 ]
Li, Yu [2 ]
Zhang, Zheng [2 ]
Lv, Jingjing [2 ]
Shen, Junjie [2 ]
Lin, Zhangang [2 ]
Shao, Jingping [2 ]
Shao, Yuanjie [1 ]
You, Xinge [1 ]
Gao, Changxin [1 ]
Sang, Nong [1 ]
Affiliations
[1] Huazhong Univ Sci & Technol, Wuhan, Peoples R China
[2] JD COM, Beijing, Peoples R China
[3] Sun Yat Sen Univ, Shenzhen, Peoples R China
Source
PROCEEDINGS OF THE ACM WEB CONFERENCE 2025, WWW 2025 | 2025
Funding
National Key Research and Development Program of China;
Keywords
CTR-Driven; Advertising Image Generation; Online Advertising; Multimodal Large Language Models;
DOI
10.1145/3696410.3714836
Chinese Library Classification (CLC) Number
TP39 [Computer Applications];
Discipline Classification Codes
081203; 0835;
Abstract
In web data, advertising images are crucial for capturing user attention and improving advertising effectiveness. Most existing methods that generate backgrounds for products focus primarily on aesthetic quality and may therefore fail to achieve satisfactory online performance. To address this limitation, we explore the use of Multimodal Large Language Models (MLLMs) for generating advertising images with Click-Through Rate (CTR) as the primary optimization objective. First, we build targeted pre-training tasks and leverage a large-scale e-commerce multimodal dataset to equip MLLMs with initial capabilities for advertising image generation. To further improve the CTR of generated images, we propose a novel reward model that jointly utilizes multimodal features and accurately reflects user click preferences, and use it to fine-tune the pre-trained MLLMs through Reinforcement Learning (RL). Meanwhile, a product-centric preference optimization strategy is developed to ensure that the generated background content aligns with the product characteristics after fine-tuning, enhancing the overall relevance and effectiveness of the advertising images. Extensive experiments demonstrate that our method achieves state-of-the-art performance on both online and offline metrics. Our code and pre-trained models are publicly available at: https://github.com/Chenguoz/CAIG.
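To make the CTR-driven reward signal described in the abstract concrete, the sketch below shows a minimal multimodal reward model that fuses image and text features and predicts a click probability, which could then serve as a scalar reward during RL fine-tuning of the image generator. All module names, feature dimensions, and the fusion scheme are illustrative assumptions and are not taken from the paper or its released code.

# Minimal illustrative sketch (assumed design, not the paper's implementation):
# a multimodal CTR reward model that fuses pre-extracted image and text
# embeddings and predicts a click probability; the prediction for a generated
# advertising image can act as a scalar reward during RL fine-tuning.
import torch
import torch.nn as nn

class CTRRewardModel(nn.Module):
    def __init__(self, img_dim=768, txt_dim=768, hidden=512):
        super().__init__()
        # Project both modalities into a shared space, then score them jointly.
        self.img_proj = nn.Linear(img_dim, hidden)
        self.txt_proj = nn.Linear(txt_dim, hidden)
        self.scorer = nn.Sequential(
            nn.Linear(hidden * 2, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, img_emb, txt_emb):
        fused = torch.cat([self.img_proj(img_emb), self.txt_proj(txt_emb)], dim=-1)
        return torch.sigmoid(self.scorer(fused)).squeeze(-1)  # predicted CTR in [0, 1]

# Train on logged click data (hypothetical tensors stand in for real features).
model = CTRRewardModel()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.BCELoss()

img_emb = torch.randn(32, 768)               # image features of historical ad creatives
txt_emb = torch.randn(32, 768)               # product title / description features
clicks = torch.randint(0, 2, (32,)).float()  # observed click labels

optimizer.zero_grad()
loss = criterion(model(img_emb, txt_emb), clicks)
loss.backward()
optimizer.step()

# At RL fine-tuning time, the predicted CTR of a newly generated image
# (paired with its product text) would provide the reward signal.
with torch.no_grad():
    reward = model(torch.randn(1, 768), torch.randn(1, 768))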
Pages: 2262-2275
Number of pages: 14