Recommending Themes for Ad Creative Design via Visual-Linguistic Representations

Cited by: 9
Authors
Zhou, Yichao [1 ]
Mishra, Shaunak [2 ]
Verma, Manisha [2 ]
Bhamidipati, Narayan [2 ]
Wang, Wei [1 ]
Affiliations
[1] Univ Calif Los Angeles, Los Angeles, CA 90024 USA
[2] Yahoo Res, Sunnyvale, CA USA
Source
WEB CONFERENCE 2020: PROCEEDINGS OF THE WORLD WIDE WEB CONFERENCE (WWW 2020) | 2020
Keywords
Online advertising; transformers; visual-linguistic representation
DOI
10.1145/3366423.3380001
CLC number (Chinese Library Classification)
TP [Automation Technology, Computer Technology]
Discipline code
0812
Abstract
There is a perennial need in the online advertising industry to refresh ad creatives, i.e., the images and text used to entice online users towards a brand. Such refreshes are required to reduce the likelihood of ad fatigue among online users, and to incorporate insights from other successful campaigns in related product categories. Given a brand, coming up with themes for a new ad is a painstaking and time-consuming process for creative strategists, who typically draw inspiration from the images and text used in past ad campaigns, as well as world knowledge on the brands. To automatically infer ad themes from such multimodal sources of information in past ad campaigns, we propose a theme (keyphrase) recommender system for ad creative strategists. The recommender aggregates results from a visual question answering (VQA) task, which ingests: (i) ad images, (ii) text associated with the ads as well as Wikipedia pages on the brands in the ads, and (iii) questions around the ad. We leverage transformer-based cross-modality encoders to train visual-linguistic representations for our VQA task, and we study two formulations of the task: classification and ranking. Experiments on a public dataset show that cross-modal representations yield significantly better classification accuracy and ranking precision-recall metrics than separate image and text representations, and that multimodal information provides a significant lift over textual or visual information alone.
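As an illustration of the classification formulation sketched in the abstract, the following minimal PyTorch module fuses ad image region features with text (ad/Wikipedia text plus a question containing a candidate keyphrase) through a transformer-based cross-modality encoder and scores whether the keyphrase is a suitable theme. All dimensions, the vocabulary size, and the from-scratch encoder are illustrative assumptions; the paper builds on pretrained visual-linguistic encoders and detector-extracted region features, which this toy sketch does not reproduce.

import torch
import torch.nn as nn


class CrossModalThemeScorer(nn.Module):
    def __init__(self, vocab_size=30522, d_model=256, n_heads=4,
                 n_layers=2, region_feat_dim=2048):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)
        # Project pre-extracted image region features (e.g., from an
        # object detector) into the shared model space.
        self.region_proj = nn.Linear(region_feat_dim, d_model)
        # Modality embeddings distinguish text tokens from image regions.
        self.modality_emb = nn.Embedding(2, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Binary head: is the candidate keyphrase a good theme for this ad?
        self.classifier = nn.Linear(d_model, 1)

    def forward(self, token_ids, region_feats):
        # token_ids: (B, T) question + ad/Wikipedia text tokens
        # region_feats: (B, R, region_feat_dim) image region features
        text = self.token_emb(token_ids) + self.modality_emb.weight[0]
        regions = self.region_proj(region_feats) + self.modality_emb.weight[1]
        # Joint self-attention over the concatenated text-and-region sequence.
        fused = self.encoder(torch.cat([text, regions], dim=1))
        # Mean-pool the joint sequence and score keyphrase suitability.
        return self.classifier(fused.mean(dim=1)).squeeze(-1)


# Toy usage: 2 ads, 12 text tokens, 8 image regions with random features.
model = CrossModalThemeScorer()
logits = model(torch.randint(0, 30522, (2, 12)), torch.randn(2, 8, 2048))
print(torch.sigmoid(logits))  # per-(ad, keyphrase) suitability scores

In the recommender described in the abstract, such per-(ad, keyphrase) scores would then be aggregated across a brand's past ads to produce the final ranked list of candidate themes.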
Pages: 2521-2527
Page count: 7