Against Opacity: Explainable AI and Large Language Models for Effective Digital Advertising

Citations: 2
Authors
Yang, Qi [1]
Ongpin, Marlo [2]
Nikolenko, Sergey [1,3]
Huang, Alfred [2]
Farseev, Aleksandr [2]
Affiliations
[1] ITMO Univ, St Petersburg, Russia
[2] SoMin Ai Res, Singapore, Singapore
[3] Steklov Inst Math, St Petersburg, Russia
Source
PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023 | 2023
Funding
Russian Science Foundation
Keywords
Digital Advertising; Ads Performance Prediction; Deep Learning; Large Language Model; Explainable AI;
DOI
10.1145/3581783.3612817
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
The opaqueness of modern digital advertising, exemplified by platforms such as Meta Ads, raises concerns regarding their autonomous control over audience targeting, pricing structures, and ad relevancy assessments. Locked into their leading positions by network effects, the "Metas and Googles of the world" attract countless advertisers who rely on intuition, with billions of dollars lost on ineffective social media ads. The platforms' algorithms use huge amounts of data unavailable to advertisers, and the algorithms themselves are opaque as well. This lack of transparency hinders advertisers' ability to make informed decisions and necessitates efforts to promote transparency, standardize industry metrics, and strengthen regulatory frameworks. In this work, we propose novel ways to assist marketers in optimizing their advertising strategies via machine learning techniques designed to analyze and evaluate content, in particular, to predict the click-through rates (CTR) of novel advertising content. Another important problem is that the large volumes of data available in the competitive landscape, e.g., competitors' ads, impede the ability of marketers to derive meaningful insights. This leads to a pressing need for a novel approach that would allow us to summarize and comprehend complex data. Inspired by the success of ChatGPT in bridging the gap between large language models (LLMs) and a broader non-technical audience, we propose SODA, a novel system that assists marketers in data interpretation by merging LLMs with explainable AI, enabling better human-AI collaboration with an emphasis on the domain of digital marketing and advertising. By combining LLMs and explainability features, in particular modern text-image models, we aim to improve the synergy between human marketers and AI systems.
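The abstract frames CTR prediction as learning a click probability from ad content features. As an illustration only (the paper's actual deep-learning model is not reproduced here), the sketch below fits a logistic-regression CTR predictor on hypothetical, made-up ad features; the feature names and toy data are assumptions, not taken from the paper.

```python
import math

def sigmoid(z):
    """Squash a real-valued score into a (0, 1) click probability."""
    return 1.0 / (1.0 + math.exp(-z))

def train_ctr_model(rows, labels, lr=0.1, epochs=500):
    """Fit logistic regression by plain gradient descent; returns (weights, bias)."""
    n_feat = len(rows[0])
    w = [0.0] * n_feat
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of log-loss w.r.t. the score
            for i in range(n_feat):
                w[i] -= lr * err * x[i]
            b -= lr * err
    return w, b

def predict_ctr(w, b, x):
    """Predicted click-through probability for one ad's feature vector."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Hypothetical ad features: [has_discount, image_brightness, caption_length_norm]
ads = [[1, 0.9, 0.2], [0, 0.3, 0.8], [1, 0.8, 0.3], [0, 0.2, 0.9]]
clicked = [1, 0, 1, 0]  # 1 = the ad received a click

w, b = train_ctr_model(ads, clicked)
ctr = predict_ctr(w, b, [1, 0.85, 0.25])  # a new ad resembling the clicked ones
```

In a linear model like this, the learned weights double as a crude explainability signal (which features push the predicted CTR up or down), which is the kind of human-interpretable output the paper's LLM-plus-XAI pipeline aims to surface at much larger scale.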
Pages: 9299-9305 (7 pages)
Related Papers (45 in total)
  • [21] Multi-Agent Reasoning with Large Language Models for Effective Corporate Planning
    Tsao, Wen-Kwang
    2023 INTERNATIONAL CONFERENCE ON COMPUTATIONAL SCIENCE AND COMPUTATIONAL INTELLIGENCE, CSCI 2023, 2023, : 365 - 370
  • [22] Optimizing Large Language Models: A Deep Dive into Effective Prompt Engineering Techniques
    Son, Minjun
    Won, Yun-Jae
    Lee, Sungjin
    APPLIED SCIENCES-BASEL, 2025, 15 (03)
  • [23] When geoscience meets generative AI and large language models: Foundations, trends, and future challenges
    Hadid, Abdenour
    Chakraborty, Tanujit
    Busby, Daniel
    EXPERT SYSTEMS, 2024, 41 (10)
  • [24] Fairness in AI-Driven Oncology: Investigating Racial and Gender Biases in Large Language Models
    Agrawal, Anjali
    CUREUS JOURNAL OF MEDICAL SCIENCE, 2024, 16 (09)
  • [25] Exploring the benefits and challenges of AI-driven large language models in gastroenterology: Think out of the box
    Kral, Jan
    Hradis, Michal
    Buzga, Marek
    Kunovsky, Lumir
    BIOMEDICAL PAPERS-OLOMOUC, 2024, 168 (04): 277 - 283
  • [26] Human-AI Synergy in Survey Development: Implications from Large Language Models in Business and Research
    Fan, Ke Ping
    Chung, Ng Ka
    ACM TRANSACTIONS ON MANAGEMENT INFORMATION SYSTEMS, 2025, 16 (01) : 24 - 39
  • [27] Understanding the Benefits and Challenges of Deploying Conversational AI Leveraging Large Language Models for Public Health Intervention
    Jo, Eunkyung
    Epstein, Daniel A.
    Jung, Hyunhoon
    Kim, Young-Ho
    PROCEEDINGS OF THE 2023 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS (CHI 2023), 2023
  • [28] PreparedLLM: effective pre-pretraining framework for domain-specific large language models
    Chen, Zhou
    Lin, Ming
    Wang, Zimeng
    Zang, Mingrun
    Bai, Yuqi
    BIG EARTH DATA, 2024, 8 (04) : 649 - 672
  • [29] ChatTwin: Toward Automated Digital Twin Generation for Data Center via Large Language Models
    Li, Minghao
    Wang, Ruihang
    Zhou, Xin
    Zhu, Zhaomeng
    Wen, Yonggang
    Tan, Rui
    PROCEEDINGS OF THE 10TH ACM INTERNATIONAL CONFERENCE ON SYSTEMS FOR ENERGY-EFFICIENT BUILDINGS, CITIES, AND TRANSPORTATION, BUILDSYS 2023, 2023, : 208 - 211
  • [30] Effective test generation using pre-trained Large Language Models and mutation testing
    Dakhel, Arghavan Moradi
    Nikanjam, Amin
    Majdinasab, Vahid
    Khomh, Foutse
    Desmarais, Michel C.
    INFORMATION AND SOFTWARE TECHNOLOGY, 2024, 171