Against Opacity: Explainable AI and Large Language Models for Effective Digital Advertising

Cited by: 2
Authors
Yang, Qi [1]
Ongpin, Marlo [2]
Nikolenko, Sergey [1,3]
Huang, Alfred [2]
Farseev, Aleksandr [2]
Affiliations
[1] ITMO Univ, St Petersburg, Russia
[2] SoMin Ai Res, Singapore, Singapore
[3] Steklov Inst Math, St Petersburg, Russia
Source
PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023 | 2023
Funding
Russian Science Foundation
Keywords
Digital Advertising; Ads Performance Prediction; Deep Learning; Large Language Model; Explainable AI;
DOI
10.1145/3581783.3612817
CLC classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
The opaqueness of modern digital advertising, exemplified by platforms such as Meta Ads, raises concerns regarding the platforms' autonomous control over audience targeting, pricing structures, and ad relevancy assessments. Locked in their leading positions by network effects, "Metas and Googles of the world" attract countless advertisers who rely on intuition, with billions of dollars lost on ineffective social media ads. The platforms' algorithms use huge amounts of data unavailable to advertisers, and the algorithms themselves are opaque as well. This lack of transparency hinders advertisers' ability to make informed decisions and necessitates efforts to promote transparency, standardize industry metrics, and strengthen regulatory frameworks. In this work, we propose novel ways to assist marketers in optimizing their advertising strategies via machine learning techniques designed to analyze and evaluate content, in particular, predicting the click-through rate (CTR) of novel advertising content. Another important problem is that the large volumes of data available in the competitive landscape, e.g., competitors' ads, impede marketers' ability to derive meaningful insights. This leads to a pressing need for a novel approach that would allow us to summarize and comprehend complex data. Inspired by the success of ChatGPT in bridging the gap between large language models (LLMs) and a broader non-technical audience, we propose SODA, a novel system that helps marketers interpret data by merging LLMs with explainable AI, enabling better human-AI collaboration with an emphasis on the domain of digital marketing and advertising. By combining LLMs and explainability features, in particular modern text-image models, we aim to improve the synergy between human marketers and AI systems.
Pages: 9299-9305
Number of pages: 7
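
The abstract mentions two technical ingredients: content-based CTR prediction and explanations that marketers can act on. As a minimal, purely illustrative sketch (not the authors' SODA system; all data, names, and the linear model below are hypothetical stand-ins), the following Python snippet trains a toy text-based click classifier and surfaces the n-grams that most influence its predictions as a crude form of explainability:

# Illustrative sketch only: toy CTR-style classifier over ad copy with a
# simple token-level explanation step. Not the SODA system from the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
import numpy as np

# Hypothetical toy data: ad texts paired with a binary "high CTR" label.
# A real system would be trained on large-scale impression logs.
ads = [
    "Limited-time offer: 50% off running shoes",
    "Learn data science online, start free today",
    "Generic banner text with no call to action",
    "Exclusive deal on noise-cancelling headphones",
]
labels = [1, 1, 0, 1]

# Vectorize the ad texts and fit a simple linear click classifier.
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(ads)
clf = LogisticRegression().fit(X, labels)

# Crude explanation: n-grams whose weights push predictions toward a click.
tokens = np.array(vectorizer.get_feature_names_out())
top = tokens[np.argsort(clf.coef_[0])[-5:]][::-1]
print("n-grams most associated with higher predicted CTR:", list(top))

# Score a new, unseen ad text.
new_ad = ["Flash sale: premium wireless earbuds at half price"]
print("Predicted click probability:",
      clf.predict_proba(vectorizer.transform(new_ad))[0, 1])

A system like the one described in the paper would replace the linear model with deep text-image models and hand such prediction-plus-explanation signals to an LLM for natural-language summaries; the sketch above only illustrates the general pattern.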