Against Opacity: Explainable AI and Large Language Models for Effective Digital Advertising

Cited by: 2
Authors
Yang, Qi [1]
Ongpin, Marlo [2]
Nikolenko, Sergey [1,3]
Huang, Alfred [2]
Farseev, Aleksandr [2]
Affiliations
[1] ITMO Univ, St Petersburg, Russia
[2] SoMin Ai Res, Singapore, Singapore
[3] Steklov Inst Math, St Petersburg, Russia
Funding
Russian Science Foundation;
Keywords
Digital Advertising; Ads Performance Prediction; Deep Learning; Large Language Model; Explainable AI;
DOI
10.1145/3581783.3612817
Chinese Library Classification (CLC) Code
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
The opaqueness of modern digital advertising, exemplified by platforms such as Meta Ads, raises concerns regarding these platforms' autonomous control over audience targeting, pricing structures, and ad relevancy assessments. Locked into their leading positions by network effects, the "Metas and Googles of the world" attract countless advertisers who rely on intuition, with billions of dollars lost on ineffective social media ads. The platforms' algorithms use huge amounts of data unavailable to advertisers, and the algorithms themselves are opaque as well. This lack of transparency hinders the advertisers' ability to make informed decisions and necessitates efforts to promote transparency, standardize industry metrics, and strengthen regulatory frameworks. In this work, we propose novel ways to assist marketers in optimizing their advertising strategies via machine learning techniques designed to analyze and evaluate content, in particular, to predict the click-through rates (CTR) of new advertising content. Another important problem is that the large volumes of data available in the competitive landscape, e.g., competitors' ads, impede marketers' ability to derive meaningful insights. This leads to a pressing need for an approach that allows complex data to be summarized and understood. Inspired by the success of ChatGPT in bridging the gap between large language models (LLMs) and a broader non-technical audience, we propose SODA, a novel system that assists marketers in data interpretation by merging LLMs with explainable AI, enabling better human-AI collaboration with an emphasis on the domain of digital marketing and advertising. By combining LLMs with explainability features, in particular modern text-image models, we aim to improve the synergy between human marketers and AI systems.
Pages: 9299-9305
Number of pages: 7
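
Illustrative sketch (not from the paper). The abstract describes, at a high level, a pipeline that predicts ad CTR, explains the prediction, and hands the explanation to an LLM for marketer-friendly summarization (SODA). The minimal Python sketch below shows that general shape under explicit assumptions: the feature names, synthetic data, and prompt format are made up for illustration, a plain logistic regression stands in for the paper's deep text-image models, and the LLM call itself is omitted so the sketch stays self-contained.

# Illustrative sketch only: a toy CTR predictor plus an LLM prompt builder.
# Feature names, data, and the prompt format are hypothetical; this is not
# the paper's SODA system or its text-image models.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical ad features (invented for illustration).
FEATURES = ["text_sentiment", "image_brightness", "caption_length", "hashtag_count"]
X = rng.normal(size=(500, len(FEATURES)))
# Synthetic clicks: sentiment and brightness "help", long captions "hurt".
logits = 1.2 * X[:, 0] + 0.6 * X[:, 1] - 0.8 * X[:, 2] + rng.normal(scale=0.5, size=500)
y = (logits > 0).astype(int)

# Simple, inherently interpretable CTR model (stand-in for deep text-image models).
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

def explain_ad(x):
    """Return per-feature contributions to the predicted log-odds of a click."""
    scaler = model.named_steps["standardscaler"]
    clf = model.named_steps["logisticregression"]
    z = scaler.transform(x.reshape(1, -1))[0]
    return dict(zip(FEATURES, z * clf.coef_[0]))

def build_marketer_prompt(x):
    """Assemble a prompt asking an LLM to summarize the explanation for a marketer.
    The actual LLM call is intentionally left out of this sketch."""
    ctr = model.predict_proba(x.reshape(1, -1))[0, 1]
    contributions = explain_ad(x)
    lines = [f"- {name}: {value:+.2f}" for name, value in contributions.items()]
    return (
        f"Predicted CTR for this ad: {ctr:.2%}.\n"
        "Per-feature contributions to the prediction (log-odds):\n"
        + "\n".join(lines)
        + "\nSummarize in plain language what the marketer should change to improve CTR."
    )

print(build_marketer_prompt(X[0]))

In the system the abstract describes, the predictor and explanations would come from modern text-image models and the assembled prompt would be sent to an LLM; this sketch only illustrates how an interpretable prediction can be turned into a prompt that a marketer-facing LLM could summarize.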