Against Opacity: Explainable AI and Large Language Models for Effective Digital Advertising

Times Cited: 2
Authors
Yang, Qi [1 ]
Ongpin, Marlo [2 ]
Nikolenko, Sergey [1 ,3 ]
Huang, Alfred [2 ]
Farseev, Aleksandr [2 ]
Affiliations
[1] ITMO Univ, St Petersburg, Russia
[2] SoMin Ai Res, Singapore, Singapore
[3] Steklov Inst Math, St Petersburg, Russia
Funding
Russian Science Foundation
Keywords
Digital Advertising; Ads Performance Prediction; Deep Learning; Large Language Model; Explainable AI;
DOI
10.1145/3581783.3612817
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The opaqueness of modern digital advertising, exemplified by platforms such as Meta Ads, raises concerns regarding their autonomous control over audience targeting, pricing structures, and ad relevancy assessments. Locked in their leading positions by network effects, "Metas and Googles of the world" attract countless advertisers who rely on intuition, with billions of dollars lost on ineffective social media ads. The platforms' algorithms use huge amounts of data unavailable to advertisers, and the algorithms themselves are opaque as well. This lack of transparency hinders the advertisers' ability to make informed decisions and necessitates efforts to promote transparency, standardize industry metrics, and strengthen regulatory frameworks. In this work, we propose novel ways to assist marketers in optimizing their advertising strategies via machine learning techniques designed to analyze and evaluate content, in particular, to predict the click-through rates (CTR) of novel advertising content. Another important problem is that the large volumes of data available in the competitive landscape, e.g., competitors' ads, impede the ability of marketers to derive meaningful insights. This leads to a pressing need for a novel approach that would allow us to summarize and comprehend complex data. Inspired by the success of ChatGPT in bridging the gap between large language models (LLMs) and a broader non-technical audience, we propose SODA, a novel system that merges LLMs with explainable AI to help marketers interpret data, enabling better human-AI collaboration with an emphasis on the domain of digital marketing and advertising. By combining LLMs and explainability features, in particular modern text-image models, we aim to improve the synergy between human marketers and AI systems.
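The abstract frames ad performance prediction as learning CTR from creative content features. As a minimal, self-contained sketch of that framing (not the paper's actual model, which relies on deep text-image representations; the feature names and toy data below are hypothetical), CTR prediction can be cast as binary click classification with logistic regression:

```python
import math

def sigmoid(z):
    """Map a real-valued score to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def train_ctr_model(ads, labels, lr=0.5, epochs=500):
    """Fit a logistic-regression CTR predictor with plain stochastic gradient descent."""
    n_feats = len(ads[0])
    w = [0.0] * n_feats
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(ads, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of log-loss w.r.t. the score
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict_ctr(w, b, x):
    """Predicted click probability for one ad creative."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Hypothetical creatives: [has_discount, has_face, normalized_caption_length]
ads = [[1, 1, 0.2], [1, 0, 0.9], [0, 1, 0.4], [0, 0, 0.8]]
clicked = [1, 1, 0, 0]  # toy outcomes: discount ads drew the clicks

w, b = train_ctr_model(ads, clicked)
print(round(predict_ctr(w, b, [1, 1, 0.5]), 2))
```

The learned weights `w` are also a crude stand-in for the explainability side: each coefficient indicates how a content feature pushes the predicted CTR up or down, which is the kind of signal an LLM layer could then verbalize for a non-technical marketer.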
Pages: 9299-9305
Page count: 7