Against Opacity: Explainable AI and Large Language Models for Effective Digital Advertising

Cited by: 2
Authors
Yang, Qi [1]
Ongpin, Marlo [2]
Nikolenko, Sergey [1,3]
Huang, Alfred [2]
Farseev, Aleksandr [2]
Affiliations
[1] ITMO Univ, St Petersburg, Russia
[2] SoMin Ai Res, Singapore, Singapore
[3] Steklov Inst Math, St Petersburg, Russia
Funding
Russian Science Foundation
Keywords
Digital Advertising; Ads Performance Prediction; Deep Learning; Large Language Model; Explainable AI;
DOI
10.1145/3581783.3612817
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
The opaqueness of modern digital advertising, exemplified by platforms such as Meta Ads, raises concerns regarding their autonomous control over audience targeting, pricing structures, and ad relevancy assessments. Locked into their leading positions by network effects, the "Metas and Googles of the world" attract countless advertisers who rely on intuition, with billions of dollars lost on ineffective social media ads. The platforms' algorithms use huge amounts of data unavailable to advertisers, and the algorithms themselves are opaque as well. This lack of transparency hinders advertisers' ability to make informed decisions and necessitates efforts to promote transparency, standardize industry metrics, and strengthen regulatory frameworks. In this work, we propose novel ways to assist marketers in optimizing their advertising strategies via machine learning techniques designed to analyze and evaluate content, in particular to predict the click-through rates (CTR) of novel advertising content. Another important problem is that the large volumes of data available in the competitive landscape, e.g., competitors' ads, impede marketers' ability to derive meaningful insights, creating a pressing need for a novel approach to summarizing and comprehending complex data. Inspired by the success of ChatGPT in bridging the gap between large language models (LLMs) and a broader non-technical audience, we propose SODA, a novel system that assists marketers in data interpretation by merging LLMs with explainable AI, enabling better human-AI collaboration with an emphasis on the domain of digital marketing and advertising. By combining LLMs with explainability features, in particular modern text-image models, we aim to improve the synergy between human marketers and AI systems.
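The record does not include implementation details of SODA, but the abstract's core idea, pairing a CTR predictor with an explainable-AI signal that an LLM then rephrases for marketers, can be sketched as below. Everything in this sketch (the feature names, the gradient-boosting model, permutation importance as the XAI method, and the prompt wording) is an illustrative assumption, not the authors' actual pipeline.

```python
# Hypothetical sketch of an LLM + XAI loop in the spirit of the SODA idea:
# train a CTR classifier, extract a post-hoc explanation, and phrase it as
# a prompt an LLM could summarize for non-technical marketers.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for ad-level features an advertiser might log.
feature_names = ["headline_len", "has_discount", "image_brightness", "cta_strength"]
X = rng.normal(size=(2000, 4))
# Simulated clicks driven mostly by discount mentions and call-to-action strength.
logits = 1.5 * X[:, 1] + 1.0 * X[:, 3] - 0.3 * X[:, 0]
y = (rng.random(2000) < 1 / (1 + np.exp(-logits))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Post-hoc explanation: permutation importance as a simple XAI signal.
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
ranked = sorted(zip(feature_names, imp.importances_mean), key=lambda t: -t[1])

# Phrase the XAI output as an LLM prompt aimed at marketers.
prompt = (
    "You are a digital-marketing analyst. Explain in plain language what "
    "drives the predicted click-through rate of our ads, given these "
    "feature importances:\n"
    + "\n".join(f"- {name}: {score:.3f}" for name, score in ranked)
)
print(prompt)  # In a SODA-like system this prompt would be sent to an LLM.
```

Permutation importance stands in here for whatever attribution method a production system would use; the paper itself works with deep text-image models for CTR prediction, and SHAP-style attributions would be a common alternative explanation signal.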
Pages: 9299 - 9305
Number of pages: 7
Related papers
50 in total
  • [1] Dedicated to the special issue of 'Explainable/Trustworthy AI, and Large Language Models and Generative AI for Radiology'
    [Anonymous]
    JOURNAL OF THE KOREAN SOCIETY OF RADIOLOGY, 2024, 85 (05) : 833 - 833
  • [2] Enhancing the Interpretability of Malaria and Typhoid Diagnosis with Explainable AI and Large Language Models
    Attai, Kingsley
    Ekpenyong, Moses
    Amannah, Constance
    Asuquo, Daniel
    Ajuga, Peterben
    Obot, Okure
    Johnson, Ekemini
    John, Anietie
    Maduka, Omosivie
    Akwaowo, Christie
    Uzoka, Faith-Michael
    TROPICAL MEDICINE AND INFECTIOUS DISEASE, 2024, 9 (09)
  • [3] Detecting Homophobic Speech in Soccer Tweets Using Large Language Models and Explainable AI
    Santos, Guto Leoni
    dos Santos, Vitor Gaboardi
    Kearns, Colm
    Sinclair, Gary
    Black, Jack
    Doidge, Mark
    Fletcher, Thomas
    Kilvington, Dan
    Liston, Katie
    Endo, Patricia Takako
    Lynn, Theo
    SOCIAL NETWORKS ANALYSIS AND MINING, ASONAM 2024, PT I, 2025, 15211 : 489 - 504
  • [4] Large Language Models in Health Care: Charting a Path Toward Accurate, Explainable, and Secure AI
    Khullar, Dhruv
    Wang, Xingbo
    Wang, Fei
    JOURNAL OF GENERAL INTERNAL MEDICINE, 2024, 39 (07) : 1239 - 1241
  • [5] Effective depression detection and interpretation: Integrating machine learning, deep learning, language models, and explainable AI
    Al Masud, Gazi Hasan
    Shanto, Rejaul Islam
    Sakin, Ishmam
    Kabir, Muhammad Rafsan
    ARRAY, 2025, 25
  • [6] Exploring Advancements in Genomic Medicine: An Integrated Approach using explainable AI (XAI) and Large Language Models
    Tago, Shinichiro
    Murakami, Katsuhiko
    Takishita, Sho
    Morikawa, Hiroaki
    Kojima, Rikuhiro
    Abe, Shuya
    Yokoyama, Kazuaki
    Ogawa, Miho
    Fukushima, Hidehito
    Takamori, Hiroyuki
    Nannya, Yasuhito
    Imoto, Seiya
    Fuji, Masaru
    CANCER SCIENCE, 2025, 116 : 1057 - 1057
  • [7] Human-Centered Explainable AI (HCXAI): Reloading Explainability in the Era of Large Language Models (LLMs)
    Ehsan, Upol
    Watkins, Elizabeth Anne
    Wintersberger, Philipp
    Manger, Carina
    Kim, Sunnie S. Y.
    Van Berkel, Niels
    Riener, Andreas
    Riedl, Mark O.
    EXTENDED ABSTRACTS OF THE 2024 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, CHI 2024, 2024
  • [8] Foundation Models, Generative AI, and Large Language Models
    Ross, Angela
    McGrow, Kathleen
    Zhi, Degui
    Rasmy, Laila
    CIN-COMPUTERS INFORMATICS NURSING, 2024, 42 (05) : 377 - 387
  • [9] QoEXplainer: Mediating Explainable Quality of Experience Models with Large Language Models
    Wehner, Nikolas
    Feldhus, Nils
    Seufert, Michael
    Moeller, Sebastian
    Hossfeld, Tobias
    2024 16TH INTERNATIONAL CONFERENCE ON QUALITY OF MULTIMEDIA EXPERIENCE, QOMEX 2024, 2024, : 72 - 75
  • [10] Leveraging Large Language Models for Patient Engagement: The Power of Conversational AI in Digital Health
    Wen, Bo
    Norel, Raquel
    Liu, Julia
    Stappenbeck, Thaddeus
    Zulkernine, Farhana
    Chen, Huamin
    2024 IEEE INTERNATIONAL CONFERENCE ON DIGITAL HEALTH, ICDH 2024, 2024, : 104 - 113