In Platforms We Trust? Unlocking the Black-Box of News Algorithms through Interpretable AI

Cited by: 36
Authors
Shin, Donghee [1 ]
Zaid, Bouziane [2 ]
Biocca, Frank [3 ]
Rasul, Azmat [1 ]
Affiliations
[1] Zayed Univ, Coll Commun & Media Sci, POB 144534, Abu Dhabi, U Arab Emirates
[2] Univ Sharjah, Coll Commun, Sharjah, U Arab Emirates
[3] New Jersey Inst Technol, Dept Informat, Newark, NJ 07102 USA
Keywords
Self-disclosure; Social media; Information
DOI
10.1080/08838151.2022.2057984
Chinese Library Classification
G2 [Information and Knowledge Dissemination]
Discipline Classification Code
05; 0503
Abstract
With the rapid adoption of AI in the journalism industry, the ethical issues of algorithmic journalism have grown quickly, producing a large body of research that applies normative principles such as privacy, information disclosure, and data protection. How users' information processing leads to information disclosure in platformized news contexts is an important question. We examine users' cognitive routes to information disclosure by testing the effect of interpretability on privacy in algorithmic journalism. We discuss algorithmic information processing and show how this process can be leveraged to improve user privacy and trust.
Pages: 235-256 (22 pages)