In Platforms We Trust? Unlocking the Black-Box of News Algorithms through Interpretable AI

Cited by: 36
Authors
Shin, Donghee [1 ]
Zaid, Bouziane [2 ]
Biocca, Frank [3 ]
Rasul, Azmat [1 ]
Affiliations
[1] Zayed Univ, Coll Commun & Media Sci, POB 144534, Abu Dhabi, U Arab Emirates
[2] Univ Sharjah, Coll Commun, Sharjah, U Arab Emirates
[3] New Jersey Inst Technol, Dept Informat, Newark, NJ 07102 USA
Keywords
SELF-DISCLOSURE; SOCIAL MEDIA; INFORMATION;
DOI
10.1080/08838151.2022.2057984
Chinese Library Classification (CLC)
G2 [Information and knowledge dissemination]
Discipline classification codes
05; 0503
Abstract
With the rapid adoption of AI in the journalism industry, the ethical issues surrounding algorithmic journalism have grown just as quickly, producing a large body of research that applies normative principles such as privacy, information disclosure, and data protection. How users' information processing leads to information disclosure in platformized news contexts is an important question to ask. We examine users' cognitive routes to information disclosure by testing the effect of interpretability on privacy in algorithmic journalism. We discuss algorithmic information processing and show how the process can be used to improve user privacy and trust.
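To make concrete what "interpretable AI" can mean for a black-box news algorithm, the minimal Python sketch below (not taken from the article; the feature names, synthetic data, and global-surrogate approach are illustrative assumptions) trains an opaque click model and then fits a simple logistic-regression surrogate whose weights could be surfaced to users as an explanation of why an item was recommended.

# Illustrative sketch only: a black-box news recommender is approximated by an
# interpretable surrogate whose weights can be shown to users as an explanation.
# Feature names and synthetic data are hypothetical, not from the article.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["topic_match", "recency", "source_familiarity", "time_of_day"]

# Synthetic user-article interactions (1 = clicked).
X = rng.normal(size=(2000, len(features)))
y = (0.9 * X[:, 0] + 0.5 * X[:, 1] - 0.3 * X[:, 3]
     + rng.normal(scale=0.5, size=2000) > 0).astype(int)

# "Black box": an opaque model that drives recommendations.
black_box = GradientBoostingClassifier().fit(X, y)

# Interpretable surrogate: a linear model trained to mimic the black box's decisions.
surrogate = LogisticRegression().fit(X, black_box.predict(X))

# Explanation that could accompany a recommendation in the news interface.
for name, weight in sorted(zip(features, surrogate.coef_[0]), key=lambda p: -abs(p[1])):
    print(f"{name:>20}: {weight:+.2f}")

A global surrogate is only one of many explanation strategies; the point of the sketch is that whatever explanation is shown to users should be faithful enough to the underlying model to justify the trust and disclosure it invites.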
Pages: 235-256
Number of pages: 22