Measuring criticism of the police in the local news media using large language models

Times Cited: 0
Authors
Crowl, Logan [1 ,2 ]
Dutta, Sujan [3 ]
Khudabukhsh, Ashiqur R. [3 ]
Severnini, Edson [4 ]
Nagin, Daniel S. [1 ]
Affiliations
[1] Carnegie Mellon Univ, Heinz Coll Informat Syst & Publ Policy, Pittsburgh, PA 15213 USA
[2] Carnegie Mellon Univ, Machine Learning Dept, Pittsburgh, PA 15213 USA
[3] Rochester Inst Technol, Golisano Coll Comp & Informat Sci, Rochester, NY 14623 USA
[4] Boston Coll, Schiller Inst Integrated Sci & Soc, Chestnut Hill, MA 02457 USA
Keywords
police; media; journalism; transfer learning; natural language inference; perceptions; race
DOI
10.1073/pnas.2418821122
CLC Number (Chinese Library Classification)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences]
Discipline Classification Codes
07; 0710; 09
Abstract
High-profile incidents of police violence against Black citizens over the past decade have spawned contentious debates in the United States on the role of police. This debate has played out prominently in the news media, leading to a perception that media outlets have become more critical of the police. There is currently, however, little empirical evidence supporting this perceived shift. We construct a large dataset of local news reporting on the police from 2013 to 2023 in 10 politically diverse U.S. cities. Leveraging advanced language models, we measure criticism by analyzing whether reporting supports or is critical of two contentions: 1) that the police protect citizens and 2) that the police are racist. To validate this approach, we collect labels from members of different political parties. We find that contrary to public perceptions, local media criticism of the police has remained relatively stable along these two dimensions over the past decade. While criticism spiked in the aftermath of high-profile police killings, such as George Floyd's murder, these events did not produce sustained increases in negative police news. In fact, reporting supportive of police effectiveness has increased slightly since Floyd's death. We find only small differences in coverage trends in more conservative and more liberal cities, undermining the idea that local outlets cater to the politics of their audiences. Last, although Republicans are more likely to view a piece of news as supportive of the police than Democrats, readers across parties see reporting as no more critical than it was a decade ago.
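The abstract describes the measurement approach only at a high level: language models judge whether a piece of reporting supports or undermines two contentions (that the police protect citizens, and that the police are racist), consistent with the "natural language inference" keyword. Below is a minimal sketch of how such scoring could be done with an off-the-shelf NLI model. The model name (facebook/bart-large-mnli), the hypothesis wording, the truncation settings, and the entailment-minus-contradiction score are illustrative assumptions, not the authors' exact pipeline.

```python
# Hedged sketch: score a news passage against the paper's two contentions
# using a pretrained MNLI-style natural language inference model.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "facebook/bart-large-mnli"  # any MNLI-trained NLI model works
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

# Hypothesis wording is an assumption; the paper states the contentions
# as "the police protect citizens" and "the police are racist".
HYPOTHESES = {
    "protect": "The police protect citizens.",
    "racist": "The police are racist.",
}

def nli_probs(premise: str, hypothesis: str) -> dict:
    """Return entailment/neutral/contradiction probabilities for one pair."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt",
                       truncation=True, max_length=512)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = logits.softmax(dim=-1).squeeze(0)
    # Use the model's own label names rather than hard-coded indices.
    return {model.config.id2label[i].lower(): probs[i].item()
            for i in range(probs.shape[-1])}

def stance(passage: str) -> dict:
    """Score a passage against both contentions; positive values lean toward
    support (entailment), negative toward criticism (contradiction)."""
    return {name: (lambda p: p["entailment"] - p["contradiction"])(nli_probs(passage, hyp))
            for name, hyp in HYPOTHESES.items()}

if __name__ == "__main__":
    example = ("Officers responded within minutes and pulled two residents "
               "from the burning home, the department said.")
    print(stance(example))
```

In practice, full articles would exceed the model's context window, so they would need to be split into passages and the passage-level scores aggregated before tracking city-level trends over time; how that aggregation is done is left open here.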
Pages: 8