Gaming Algorithmic Hate-Speech Detection: Stakes, Parties, and Moves

Cited by: 7
Authors
Haapoja, Jesse [1 ,2 ]
Laaksonen, Salla-Maaria [3 ]
Lampinen, Airi [4 ]
Affiliations
[1] Aalto Univ, Dept Comp Sci, Konemiehentie 2, Espoo 02150, Finland
[2] Univ Helsinki, Social Psychol, Helsinki, Finland
[3] Univ Helsinki, Ctr Consumer Soc Res, Helsinki, Finland
[4] Stockholm Univ, Human Comp Interact, Comp & Syst Sci Dept, Stockholm, Sweden
Source
SOCIAL MEDIA + SOCIETY | 2020, Vol. 6, No. 2
Funding
Academy of Finland
Keywords
algorithmic systems; game metaphor; hate-speech; social media; elections; TWITTER; GAME; VISIBILITY; MODERATION
DOI
10.1177/2056305120924778
Chinese Library Classification
G2 [Information and Knowledge Dissemination]
Subject classification code
05; 0503
Abstract
A recent strand of research considers how algorithmic systems are gamed in everyday encounters. We add to this literature with a study that uses the game metaphor to examine a project where different organizations came together to create and deploy a machine learning model to detect hate speech from political candidates' social media messages during the Finnish 2017 municipal election. Using interviews and forum discussions as our primary research material, we illustrate how the unfolding game is played out on different levels in a multi-stakeholder situation, what roles different participants have in the game, and how strategies of gaming the model revolve around controlling the information available to it. We discuss strategies that different stakeholders planned or used to resist the model, and show how the game is not only played against the model itself, but also with those who have created it and those who oppose it. Our findings illustrate that while "gaming the system" is an important part of gaming with algorithms, these games have other levels where humans play against each other, rather than against technology. We also draw attention to how deploying a hate-speech detection algorithm can be understood as an effort to not only detect but also preempt unwanted behavior.
Pages: 10