PARL: Poisoning Attacks Against Reinforcement Learning-based Recommender Systems

Cited by: 1
Authors
Du, Linkang [1 ]
Yuan, Quan [1 ]
Chen, Min [2 ]
Sun, Mingyang [1 ]
Cheng, Peng [1 ]
Chen, Jiming [1 ,3 ]
Zhang, Zhikun [1 ]
Affiliations
[1] Zhejiang Univ, Hangzhou, Zhejiang, Peoples R China
[2] CISPA Helmholtz Ctr Informat Secur, Saarbrucken, Germany
[3] Hangzhou Dianzi Univ, Hangzhou, Zhejiang, Peoples R China
Keywords
Poisoning Attack; Recommender System; Reinforcement Learning;
DOI
10.1145/3634737.3637660
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Recommender systems predict and suggest relevant options to users in various domains, such as e-commerce, streaming services, and social media. Recently, deep reinforcement learning (DRL)-based recommender systems have become increasingly popular in academia and industry (e.g., at Netflix, Spotify, Google, and YouTube), since DRL can characterize the long-term interaction between the system and users to achieve a better recommendation experience. This paper demonstrates that an adversary can manipulate a DRL-based recommender system by injecting carefully designed user-system interaction records. The poisoning attack against the DRL-based recommender system is formulated as a non-convex integer programming problem. To solve it, we propose a three-phase mechanism (called PARL) that maximizes the hit ratio (the proportion of recommendations that result in actual user interactions, such as clicks, purchases, or other relevant actions) while avoiding easy detection. The core idea of PARL is to improve the ranking of the target item while keeping the rankings of other items fixed. Considering the sequential decision-making nature of DRL, PARL rearranges the item order within the fake users' interaction sequences to mimic the sequential features of normal users, an aspect usually overlooked in existing work. Our experiments on three real-world datasets demonstrate the effectiveness of PARL and its improved concealment against detection techniques. PARL is open-sourced at https://github.com/PARL-RS/PARL.
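For readers unfamiliar with the attack objective and the sequence-mimicking idea sketched in the abstract, the following minimal Python snippet illustrates (i) the hit-ratio measure that the attack seeks to maximize and (ii) a toy way to order a fake user's filler items so that they follow the sequential pattern of a real session. The function names, the ordering heuristic, and the placement of the target item at the end of the fake session are illustrative assumptions and are not taken from the paper or its released code.

```python
import random
from typing import List, Sequence


def hit_ratio_at_k(recommendations: Sequence[Sequence[int]],
                   target_item: int,
                   k: int = 10) -> float:
    """Fraction of recommendation lists whose top-k slots contain the target item.

    Each entry of `recommendations` is one ranked item list shown to a user;
    a higher value means the target item was promoted to more users.
    """
    if not recommendations:
        return 0.0
    hits = sum(1 for rec in recommendations if target_item in rec[:k])
    return hits / len(recommendations)


def build_fake_session(target_item: int,
                       filler_items: List[int],
                       reference_session: List[int]) -> List[int]:
    """Toy construction of one fake user's interaction sequence.

    Filler items are ordered according to the positions they occupy in a real
    (reference) session, so the fake session looks sequentially plausible;
    items absent from the reference session go last, and the target item is
    appended at the end. This is only a simplified stand-in for PARL's
    reordering phase.
    """
    position = {item: idx for idx, item in enumerate(reference_session)}
    ordered = sorted(filler_items, key=lambda it: position.get(it, len(position)))
    return ordered + [target_item]


if __name__ == "__main__":
    random.seed(0)
    target = 42
    # Simulated top-10 recommendation lists for 5 users.
    recs = [[random.randint(0, 99) for _ in range(10)] for _ in range(5)]
    recs[1][3] = target  # pretend the attack promoted the target for user 1
    print(f"HR@10 for item {target}: {hit_ratio_at_k(recs, target):.2f}")

    fake = build_fake_session(target, filler_items=[7, 3, 11],
                              reference_session=[3, 7, 11, 20])
    print("Fake session:", fake)
```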
Pages: 1331 - 1344
Number of pages: 14
Related Papers
50 records in total
  • [21] Data Poisoning Attacks Against Federated Learning Systems
    Tolpegin, Vale
    Truex, Stacey
    Gursoy, Mehmet Emre
    Liu, Ling
    COMPUTER SECURITY - ESORICS 2020, PT I, 2020, 12308 : 480 - 501
  • [22] Adversarial Attacks Against Machine Learning-Based Resource Provisioning Systems
    Nazari, Najmeh
    Makrani, Hosein Mohammadi
    Fang, Chongzhou
    Omidi, Behnam
    Rafatirad, Setareh
    Sayadi, Hossein
    Khasawneh, Khaled N.
    Homayoun, Houman
    IEEE MICRO, 2023, 43 (05) : 35 - 44
  • [23] Two-phase Defense Against Poisoning Attacks on Federated Learning-based Intrusion Detection
    Lai, Yuan-Cheng
    Lin, Jheng-Yan
    Lin, Ying-Dar
    Hwang, Ren-Hung
    Lin, Po-Chin
    Wu, Hsiao-Kuang
    Chen, Chung-Kuan
    COMPUTERS & SECURITY, 2023, 129
  • [24] Reinforcement Learning-based Recommender Systems with Large Language Models for State Reward and Action Modeling
    Wang, Jie
    Karatzoglou, Alexandros
    Arapakis, Ioannis
    Jose, Joemon M.
    PROCEEDINGS OF THE 47TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, SIGIR 2024, 2024, : 375 - 385
  • [25] Poisoning Attacks on Federated Learning-based Wireless Traffic Prediction
    Zhang, Zifan
    Fang, Minghong
    Huang, Jiayuan
    Liu, Yuchen
    2024 23RD IFIP NETWORKING CONFERENCE, IFIP NETWORKING 2024, 2024, : 423 - 431
  • [26] Parameterizing poisoning attacks in federated learning-based intrusion detection
    Merzouk, Mohamed Amine
    Cuppens, Frederic
    Boulahia-Cuppens, Nora
    Yaich, Reda
    18TH INTERNATIONAL CONFERENCE ON AVAILABILITY, RELIABILITY & SECURITY, ARES 2023, 2023,
  • [27] Multiobjective Evaluation of Reinforcement Learning Based Recommender Systems
    Grishanov, Alexey
    Ianina, Anastasia
    Vorontsov, Konstantin
    PROCEEDINGS OF THE 16TH ACM CONFERENCE ON RECOMMENDER SYSTEMS, RECSYS 2022, 2022, : 622 - 627
  • [28] Manipulating Recommender Systems: A Survey of Poisoning Attacks and Countermeasures
    Nguyen, Thanh Toan
    Hung, Nguyen Quoc Viet
    Nguyen, Thanh Tam
    Huynh, Thanh Trung
    Nguyen, Thanh Thi
    Weidlich, Matthias
    Yin, Hongzhi
    ACM COMPUTING SURVEYS, 2025, 57 (01)
  • [29] On the feasibility of crawling-based attacks against recommender systems
    Aiolli, Fabio
    Conti, Mauro
    Picek, Stjepan
    Polato, Mirko
    JOURNAL OF COMPUTER SECURITY, 2022, 30 (04) : 599 - 621
  • [30] Security Analysis of Poisoning Attacks Against Multi-agent Reinforcement Learning
    Xie, Zhiqiang
    Xiang, Yingxiao
    Li, Yike
    Zhao, Shuang
    Tong, Endong
    Niu, Wenjia
    Liu, Jiqiang
    Wang, Jian
    ALGORITHMS AND ARCHITECTURES FOR PARALLEL PROCESSING, ICA3PP 2021, PT I, 2022, 13155 : 660 - 675