PARL: Poisoning Attacks Against Reinforcement Learning-based Recommender Systems

Cited by: 1
Authors
Du, Linkang [1 ]
Yuan, Quan [1 ]
Chen, Min [2 ]
Sun, Mingyang [1 ]
Cheng, Peng [1 ]
Chen, Jiming [1 ,3 ]
Zhang, Zhikun [1 ]
Affiliations
[1] Zhejiang Univ, Hangzhou, Zhejiang, Peoples R China
[2] CISPA Helmholtz Ctr Informat Secur, Saarbrucken, Germany
[3] Hangzhou Dianzi Univ, Hangzhou, Zhejiang, Peoples R China
Keywords
Poisoning Attack; Recommender System; Reinforcement Learning;
DOI
10.1145/3634737.3637660
CLC Classification Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Recommender systems predict and suggest relevant options to users in various domains, such as e-commerce, streaming services, and social media. Recently, deep reinforcement learning (DRL)-based recommender systems have become increasingly popular in academia and industry (e.g., at Netflix, Spotify, Google, and YouTube), since DRL can characterize the long-term interaction between the system and its users to deliver a better recommendation experience. This paper demonstrates that an adversary can manipulate a DRL-based recommender system by injecting carefully designed user-system interaction records. We formulate the poisoning attack against the DRL-based recommender system as a non-convex integer programming problem. To solve it, we propose a three-phase mechanism, called PARL, that maximizes the hit ratio (the proportion of recommendations that result in actual user interactions, such as clicks, purchases, or other relevant actions) while avoiding easy detection. The core idea of PARL is to improve the ranking of the target item while fixing the rankings of other items. Considering the sequential decision-making characteristics of DRL, PARL rearranges the item order of the fake users to mimic the sequential features of normal users, an aspect usually overlooked in existing work. Experiments on three real-world datasets demonstrate the effectiveness of PARL and its improved concealment against detection techniques. PARL is open-sourced at https://github.com/PARL-RS/PARL.
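The hit ratio used as the attack objective can be illustrated with a minimal sketch. This is not code from the PARL repository; the function name, arguments, and the measurement of "hits" as membership of the attacker's target item in each user's top-K recommendation list are illustrative assumptions based on the abstract's definition.

```python
def target_hit_ratio(rec_lists, target_item):
    """Fraction of users whose top-K recommendation list contains the target item.

    rec_lists: list of per-user top-K recommendation lists (item IDs).
    target_item: the item the attacker wants the system to promote.
    Illustrative definition only; not the paper's reference implementation.
    """
    if not rec_lists:
        return 0.0
    hits = sum(1 for recs in rec_lists if target_item in recs)
    return hits / len(rec_lists)

# Target item 7 appears in 2 of 4 users' top-3 lists.
ratio = target_hit_ratio([[1, 7, 3], [2, 4, 5], [7, 8, 9], [6, 0, 2]], 7)
print(ratio)  # 0.5
```

Under this reading, the attacker's integer program searches over injected interaction records to push this ratio up while leaving the rankings of non-target items, and hence the system's observable behavior, largely unchanged.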
Pages: 1331-1344 (14 pages)
Related Papers
50 records in total
  • [42] Backdoor attacks against deep reinforcement learning based traffic signal control systems
    Zhang, Heng
    Gu, Jun
    Zhang, Zhikun
    Du, Linkang
    Zhang, Yongmin
    Ren, Yan
    Zhang, Jian
    Li, Hongran
    PEER-TO-PEER NETWORKING AND APPLICATIONS, 2023, 16 (01) : 466 - 474
  • [43] A Survey on Reinforcement Learning for Recommender Systems
    Lin, Yuanguo
    Liu, Yong
    Lin, Fan
    Zou, Lixin
    Wu, Pengcheng
    Zeng, Wenhua
    Chen, Huanhuan
    Miao, Chunyan
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (10) : 13164 - 13184
  • [44] Impacting Robustness in Deep Learning-Based NIDS through Poisoning Attacks
    Alahmed, Shahad
    Alasad, Qutaiba
    Yuan, Jiann-Shiun
    Alawad, Mohammed
    ALGORITHMS, 2024, 17 (04)
  • [45] Membership Inference Attacks Against Recommender Systems
    Zhang, Minxing
    Ren, Zhaochun
    Wang, Zihan
    Ren, Pengjie
    Chen, Zhumin
    Hu, Pengfei
    Zhang, Yang
    CCS '21: PROCEEDINGS OF THE 2021 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, 2021, : 864 - 879
  • [46] Reinforcement Learning-Based Slice Isolation Against DDoS Attacks in Beyond 5G Networks
    Javadpour, Amir
    Ja'fari, Forough
    Taleb, Tarik
    Benzaid, Chafika
    IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, 2023, 20 (03): : 3930 - 3946
  • [47] Testing the Plasticity of Reinforcement Learning-based Systems
    Biagiola, Matteo
    Tonella, Paolo
    ACM TRANSACTIONS ON SOFTWARE ENGINEERING AND METHODOLOGY, 2022, 31 (04)
  • [48] Resilience enhancement of multi-agent reinforcement learning-based demand response against adversarial attacks
    Zeng, Lanting
    Qiu, Dawei
    Sun, Mingyang
    APPLIED ENERGY, 2022, 324
  • [49] Provably Efficient Black-Box Action Poisoning Attacks Against Reinforcement Learning
    Liu, Guanlin
    Lai, Lifeng
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [50] Multi-Environment Training Against Reward Poisoning Attacks on Deep Reinforcement Learning
    Bouhaddi, Myria
    Adi, Kamel
    PROCEEDINGS OF THE 20TH INTERNATIONAL CONFERENCE ON SECURITY AND CRYPTOGRAPHY, SECRYPT 2023, 2023, : 870 - 875