PARL: Poisoning Attacks Against Reinforcement Learning-based Recommender Systems

Cited by: 1
Authors
Du, Linkang [1 ]
Yuan, Quan [1 ]
Chen, Min [2 ]
Sun, Mingyang [1 ]
Cheng, Peng [1 ]
Chen, Jiming [1 ,3 ]
Zhang, Zhikun [1 ]
Affiliations
[1] Zhejiang Univ, Hangzhou, Zhejiang, Peoples R China
[2] CISPA Helmholtz Ctr Informat Secur, Saarbrucken, Germany
[3] Hangzhou Dianzi Univ, Hangzhou, Zhejiang, Peoples R China
Keywords
Poisoning Attack; Recommender System; Reinforcement Learning;
DOI
10.1145/3634737.3637660
CLC Classification Number
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812 ;
Abstract
Recommender systems predict and suggest relevant options to users in domains such as e-commerce, streaming services, and social media. Recently, deep reinforcement learning (DRL)-based recommender systems have become increasingly popular in academia and industry, e.g., at Netflix, Spotify, Google, and YouTube, because DRL can model the long-term interaction between the system and its users and thus deliver a better recommendation experience. This paper demonstrates that an adversary can manipulate a DRL-based recommender system by injecting carefully designed user-system interaction records. We formulate the poisoning attack against the DRL-based recommender system as a non-convex integer programming problem. To solve it, we propose a three-phase mechanism, called PARL, that maximizes the hit ratio of the target item (the proportion of recommendations that result in actual user interactions, such as clicks or purchases) while avoiding easy detection. The core idea of PARL is to improve the ranking of the target item while keeping the rankings of other items fixed. Because DRL makes sequential decisions, PARL also rearranges the order of items in the fake users' interaction sequences to mimic normal users' sequential behavior, an aspect usually overlooked in existing work. Experiments on three real-world datasets demonstrate the effectiveness of PARL and its improved concealment against detection techniques. PARL is open-sourced at https://github.com/PARL-RS/PARL.
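The abstract quantifies attack success with the hit ratio of the promoted target item. As a minimal illustrative sketch (not taken from PARL's released code; the function and variable names below are hypothetical), HitRatio@K can be computed as the fraction of users whose top-K recommendation list contains the target item:

# Minimal sketch (assumption, not from the paper): Hit Ratio@K, the success
# metric the abstract refers to. A recommendation "hits" when the target
# item appears in the top-K list produced for a user.
from typing import Iterable, Sequence


def hit_ratio_at_k(recommendations: Iterable[Sequence[int]],
                   target_item: int,
                   k: int = 10) -> float:
    """Fraction of users whose top-K recommendation list contains target_item."""
    recs = list(recommendations)
    if not recs:
        return 0.0
    hits = sum(1 for top_k in recs if target_item in top_k[:k])
    return hits / len(recs)


# Hypothetical usage: each inner list is one user's ranked recommendations.
if __name__ == "__main__":
    ranked_lists = [
        [42, 7, 13, 99, 5],   # user 1: target (42) ranked first -> hit
        [8, 42, 21, 3, 77],   # user 2: target within top-5 -> hit
        [1, 2, 3, 4, 6],      # user 3: target absent -> miss
    ]
    print(hit_ratio_at_k(ranked_lists, target_item=42, k=5))  # ~0.667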
Pages: 1331 - 1344
Number of pages: 14
Related Papers
50 records in total
  • [1] Adversarial Attacks and Detection on Reinforcement Learning-Based Interactive Recommender Systems
    Cao, Yuanjiang
    Chen, Xiaocong
    Yao, Lina
    Wang, Xianzhi
    Zhang, Wei Emma
    PROCEEDINGS OF THE 43RD INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL (SIGIR '20), 2020, : 1669 - 1672
  • [2] Data Poisoning Attacks to Deep Learning Based Recommender Systems
    Huang, Hai
    Mu, Jiaming
    Gong, Neil Zhenqiang
    Li, Qi
    Liu, Bin
    Xu, Mingwei
    28TH ANNUAL NETWORK AND DISTRIBUTED SYSTEM SECURITY SYMPOSIUM (NDSS 2021), 2021,
  • [3] Contrastive State Augmentations for Reinforcement Learning-Based Recommender Systems
    Ren, Zhaochun
    Huang, Na
    Wang, Yidan
    Ren, Pengjie
    Ma, Jun
    Lei, Jiahuan
    Shi, Xinlei
    Luo, Hengliang
    Jose, Joemon
    Xin, Xin
    PROCEEDINGS OF THE 46TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, SIGIR 2023, 2023, : 922 - 931
  • [4] REVEAL 2022: Reinforcement Learning-Based Recommender Systems at Scale
    Li, Ying
    Basilico, Justin
    Raimond, Yves
    Dimakopoulou, Maria
    Liaw, Richard
    Bailey, Paige
    PROCEEDINGS OF THE 16TH ACM CONFERENCE ON RECOMMENDER SYSTEMS, RECSYS 2022, 2022, : 684 - 685
  • [5] Data Poisoning Attacks against Differentially Private Recommender Systems
    Wadhwa, Soumya
    Agrawal, Saurabh
    Chaudhari, Harsh
    Sharma, Deepthi
    Achan, Kannan
    PROCEEDINGS OF THE 43RD INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL (SIGIR '20), 2020, : 1617 - 1620
  • [6] Accelerating the Surrogate Retraining for Poisoning Attacks against Recommender Systems
    Wu, Yunfan
    Cao, Qi
    Tao, Shuchang
    Zhang, Kaike
    Sun, Fei
    Shen, Huawei
    PROCEEDINGS OF THE EIGHTEENTH ACM CONFERENCE ON RECOMMENDER SYSTEMS, RECSYS 2024, 2024, : 701 - 711
  • [7] Poisoning Attacks to Graph-Based Recommender Systems
    Fang, Minghong
    Yang, Guolei
    Gong, Neil Zhenqiang
    Liu, Jia
    34TH ANNUAL COMPUTER SECURITY APPLICATIONS CONFERENCE (ACSAC 2018), 2018, : 381 - 392
  • [8] Adversarial Attacks Against Reinforcement Learning-Based Portfolio Management Strategy
    Chen, Yu-Ying
    Chen, Chiao-Ting
    Sang, Chuan-Yun
    Yang, Yao-Chun
    Huang, Szu-Hao
    IEEE ACCESS, 2021, 9 : 50667 - 50685
  • [9] Poisoning attacks against knowledge graph-based recommendation systems using deep reinforcement learning
    Wu, Zih-Wun
    Chen, Chiao-Ting
    Huang, Szu-Hao
    NEURAL COMPUTING & APPLICATIONS, 2022, 34 (04): : 3097 - 3115