PORE: Provably Robust Recommender Systems against Data Poisoning Attacks

Cited by: 0
Authors
Jia, Jinyuan [1 ]
Liu, Yupei [2 ]
Hu, Yuepeng [2 ]
Gong, Neil Zhenqiang [2 ]
Affiliations
[1] Penn State Univ, University Pk, PA 16802 USA
[2] Duke Univ, Durham, NC 27706 USA
Keywords
DOI: not available
Chinese Library Classification: TP [Automation Technology, Computer Technology]
Discipline Code: 0812
Abstract
Data poisoning attacks spoof a recommender system into making arbitrary, attacker-desired recommendations by injecting fake users with carefully crafted rating scores into the recommender system. We envision a cat-and-mouse game between such data poisoning attacks and their defenses, i.e., new defenses are designed to defend against existing attacks, and new attacks are then designed to break them. To prevent such a cat-and-mouse game, in this work we propose PORE, the first framework for building provably robust recommender systems. PORE can transform any existing recommender system into one that is provably robust against untargeted data poisoning attacks, which aim to reduce the overall performance of a recommender system. Suppose PORE recommends top-N items to a user when there is no attack. We prove that PORE still recommends at least r of the N items to the user under any data poisoning attack, where r is a function of the number of fake users in the attack. Moreover, we design an efficient algorithm to compute r for each user. We empirically evaluate PORE on popular benchmark datasets.
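The abstract states the shape of PORE's guarantee: for each user, a certified number r of the top-N recommended items survives any attack with at most e fake users, where r decreases as e grows. The sketch below is only a toy illustration of consuming such a guarantee, not the paper's certification algorithm; the `certified_r` mapping and the example r(e) functions are hypothetical placeholders.

```python
# Toy illustration of PORE's certified guarantee (not the paper's
# algorithm). For each user, r(e) is the certified number of the
# top-N recommended items guaranteed to remain recommended under
# any data poisoning attack injecting at most e fake users.

def certified_fraction(certified_r, num_fake_users, top_n):
    """Per-user fraction of top-N items certified to survive an
    attack with at most `num_fake_users` fake users."""
    return {
        user: r_of_e(num_fake_users) / top_n
        for user, r_of_e in certified_r.items()
    }

# Hypothetical per-user certified functions r(e) for N = 10:
# r shrinks as the attacker injects more fake users.
certified_r = {
    "alice": lambda e: max(0, 10 - 2 * e),
    "bob":   lambda e: max(0, 10 - 3 * e),
}

fractions = certified_fraction(certified_r, num_fake_users=2, top_n=10)
# alice: 6 of 10 items certified, bob: 4 of 10
```

The point of the guarantee is that it holds for *any* attack of the given size, so the fraction is a worst-case lower bound, not an empirical measurement.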
Pages: 1703-1720
Number of pages: 18
Related Papers
50 records in total
  • [31] Shilling attacks against collaborative recommender systems: a review
    Si, Mingdan
    Li, Qingshan
    ARTIFICIAL INTELLIGENCE REVIEW, 2020, 53 (01) : 291 - 319
  • [32] Provably Efficient Black-Box Action Poisoning Attacks Against Reinforcement Learning
    Liu, Guanlin
    Lai, Lifeng
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [33] Data Poisoning Attacks against Autoregressive Models
    Alfeld, Scott
    Zhu, Xiaojin
    Barford, Paul
    THIRTIETH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2016, : 1452 - 1458
  • [34] Disguised as Privacy: Data Poisoning Attacks Against Differentially Private Crowdsensing Systems
    Li, Zhetao
    Zheng, Zhirun
    Guo, Suiming
    Guo, Bin
    Xiao, Fu
    Ren, Kui
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2023, 22 (09) : 5155 - 5169
  • [35] Robust Mitigation Strategy Against Dummy Data Attacks in Power Systems
    Du, Min
    Liu, Xuan
    Li, Zuyi
    Lin, Hai
    IEEE TRANSACTIONS ON SMART GRID, 2023, 14 (04) : 3102 - 3113
  • [36] Defending Federated Recommender Systems against Untargeted Attacks: A Contribution-Aware Robust Aggregation Scheme
    Liang, Ruicheng
    Jiang, Yuanchun
    Zhu, Feida
    Cheng, Ling
    Liu, Wen
    ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA, 2025, 19 (01)
  • [37] Tutorial: Toward Robust Deep Learning against Poisoning Attacks
    Chen, Huili
    Koushanfar, Farinaz
    ACM TRANSACTIONS ON EMBEDDED COMPUTING SYSTEMS, 2023, 22 (03)
  • [38] On the feasibility of crawling-based attacks against recommender systems
    Aiolli, Fabio
    Conti, Mauro
    Picek, Stjepan
    Polato, Mirko
    JOURNAL OF COMPUTER SECURITY, 2022, 30 (04) : 599 - 621
  • [39] Revisiting Adversarially Learned Injection Attacks Against Recommender Systems
    Tang, Jiaxi
    Wen, Hongyi
    Wang, Ke
    RECSYS 2020: 14TH ACM CONFERENCE ON RECOMMENDER SYSTEMS, 2020, : 318 - 327
  • [40] Debiasing Learning for Membership Inference Attacks Against Recommender Systems
    Wang, Zihan
    Huang, Na
    Sun, Fei
    Ren, Pengjie
    Chen, Zhumin
    Luo, Hengliang
    de Rijke, Maarten
    Ren, Zhaochun
    PROCEEDINGS OF THE 28TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2022, 2022, : 1959 - 1968