Proving Robustness of KNN Against Adversarial Data Poisoning

Cited by: 8
Authors
Li, Yannan [1]
Wang, Jingbo [1]
Wang, Chao [1]
Affiliations
[1] University of Southern California, Los Angeles, CA 90089, USA
Keywords
Anomaly detection; Attacks
DOI
10.34727/2022/isbn.978-3-85448-053-2_6
CLC Number
TP301 [Theory and Methods]
Discipline Code
081202
Abstract
We propose a method for verifying the data-poisoning robustness of the k-nearest neighbors (KNN) algorithm, a widely used supervised learning technique. Data poisoning aims to corrupt a machine learning model and change its inference result by adding polluted elements to its training set. The inference result is considered n-poisoning robust if it cannot be changed by up to n polluted elements. Our method verifies n-poisoning robustness by soundly overapproximating the KNN algorithm to consider all possible scenarios in which polluted elements may affect the inference result. Unlike existing methods, which verify only the inference phase and not the significantly more complex learning phase, our method is capable of verifying the entire KNN algorithm. Our experimental evaluation shows that the proposed method is also significantly more accurate than existing methods and is able to prove the n-poisoning robustness of KNN for popular supervised-learning datasets.
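To make the robustness property concrete, here is a minimal Python sketch of a worst-case n-poisoning robustness check for the KNN inference phase alone. It is an illustration of the property being verified, not the paper's verification algorithm, which also covers the learning phase. The sketch assumes a fixed k and an insertion-only attacker who may place up to n poisoned points arbitrarily close to the test input; the function name knn_prediction_is_n_robust and all other names are hypothetical.

from collections import Counter

import numpy as np

def knn_prediction_is_n_robust(X_train, y_train, x_test, k, n):
    """Return True if adding up to n poisoned training points cannot
    flip the k-NN majority vote on x_test, under the pessimistic model
    that poisoned points can land arbitrarily close to x_test."""
    # Labels of the k nearest training points, nearest first.
    dists = np.linalg.norm(X_train - x_test, axis=1)
    order = np.argsort(dists)
    knn_labels = [y_train[i] for i in order[:k]]
    pred = Counter(knn_labels).most_common(1)[0][0]

    # Worst case: the n poisoned points displace the n farthest of the
    # k neighbors and all vote for the strongest rival label.
    survivors = knn_labels[: k - n] if n < k else []
    counts = Counter(survivors)
    rival_best = max((counts[y] for y in set(y_train) if y != pred), default=0)

    # Sound condition: the prediction wins even against n unanimous
    # rival votes (a tie is pessimistically counted as a flip).
    return counts[pred] > rival_best + n

if __name__ == "__main__":
    X = np.array([[0.0], [0.1], [0.9], [1.0], [1.1]])
    y = ["a", "a", "b", "b", "b"]
    print(knn_prediction_is_n_robust(X, y, np.array([1.0]), k=3, n=1))  # True
    print(knn_prediction_is_n_robust(X, y, np.array([1.0]), k=3, n=2))  # False

The check is sound in this restricted setting because any n inserted points can at worst evict the n farthest of the original k neighbors and vote unanimously for a single rival label; verifying the entire algorithm, learning phase included, is the harder problem the paper addresses.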
Pages: 7 - 16
Page count: 10
Related Papers
50 in total (first 10 shown)
  • [1] Systematic Testing of the Data-Poisoning Robustness of KNN
    Li, Yannan
    Wang, Jingbo
    Wang, Chao
    PROCEEDINGS OF THE 32ND ACM SIGSOFT INTERNATIONAL SYMPOSIUM ON SOFTWARE TESTING AND ANALYSIS, ISSTA 2023, 2023, : 1207 - 1218
  • [2] Proving Data-Poisoning Robustness in Decision Trees
    Drews, Samuel
    Albarghouthi, Aws
    D'Antoni, Loris
    PROCEEDINGS OF THE 41ST ACM SIGPLAN CONFERENCE ON PROGRAMMING LANGUAGE DESIGN AND IMPLEMENTATION (PLDI '20), 2020, : 1083 - 1097
  • [3] Proving Data-Poisoning Robustness in Decision Trees
    Drews, Samuel
    Albarghouthi, Aws
    D'Antoni, Loris
    COMMUNICATIONS OF THE ACM, 2023, 66 (02) : 105 - 113
  • [4] Temporal Robustness against Data Poisoning
    Wang, Wenxiao
    Feizi, Soheil
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023
  • [5] On Collective Robustness of Bagging Against Data Poisoning
    Chen, Ruoxin
    Li, Zenan
    Li, Jie
    Wu, Chentao
    Yan, Junchi
INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022
  • [6] Denoising Autoencoder-Based Defensive Distillation as an Adversarial Robustness Algorithm Against Data Poisoning Attacks
    Badjie, Bakary
    Cecílio, José
    Casimiro, António
ADA USER JOURNAL, 2023, 44 (03): 209 - 213
  • [7] Towards Proving the Adversarial Robustness of Deep Neural Networks
    Katz, Guy
    Barrett, Clark
    Dill, David L.
    Julian, Kyle
    Kochenderfer, Mykel J.
ELECTRONIC PROCEEDINGS IN THEORETICAL COMPUTER SCIENCE, 2017, (257): 19 - 26
  • [8] A concealed poisoning attack to reduce deep neural networks' robustness against adversarial samples
    Zheng, Junhao
    Chan, Patrick P. K.
    Chi, Huiyang
    He, Zhimin
    INFORMATION SCIENCES, 2022, 615 : 758 - 773
  • [9] Adversarial data poisoning attacks against the PC learning algorithm
    Alsuwat, Emad
    Alsuwat, Hatim
    Valtorta, Marco
    Farkas, Csilla
    INTERNATIONAL JOURNAL OF GENERAL SYSTEMS, 2020, 49 (01) : 3 - 31
  • [10] Intrinsic Certified Robustness of Bagging against Data Poisoning Attacks
    Jia, Jinyuan
    Cao, Xiaoyu
    Gong, Neil Zhenqiang
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 7961 - 7969