Exploring the Impact of Data Poisoning Attacks on Machine Learning Model Reliability

Cited by: 9
Authors
Verde, Laura [1 ]
Marulli, Fiammetta [1 ]
Marrone, Stefano [1 ]
Affiliations
[1] Univ Campania L Vanvitelli, Dept Maths & Phys, Caserta, Italy
Keywords
Poisoned Big Data; Data Poisoning Attacks; Security; Reliability; Resilient Machine Learning; Disorders detection; Voice quality assessment;
DOI
10.1016/j.procs.2021.09.032
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Recent years have seen the widespread adoption of Artificial Intelligence techniques in several domains, including healthcare, justice, assisted driving and Natural Language Processing (NLP) based applications (e.g., fake news detection). These are just a few examples of domains that are particularly critical and sensitive to the reliability of the adopted machine learning systems. In healthcare, for instance, several Artificial Intelligence approaches have been adopted to support accessible and reliable solutions aimed at improving early diagnosis, personalized treatment, remote patient monitoring and decision-making, with a consequent reduction of healthcare costs. Recent studies have shown that these techniques are vulnerable to adversarial attacks at various phases of the machine learning pipeline. Poisoned datasets are the most common attack on the reliability of Artificial Intelligence approaches; noise, for example, can have a significant impact on the overall performance of a machine learning model. This study assesses the impact of noise on classification algorithms. In detail, the reliability of several machine learning techniques in correctly distinguishing pathological from healthy voices when trained on poisoned data was evaluated. Voice samples selected from a publicly available database widely used in research, the Saarbruecken Voice Database, were processed and analysed to evaluate the resilience and classification accuracy of these techniques. All analyses are evaluated in terms of accuracy, specificity, sensitivity, F1-score and ROC area. (C) 2021 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0). Peer-review under responsibility of the scientific committee of KES International.
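To make the evaluation protocol concrete, the following is a minimal sketch, in Python with scikit-learn, of how training-set poisoning can be injected at increasing rates and its effect measured with the metrics the abstract lists. The synthetic features, the SVM classifier, the label-flipping attack and the poisoning rates are all illustrative assumptions, not the authors' actual pipeline or data.

# A minimal, illustrative sketch (not the authors' pipeline): label-flipping
# poisoning of a binary "pathological vs. healthy" classification task,
# evaluated with the metrics listed in the abstract. Synthetic features
# stand in for acoustic features extracted from voice recordings; the SVM
# classifier and poisoning rates are assumptions made for illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, f1_score, recall_score, roc_auc_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

for rate in (0.0, 0.1, 0.2, 0.3):            # fraction of training labels poisoned
    y_poisoned = y_tr.copy()
    idx = rng.choice(len(y_tr), size=int(rate * len(y_tr)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]    # flip the selected labels (noise injection)

    clf = SVC(probability=True).fit(X_tr, y_poisoned)
    pred = clf.predict(X_te)
    proba = clf.predict_proba(X_te)[:, 1]

    sens = recall_score(y_te, pred)               # sensitivity = true positive rate
    spec = recall_score(y_te, pred, pos_label=0)  # specificity = true negative rate
    print(f"poison={rate:.0%}  acc={accuracy_score(y_te, pred):.3f}  "
          f"sens={sens:.3f}  spec={spec:.3f}  "
          f"F1={f1_score(y_te, pred):.3f}  ROC-AUC={roc_auc_score(y_te, proba):.3f}")

Running such a sweep typically shows accuracy and ROC area degrading as the poisoning rate grows, which is the kind of resilience comparison the study performs across classifiers.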
Pages: 2624 - 2632
Page count: 9