Exploring the Impact of Data Poisoning Attacks on Machine Learning Model Reliability

Cited by: 9
Authors
Verde, Laura [1 ]
Marulli, Fiammetta [1 ]
Marrone, Stefano [1 ]
Affiliations
[1] Univ Campania L Vanvitelli, Dept Maths & Phys, Caserta, Italy
Keywords
Poisoned Big Data; Data Poisoning Attacks; Security; Reliability; Resilient Machine Learning; Disorders detection; Voice quality assessment;
DOI
10.1016/j.procs.2021.09.032
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Recent years have seen the widespread adoption of Artificial Intelligence techniques in several domains, including healthcare, justice, assisted driving and Natural Language Processing (NLP) based applications (e.g., fake news detection). These are just a few examples of domains that are particularly critical and sensitive to the reliability of the adopted machine learning systems. In healthcare, in particular, several Artificial Intelligence approaches have been adopted to realize easy and reliable solutions aimed at improving early diagnosis, personalized treatment, remote patient monitoring and better decision-making, with a consequent reduction of healthcare costs. Recent studies have shown that these techniques are vulnerable to attacks by adversaries at different phases of the machine learning pipeline. Poisoned datasets are the most common attack on the reliability of Artificial Intelligence approaches; noise, for example, can have a significant impact on the overall performance of a machine learning model. This study discusses the impact of noise on classification algorithms. In detail, the reliability of several machine learning techniques in correctly distinguishing pathological from healthy voices was evaluated when the data are poisoned. Voice samples selected from a database widely used in this research area, the Saarbruecken Voice Database, were processed and analysed to evaluate the resilience and classification accuracy of these techniques. All analyses are evaluated in terms of accuracy, specificity, sensitivity, F1-score and ROC area. (C) 2021 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0). Peer-review under responsibility of the scientific committee of KES International.
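The record above does not reproduce the paper's models or acoustic features; as a rough, self-contained illustration of the kind of experiment the abstract describes (poisoning training labels, then measuring the degradation in accuracy, sensitivity, specificity and F1-score), here is a toy sketch. The synthetic scalar "voice feature", the 1-nearest-neighbour classifier and the label-flipping attack are all invented for illustration and are not taken from the paper:

```python
import random

def flip_labels(labels, fraction, seed=1):
    """Simulated data poisoning attack: randomly flip a fraction of binary labels."""
    rng = random.Random(seed)
    poisoned = list(labels)
    for i in rng.sample(range(len(poisoned)), int(fraction * len(poisoned))):
        poisoned[i] = 1 - poisoned[i]
    return poisoned

def nn1_classifier(train_x, train_y):
    """1-nearest-neighbour classifier on a single scalar feature."""
    def predict(x):
        nearest = min(range(len(train_x)), key=lambda i: abs(train_x[i] - x))
        return train_y[nearest]
    return predict

def evaluate(y_true, y_pred):
    """Accuracy, sensitivity (recall on the pathological class), specificity, F1."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "f1": 2 * tp / (2 * tp + fp + fn),
    }

# Synthetic stand-in for an acoustic feature: healthy voices near 0, pathological near 1.
rng = random.Random(42)
xs = [rng.gauss(0.0, 0.3) for _ in range(200)] + [rng.gauss(1.0, 0.3) for _ in range(200)]
ys = [0] * 200 + [1] * 200
train_x, train_y = xs[::2], ys[::2]  # even indices -> training set
test_x, test_y = xs[1::2], ys[1::2]  # odd indices  -> test set

results = {}
for fraction in (0.0, 0.2, 0.4):
    clf = nn1_classifier(train_x, flip_labels(train_y, fraction))
    preds = [clf(v) for v in test_x]
    results[fraction] = evaluate(test_y, preds)
    print(f"poison fraction {fraction:.1f}: {results[fraction]}")
```

On this synthetic data the metrics degrade as the poisoning fraction grows, which mirrors the paper's qualitative finding that noisy (poisoned) training data harms classifier reliability; the actual study uses real voice samples and several classifiers rather than this toy setup.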
Pages: 2624-2632
Number of pages: 9
Related Papers
50 in total
  • [41] CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning
    Bansal, Hritik
    Singhi, Nishad
    Yang, Yu
    Yin, Fan
    Grover, Aditya
    Chang, Kai-Wei
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, 2023, : 112 - 123
  • [42] Detection and Mitigation of Targeted Data Poisoning Attacks in Federated Learning
    Erbil, Pinar
    Gursoy, M. Emre
    2022 IEEE INTL CONF ON DEPENDABLE, AUTONOMIC AND SECURE COMPUTING, INTL CONF ON PERVASIVE INTELLIGENCE AND COMPUTING, INTL CONF ON CLOUD AND BIG DATA COMPUTING, INTL CONF ON CYBER SCIENCE AND TECHNOLOGY CONGRESS (DASC/PICOM/CBDCOM/CYBERSCITECH), 2022, : 271 - 278
  • [43] Data and Model Poisoning Backdoor Attacks on Wireless Federated Learning, and the Defense Mechanisms: A Comprehensive Survey
    Wan, Yichen
    Qu, Youyang
    Ni, Wei
    Xiang, Yong
    Gao, Longxiang
    Hossain, Ekram
    IEEE COMMUNICATIONS SURVEYS AND TUTORIALS, 2024, 26 (03): : 1861 - 1897
  • [44] Secure and Efficient Federated Learning Against Model Poisoning Attacks in Horizontal and Vertical Data Partitioning
    Yu, Chong
    Meng, Zhenyu
    Zhang, Wenmiao
    Lei, Lei
    Ni, Jianbing
    Zhang, Kuan
    Zhao, Hai
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024,
  • [45] Deceiving supervised machine learning models via adversarial data poisoning attacks: a case study with USB keyboards
    Chillara, Anil Kumar
    Saxena, Paresh
    Maiti, Rajib Ranjan
    Gupta, Manik
    Kondapalli, Raghu
    Zhang, Zhichao
    Kesavan, Krishnakumar
    INTERNATIONAL JOURNAL OF INFORMATION SECURITY, 2024, 23 (03) : 2043 - 2061
  • [46] Defending Against Data Poisoning Attacks: From Distributed Learning to Federated Learning
    Tian, Yuchen
    Zhang, Weizhe
    Simpson, Andrew
    Liu, Yang
    Jiang, Zoe Lin
    COMPUTER JOURNAL, 2023, 66 (03): : 711 - 726
  • [47] A topological data analysis approach for detecting data poisoning attacks against machine learning based network intrusion detection systems
    Monkam, Galamo F.
    De Lucia, Michael J.
    Bastian, Nathaniel D.
    COMPUTERS & SECURITY, 2024, 144
  • [48] Assessing the Impact of Temporal Data Aggregation on the Reliability of Predictive Machine Learning Models
    Barhrhouj, Ayah
    Ananou, Bouchra
    Ouladsine, Mustapha
    INTELLIGENT DATA ENGINEERING AND AUTOMATED LEARNING - IDEAL 2024, PT I, 2025, 15346 : 481 - 492
  • [49] Machine learning for automated content analysis: characteristics of training data impact reliability
    Fussell, Rebeckah
    Mazrui, Ali
    Holmes, N. G.
    2022 PHYSICS EDUCATION RESEARCH CONFERENCE (PERC), 2022, : 194 - 199
  • [50] Data Poisoning Attacks and Defenses in Dynamic Crowdsourcing With Online Data Quality Learning
    Zhao, Yuxi
    Gong, Xiaowen
    Lin, Fuhong
    Chen, Xu
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2023, 22 (05) : 2569 - 2581