MalDBA: Detection for Query-Based Malware Black-Box Adversarial Attacks

Cited: 0
Authors
Kong, Zixiao [1 ]
Xue, Jingfeng [1 ]
Liu, Zhenyan [1 ]
Wang, Yong [1 ]
Han, Weijie [2 ]
Affiliations
[1] Beijing Inst Technol, Sch Comp Sci & Technol, Beijing 100081, Peoples R China
[2] Space Engn Univ, Sch Space Informat, Beijing 101416, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
stateful detection; adversarial defence; artificial intelligence security; privacy protection;
DOI
10.3390/electronics12071751
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
The growing adoption of Industry 4.0 has brought an increasing number of security risks, and a steady stream of malware adversarial attacks poses serious challenges to user data security and privacy protection. In this paper, we investigate stateful detection for black-box attacks against deep learning-based malware detectors, i.e., determining whether an adversarial attack is taking place rather than whether an input sample is malicious. To this end, we propose MalDBA and evaluate it on the VirusShare dataset. We observe that query-based black-box attacks produce a series of highly similar historical query results (also known as intermediate samples). By comparing the similarity among these intermediate samples and the trend of the prediction scores returned by the detector, we can detect the presence of adversarial samples among indexed samples and thus determine whether an adversarial attack has occurred, thereby protecting user data security and privacy. The experimental results show that the attack detection rate can reach 100%. Compared with similar studies, our method requires neither heavy feature extraction nor image conversion, operates on complete PE files, and does not need a powerful hardware platform.
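The abstract does not specify the similarity measure or decision thresholds MalDBA uses. Purely as an illustration of the stateful-detection idea it describes (comparing intermediate query samples and the trend of returned prediction scores), the following Python sketch keeps a sliding window of query fingerprints and raises a flag when many near-duplicate queries arrive with a consistently drifting score; the byte-histogram feature, class names, and thresholds are all hypothetical, not taken from the paper.

# Minimal sketch of stateful query-based attack detection; the feature,
# similarity metric, and thresholds are assumptions, not the paper's method.
from collections import deque
import numpy as np

def byte_histogram(pe_bytes: bytes) -> np.ndarray:
    # Normalized 256-bin byte histogram of a complete PE file (hypothetical fingerprint).
    hist = np.bincount(np.frombuffer(pe_bytes, dtype=np.uint8), minlength=256)
    return hist / max(len(pe_bytes), 1)

class StatefulQueryMonitor:
    # Keeps a sliding window of recent queries and their prediction scores;
    # flags an attack when many near-duplicate queries arrive and the scores
    # returned by the detector drift in one direction.
    def __init__(self, window=100, sim_threshold=0.95, min_similar=10):
        self.window = deque(maxlen=window)   # (fingerprint, score) of past queries
        self.sim_threshold = sim_threshold
        self.min_similar = min_similar

    def observe(self, pe_bytes: bytes, score: float) -> bool:
        feat = byte_histogram(pe_bytes)
        # Cosine similarity of the new query against every stored fingerprint.
        similar_scores = [
            s for f, s in self.window
            if np.dot(feat, f) / (np.linalg.norm(feat) * np.linalg.norm(f) + 1e-12)
            >= self.sim_threshold
        ]
        self.window.append((feat, score))
        if len(similar_scores) < self.min_similar:
            return False
        # A mostly non-increasing malicious score among near-duplicates is the
        # trend an evasion attack probing the detector would produce.
        diffs = np.diff(np.array(similar_scores + [score]))
        return bool(np.mean(diffs <= 0) > 0.8)

In practice such a monitor would wrap the malware detector: each submitted PE file is passed to observe() together with the detector's score, and a True return indicates that the recent query history looks like an ongoing query-based black-box attack.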
Pages: 13
Related Papers
50 records in total
  • [21] Adversarial Attack on Attackers: Post-Process to Mitigate Black-Box Score-Based Query Attacks
    Chen, Sizhe
    Huang, Zhehao
    Tao, Qinghua
    Wu, Yingwen
    Xie, Cihang
    Huang, Xiaolin
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [22] Improving Black-box Adversarial Attacks with a Transfer-based Prior
    Cheng, Shuyu
    Dong, Yinpeng
    Pang, Tianyu
    Su, Hang
    Zhu, Jun
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [23] Black-box Adversarial Attacks on Video Recognition Models
    Jiang, Linxi
    Ma, Xingjun
    Chen, Shaoxiang
    Bailey, James
    Jiang, Yu-Gang
    PROCEEDINGS OF THE 27TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA (MM'19), 2019, : 864 - 872
  • [24] Black-box Adversarial Attacks in Autonomous Vehicle Technology
    Kumar, K. Naveen
    Vishnu, C.
    Mitra, Reshmi
    Mohan, C. Krishna
    2020 IEEE APPLIED IMAGERY PATTERN RECOGNITION WORKSHOP (AIPR): TRUSTED COMPUTING, PRIVACY, AND SECURING MULTIMEDIA, 2020,
  • [25] GeoDA: a geometric framework for black-box adversarial attacks
    Rahmati, Ali
    Moosavi-Dezfooli, Seyed-Mohsen
    Frossard, Pascal
    Dai, Huaiyu
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2020), 2020, : 8443 - 8452
  • [26] Black-box adversarial attacks by manipulating image attributes
    Wei, Xingxing
    Guo, Ying
    Li, Bo
    INFORMATION SCIENCES, 2021, 550 : 285 - 296
  • [27] Physical Black-Box Adversarial Attacks Through Transformations
    Jiang, Wenbo
    Li, Hongwei
    Xu, Guowen
    Zhang, Tianwei
    Lu, Rongxing
    IEEE TRANSACTIONS ON BIG DATA, 2023, 9 (03) : 964 - 974
  • [28] A review of black-box adversarial attacks on image classification
    Zhu, Yanfei
    Zhao, Yaochi
    Hu, Zhuhua
    Luo, Tan
    He, Like
    NEUROCOMPUTING, 2024, 610
  • [29] Boosting Black-Box Adversarial Attacks with Meta Learning
    Fu, Junjie
    Sun, Jian
    Wang, Gang
    2022 41ST CHINESE CONTROL CONFERENCE (CCC), 2022, : 7308 - 7313
  • [30] Evaluation of Four Black-box Adversarial Attacks and Some Query-efficient Improvement Analysis
    Wang, Rui
    2022 PROGNOSTICS AND HEALTH MANAGEMENT CONFERENCE, PHM-LONDON 2022, 2022, : 298 - 302