Towards universal and sparse adversarial examples for visual object tracking

Cited by: 4
Authors
Sheng, Jingjing [1 ]
Zhang, Dawei [1 ]
Chen, Jianxin [2 ]
Xiao, Xin [3 ]
Zheng, Zhonglong [1 ]
Institutions
[1] Zhejiang Normal Univ, Sch Comp Sci & Technol, Jinhua 321000, Zhejiang, Peoples R China
[2] Jiaxing Univ, Jiaxing 314200, Zhejiang, Peoples R China
[3] Zhejiang Normal Univ, Coll Educ, Jinhua 321000, Zhejiang, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Adversarial attack; Object tracking; Adversarial examples; Black-box attack; Interference patch;
DOI
10.1016/j.asoc.2024.111252
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Adversarial attacks add small perturbations, imperceptible to humans, to a model's inputs so that it produces incorrect outputs with high confidence. Current adversarial attacks mainly target image classification and object detection, and remain underexplored in visual tracking. Moreover, existing attack methods for object tracking are largely limited to Siamese networks; other types of trackers are rarely targeted. To broaden the use of adversarial attacks in object tracking, we propose a model-free black-box framework that learns to generate universal and sparse adversarial examples (USAE) for the tracking task. To this end, we first add a noisy patch at a random position in an interference image, and then apply standard projected gradient descent to optimize the generation of adversarial examples under a similarity constraint with the original images, pushing their embedded features closer to those of the patched interference image in the ℓ2-norm. Consequently, the adversarial images are visually indistinguishable from the originals for human vision, yet lead to tracking failure. Furthermore, our method attacks only 50 original images with adversarial images in each sequence, rather than an entire dataset. Extensive experiments on the VOT2018, OTB2013, OTB2015, and GOT-10k datasets verify the effectiveness of the USAE attack. Specifically, the number of tracking losses reaches 1180 on VOT2018, precision on OTB2015 decreases by 42.1%, and the success rate on GOT-10k is reduced to 1.8%, demonstrating a remarkable attack effect. Moreover, USAE transfers well across various trackers such as SiamRPN++, ATOM, DiMP, KYS, and ToMP. Note that the proposed method is black-box and applicable to most realistic scenarios.
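The optimization described in the abstract — projected gradient descent that pulls an image's embedded feature toward that of a patched interference image in the ℓ2-norm, while keeping the perturbation imperceptible — can be sketched as follows. This is a toy illustration only: the random linear `embed` stands in for a real tracker's feature extractor, and all function names and parameters here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(x, W):
    """Toy linear embedding standing in for a tracker's feature extractor."""
    return W @ x.ravel()

def usae_pgd(x, target_feat, W, eps=0.05, alpha=0.01, steps=40):
    """PGD that pulls embed(x + delta) toward target_feat in the l2-norm,
    projecting the perturbation back into an l-infinity ball of radius eps
    so the adversarial image stays visually close to the original."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        adv = x + delta
        # Gradient of 0.5 * ||W·adv - target||^2 w.r.t. adv is W^T (W·adv - target).
        grad = (W.T @ (embed(adv, W) - target_feat)).reshape(x.shape)
        delta -= alpha * np.sign(grad)       # signed gradient descent step
        delta = np.clip(delta, -eps, eps)    # project into the eps-ball
        adv = np.clip(x + delta, 0.0, 1.0)   # keep a valid image in [0, 1]
        delta = adv - x
    return x + delta

# Tiny demo: an 8x8 "image" and a patched interference target.
x = rng.random((8, 8))
W = rng.standard_normal((16, 64)) / 8.0
patched = np.clip(x + rng.normal(0.0, 0.3, x.shape), 0.0, 1.0)
target = embed(patched, W)

adv = usae_pgd(x, target, W)
before = np.linalg.norm(embed(x, W) - target)
after = np.linalg.norm(embed(adv, W) - target)
print("feature distance before:", round(before, 3), "after:", round(after, 3))
```

The ε-ball projection is what keeps the attack imperceptible; shrinking `eps` trades attack strength for invisibility, mirroring the similarity constraint with the original images described above.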
Pages: 12
Related Papers
50 records in total
  • [31] Siamese adversarial network for object tracking
    Kim, H. -I.
    Park, R. -H.
    ELECTRONICS LETTERS, 2019, 55 (02) : 88 - +
  • [32] Universal Website Fingerprinting Defense Based on Adversarial Examples
    Hou, Chengshang
    Shi, Junzheng
    Cui, Mingxin
    Liu, Mengyan
    Yu, Jing
    2021 IEEE 20TH INTERNATIONAL CONFERENCE ON TRUST, SECURITY AND PRIVACY IN COMPUTING AND COMMUNICATIONS (TRUSTCOM 2021), 2021, : 99 - 106
  • [33] A Universal Detection Method for Adversarial Examples and Fake Images
    Lai, Jiewei
    Huo, Yantong
    Hou, Ruitao
    Wang, Xianmin
    SENSORS, 2022, 22 (09)
  • [34] Universal Adversarial Examples in Remote Sensing: Methodology and Benchmark
    Xu, Yonghao
    Ghamisi, Pedram
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2022, 60
  • [35] Universal adversarial perturbations against object detection
    Li, Debang
    Zhang, Junge
    Huang, Kaiqi
    PATTERN RECOGNITION, 2021, 110
  • [36] An Empirical Study Towards SAR Adversarial Examples
    Zhang, Zhiwei
    Liu, Shuowei
    Gao, Xunzhang
    Diao, Yujia
    2022 INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, COMPUTER VISION AND MACHINE LEARNING (ICICML), 2022, : 127 - 132
  • [37] Towards robust classification detection for adversarial examples
    Liu, Huangxiaolie
    Zhang, Dong
    Chen, Huijun
    INTERNATIONAL CONFERENCE FOR INTERNET TECHNOLOGY AND SECURED TRANSACTIONS (ICITST-2020), 2020, : 23 - 29
  • [38] Towards Undetectable Adversarial Examples: A Steganographic Perspective
    Zeng, Hui
    Chen, Biwei
    Yang, Rongsong
    Li, Chenggang
    Peng, Anjie
    NEURAL INFORMATION PROCESSING, ICONIP 2023, PT IV, 2024, 14450 : 172 - 183
  • [39] An Empirical Study Towards SAR Adversarial Examples
    Zhang, Zhiwei
    Gao, Xunzhang
    Liu, Shuowei
    Diao, Yujia
    2022 IEEE INTERNATIONAL CONFERENCE ON TRUST, SECURITY AND PRIVACY IN COMPUTING AND COMMUNICATIONS, TRUSTCOM, 2022, : 1144 - 1148
  • [40] DetectorDetective: Investigating the Effects of Adversarial Examples on Object Detectors
    Vellaichamy, Sivapriya
    Hull, Matthew
    Wang, Zijie J.
    Das, Nilaksh
    Peng, Shengyun
    Park, Haekyu
    Chau, Duen Horng Polo
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 21452 - 21459