Context-Guided Black-Box Attack for Visual Tracking

Cited by: 0
Authors
Huang, Xingsen [1 ,2 ]
Miao, Deshui [1 ]
Wang, Hongpeng [1 ,2 ]
Wang, Yaowei [2 ]
Li, Xin [2 ]
Affiliations
[1] Harbin Inst Technol Shenzhen, Sch Comp Sci & Technol, Shenzhen 518055, Peoples R China
[2] Peng Cheng Lab, Shenzhen 518055, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Target tracking; Feature extraction; Visualization; Transformers; Interference; Image reconstruction; Robustness; Visual tracking; adversarial attack;
DOI
10.1109/TMM.2024.3382473
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
With the recent advancement of deep neural networks, visual tracking has achieved substantial progress in tracking accuracy. However, the robustness and security of tracking methods developed based on current deep models have not been thoroughly explored, a critical consideration for real-world applications. In this study, we propose a context-guided black-box attack method to investigate the robustness of recent advanced deep trackers against spatial and temporal interference. For spatial interference, the proposed algorithm generates adversarial target samples by mixing the information of the target object and the similar background regions around it in an embedded feature space of an encoder-decoder model, which evaluates the ability of trackers to handle background distractors. For temporal interference, we use the target state in the previous frame to generate the adversarial sample, which easily fools the trackers that rely too heavily on tracking prior assumptions, such as that the appearance changes and movements of a video target object are small between two consecutive frames. We assess the proposed attack method under both CNN-based and transformer-based tracking frameworks on four diverse datasets: OTB100, VOT2018, GOT-10k, and LaSOT. The experimental results demonstrate that our approach substantially deteriorates the performance of all these deep trackers across numerous datasets, even in the black-box attack mode. This reveals the weak robustness of recent deep tracking methods against background distractors and prior dependencies.
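The abstract only summarizes the method, but the spatial-interference idea (blending target and background-distractor information in an encoder-decoder's embedding space) can be sketched in a few lines. Everything below is an illustrative assumption, not the authors' implementation: the linear `encode`/`decode` stand-ins, the `mix_attack` helper, and the blend weight `alpha` are placeholders for the paper's unspecified encoder-decoder model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder encoder/decoder: random linear maps standing in for the
# paper's (unspecified) encoder-decoder network. Hypothetical names.
D_IN, D_EMB = 64, 16
W_enc = rng.standard_normal((D_EMB, D_IN)) / np.sqrt(D_IN)
W_dec = rng.standard_normal((D_IN, D_EMB)) / np.sqrt(D_EMB)

def encode(x):
    # Map a flattened image patch into the embedded feature space.
    return W_enc @ x

def decode(z):
    # Map an embedding back to patch space.
    return W_dec @ z

def mix_attack(target_patch, distractor_patch, alpha=0.7):
    """Blend target and background-distractor information in the
    embedding space, then decode an adversarial target sample
    (the spatial-interference idea described in the abstract)."""
    z_t = encode(target_patch)
    z_b = encode(distractor_patch)
    z_adv = alpha * z_t + (1.0 - alpha) * z_b  # convex mix of features
    return decode(z_adv)

target = rng.standard_normal(D_IN)      # flattened target template
distractor = rng.standard_normal(D_IN)  # similar background region nearby
adv = mix_attack(target, distractor)
assert adv.shape == target.shape
```

The temporal-interference attack (crafting the adversarial sample from the previous-frame target state to exploit motion-smoothness priors) would follow the same pattern with the distractor drawn from the prior frame's location; it is omitted here because the record gives no further detail.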
Pages: 8824-8835
Page count: 12
Related Papers
50 entries in total
  • [1] IoU Attack: Towards Temporally Coherent Black-Box Adversarial Attack for Visual Object Tracking
    Jia, Shuai
    Song, Yibing
    Ma, Chao
    Yang, Xiaokang
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 6705 - 6714
  • [2] Universal Low-Frequency Noise Black-Box Attack on Visual Object Tracking
    Hou, Hanting
    Bao, Huan
    Wei, Kaimin
    Wu, Yongdong
    SYMMETRY-BASEL, 2025, 17 (03):
  • [3] DIMBA: discretely masked black-box attack in single object tracking
    Xiangyu Yin
    Wenjie Ruan
    Jonathan Fieldsend
    Machine Learning, 2024, 113 : 1705 - 1723
  • [4] DIMBA: discretely masked black-box attack in single object tracking
    Yin, Xiangyu
    Ruan, Wenjie
    Fieldsend, Jonathan
    MACHINE LEARNING, 2024, 113 (04) : 1705 - 1723
  • [5] SIMULATOR ATTACK+ FOR BLACK-BOX ADVERSARIAL ATTACK
    Ji, Yimu
    Ding, Jianyu
    Chen, Zhiyu
    Wu, Fei
    Zhang, Chi
    Sun, Yiming
    Sun, Jing
    Liu, Shangdong
    2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2022, : 636 - 640
  • [6] Black-box Adversarial Attack against Visual Interpreters for Deep Neural Networks
    Hirose, Yudai
    Ono, Satoshi
    2023 18TH INTERNATIONAL CONFERENCE ON MACHINE VISION AND APPLICATIONS, MVA, 2023,
  • [7] Amora: Black-box Adversarial Morphing Attack
    Wang, Run
    Juefei-Xu, Felix
    Guo, Qing
    Huang, Yihao
    Xie, Xiaofei
    Ma, Lei
    Liu, Yang
    MM '20: PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, 2020, : 1376 - 1385
  • [8] Adversarial Eigen Attack on Black-Box Models
    Zhou, Linjun
    Cui, Peng
    Zhang, Xingxuan
    Jiang, Yinan
    Yang, Shiqiang
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 15233 - 15241
  • [9] A black-Box adversarial attack for poisoning clustering
    Cina, Antonio Emanuele
    Torcinovich, Alessandro
    Pelillo, Marcello
    PATTERN RECOGNITION, 2022, 122
  • [10] Saliency Attack: Towards Imperceptible Black-box Adversarial Attack
    Dai, Zeyu
    Liu, Shengcai
    Li, Qing
    Tang, Ke
    ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2023, 14 (03)