Context-Guided Black-Box Attack for Visual Tracking

Cited by: 0
Authors
Huang, Xingsen [1 ,2 ]
Miao, Deshui [1 ]
Wang, Hongpeng [1 ,2 ]
Wang, Yaowei [2 ]
Li, Xin [2 ]
Affiliations
[1] Harbin Inst Technol Shenzhen, Sch Comp Sci & Technol, Shenzhen 518055, Peoples R China
[2] Peng Cheng Lab, Shenzhen 518055, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Target tracking; Feature extraction; Visualization; Transformers; Interference; Image reconstruction; Robustness; Visual tracking; adversarial attack;
DOI
10.1109/TMM.2024.3382473
CLC number
TP [Automation technology, computer technology];
Discipline code
0812;
Abstract
With the recent advancement of deep neural networks, visual tracking has achieved substantial progress in tracking accuracy. However, the robustness and security of tracking methods built on current deep models have not been thoroughly explored, a critical consideration for real-world applications. In this study, we propose a context-guided black-box attack method to investigate the robustness of recent advanced deep trackers against spatial and temporal interference. For spatial interference, the proposed algorithm generates adversarial target samples by mixing the information of the target object and the similar background regions around it in the embedded feature space of an encoder-decoder model, which evaluates the ability of trackers to handle background distractors. For temporal interference, we use the target state in the previous frame to generate the adversarial sample, which easily fools trackers that rely too heavily on tracking priors, such as the assumption that the appearance changes and movements of a target object are small between two consecutive frames. We assess the proposed attack method under both CNN-based and transformer-based tracking frameworks on four diverse datasets: OTB100, VOT2018, GOT-10k, and LaSOT. The experimental results demonstrate that our approach substantially degrades the performance of all these deep trackers across the datasets, even in the black-box attack mode. This reveals the weak robustness of recent deep tracking methods against background distractors and prior dependencies.
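The spatial-interference idea in the abstract — blending target features with features of similar nearby background regions inside an encoder-decoder's embedding space, then decoding back to image space — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the linear `encode`/`decode` stand-ins, the mixing coefficient `alpha`, and the function name `context_guided_mix` are all assumptions introduced here; the actual method uses a learned encoder-decoder model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins for the learned encoder/decoder weights
# (the real model in the paper is a trained deep network).
W_enc = rng.standard_normal((16, 64)) * 0.1  # 64-dim patch -> 16-dim feature
W_dec = rng.standard_normal((64, 16)) * 0.1  # 16-dim feature -> 64-dim patch

def encode(x):
    """Map an image patch into the embedded feature space."""
    return W_enc @ x

def decode(z):
    """Reconstruct an image patch from an embedded feature."""
    return W_dec @ z

def context_guided_mix(target_patch, distractor_patch, alpha=0.3):
    """Blend the target's feature with a similar background (distractor)
    region's feature, then decode the mixture into an adversarial sample.
    alpha controls how much background context contaminates the target."""
    z_target = encode(target_patch)
    z_distractor = encode(distractor_patch)
    z_adv = (1.0 - alpha) * z_target + alpha * z_distractor
    return decode(z_adv)

# Flattened 8x8 patches standing in for the target and a nearby distractor.
target = rng.standard_normal(64)
distractor = rng.standard_normal(64)
adv_sample = context_guided_mix(target, distractor, alpha=0.3)
print(adv_sample.shape)  # (64,)
```

With `alpha=0` the function simply reconstructs the target, so the adversarial strength of the sample grows continuously with `alpha` — mirroring how the attack probes a tracker's tolerance to background distractors.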
Pages: 8824-8835
Page count: 12