Adversarial attack can help visual tracking

Cited: 0
Authors
Cho, Sungmin [1 ]
Kim, Hyeseong [1 ]
Kim, Ji Soo [1 ]
Kim, Hyomin [1 ]
Kwon, Junseok [1 ]
Affiliations
[1] Chung Ang Univ, Sch Comp Sci & Engn, Seoul, South Korea
Keywords
Adversarial Attack; Visual Tracking; Noise-injected Markov chain Monte Carlo;
DOI
10.1007/s11042-022-12789-0
CLC Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
We present a novel noise-injected Markov chain Monte Carlo (NMCMC) method for visual tracking, which enables fast convergence through adversarial attacks. The proposed NMCMC consists of three steps: noise-injected proposal, acceptance, and validation. We intentionally inject noise into the proposal function to cause a shift in a direction that is opposite to the moving direction of a target, which is viewed in the context of an adversarial attack. This noise injection mathematically induces the proposed visual tracker to find a target proposal distribution using a small number of samples, which allows the tracker to be robust to drifting. Experimental results demonstrate that our method achieves state-of-the-art performance, especially when severe perturbations caused by an adversarial attack exist in the target state.
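The abstract names three steps (noise-injected proposal, acceptance, validation) but, as an abstract, gives no equations. The following is a minimal, hypothetical Python sketch of how such a sampler could be organized, assuming a Metropolis-style acceptance rule, a toy Gaussian observation model, and a simple score threshold for validation; every function name, parameter, and threshold below is an illustrative assumption, not the paper's actual formulation.

# Minimal sketch of the three NMCMC steps described in the abstract
# (noise-injected proposal, acceptance, validation). All names, the
# likelihood model, and the validation threshold are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def likelihood(state, target):
    # Hypothetical appearance likelihood: a Gaussian score around the
    # target position, standing in for the tracker's observation model.
    return np.exp(-0.5 * np.sum((state - target) ** 2) / 4.0)

def noise_injected_proposal(state, velocity, sigma=1.0, alpha=0.5):
    # Step 1: propose a new state, injecting a shift opposite to the
    # target's estimated moving direction (the "adversarial" noise).
    adversarial_shift = -alpha * velocity
    return state + adversarial_shift + rng.normal(0.0, sigma, size=state.shape)

def nmcmc_track(init_state, velocity, target, n_samples=200):
    state = np.asarray(init_state, dtype=float)
    samples = []
    for _ in range(n_samples):
        proposal = noise_injected_proposal(state, velocity)
        # Step 2: Metropolis acceptance via the likelihood ratio.
        ratio = likelihood(proposal, target) / max(likelihood(state, target), 1e-12)
        if rng.random() < min(1.0, ratio):
            state = proposal
        # Step 3: validation -- retain only samples whose score clears
        # a threshold, a crude stand-in for the paper's validation step.
        if likelihood(state, target) > 1e-3:
            samples.append(state.copy())
    return np.mean(samples, axis=0) if samples else state

# Toy usage: estimate a 2-D target state given a constant velocity prior.
estimate = nmcmc_track(init_state=[0.0, 0.0],
                       velocity=np.array([1.0, 0.5]),
                       target=np.array([3.0, 2.0]))
print("estimated target position:", estimate)

Under these assumptions, the opposing shift in the proposal spreads samples against the motion prior, which is one plausible reading of how the injected noise helps the sampler cover the target distribution with fewer samples.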
Pages: 35283 - 35292
Page count: 10
Related Papers
50 records in total
  • [31] IMPROVED REAL-TIME VISUAL TRACKING VIA ADVERSARIAL LEARNING
    Zhong, Haoxiang
    Yan, Xiyu
    Jiang, Yong
    Xia, Shu-Tao
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 1853 - 1857
  • [32] I-VITAL: Information aided visual tracking with adversarial learning
    Dasari, Mohana Murali
    Kuchibhotla, Hari Chandana
    Rajiv, Aravind
    Gorthi, Rama Krishna
    DISPLAYS, 2023, 77
  • [33] Topology-aware universal adversarial attack on 3D object tracking
    Riran Cheng
    Xupeng Wang
    Ferdous Sohel
    Hang Lei
    Visual Intelligence, 1 (1)
  • [34] Can Generative Adversarial Networks help to overcome the limited data problem in segmentation?
    Heilemann, Gerd
    Matthewman, Mark
    Kuess, Peter
    Goldner, Gregor
    Widder, Joachim
    Georg, Dietmar
    Zimmermann, Lukas
    ZEITSCHRIFT FUR MEDIZINISCHE PHYSIK, 2022, 32 (03): 361 - 368
  • [35] Adversarial Attack against Modeling Attack on PUFs
    Wang, Sying-Jyan
    Chen, Yu-Shen
    Li, Katherine Shu-Min
    PROCEEDINGS OF THE 2019 56TH ACM/EDAC/IEEE DESIGN AUTOMATION CONFERENCE (DAC), 2019,
  • [36] Adversarial Attacks Impact on the Neural Network Performance and Visual Perception of Data under Attack
    Usoltsev, Yakov
    Lodonova, Balzhit
    Shelupanov, Alexander
    Konev, Anton
    Kostyuchenko, Evgeny
    INFORMATION, 2022, 13 (02)
  • [37] Black-box Adversarial Attack against Visual Interpreters for Deep Neural Networks
    Hirose, Yudai
    Ono, Satoshi
    2023 18TH INTERNATIONAL CONFERENCE ON MACHINE VISION AND APPLICATIONS, MVA, 2023,
  • [38] Context-Guided Black-Box Attack for Visual Tracking
    Huang, Xingsen
    Miao, Deshui
    Wang, Hongpeng
    Wang, Yaowei
    Li, Xin
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 8824 - 8835
  • [39] TransNoise: Transferable Universal Adversarial Noise for Adversarial Attack
    Wei, Yier
    Gao, Haichang
    Wang, Yufei
    Liu, Huan
    Gao, Yipeng
    Luo, Sainan
    Guo, Qianwen
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT V, 2023, 14258 : 193 - 205
  • [40] Multi-Model UNet: An Adversarial Defense Mechanism for Robust Visual Tracking
    Suttapak, Wattanapong
    Zhang, Jianfu
    Zhao, Haohuo
    Zhang, Liqing
    NEURAL PROCESSING LETTERS, 2024, 56 (02)