Adversarial attack can help visual tracking

Times Cited: 0
Authors
Cho, Sungmin [1 ]
Kim, Hyeseong [1 ]
Kim, Ji Soo [1 ]
Kim, Hyomin [1 ]
Kwon, Junseok [1 ]
Affiliations
[1] Chung Ang Univ, Sch Comp Sci & Engn, Seoul, South Korea
Keywords
Adversarial Attack; Visual Tracking; Noise-injected Markov chain Monte Carlo;
DOI
10.1007/s11042-022-12789-0
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812 ;
Abstract
We present a novel noise-injected Markov chain Monte Carlo (NMCMC) method for visual tracking, which enables fast convergence through adversarial attacks. The proposed NMCMC consists of three steps: noise-injected proposal, acceptance, and validation. We intentionally inject noise into the proposal function to cause a shift in a direction that is opposite to the moving direction of a target, which is viewed in the context of an adversarial attack. This noise injection mathematically induces the proposed visual tracker to find a target proposal distribution using a small number of samples, which allows the tracker to be robust to drifting. Experimental results demonstrate that our method achieves state-of-the-art performance, especially when severe perturbations caused by an adversarial attack exist in the target state.
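The abstract outlines three steps: a noise-injected proposal that drifts opposite to the target's motion, a standard acceptance test, and a validation check. The paper's exact formulation is not given here, so the following is only a minimal sketch, assuming a 2-D target position, a Gaussian random-walk proposal, and a generic appearance-likelihood function; all names (nmcmc_step, step_std, noise_scale, accept_floor) are illustrative, not from the paper.

```python
# Minimal sketch of one noise-injected proposal / acceptance / validation cycle,
# assuming a 2-D target state and an unnormalized appearance likelihood.
import numpy as np

def nmcmc_step(state, velocity, likelihood, rng,
               step_std=4.0, noise_scale=2.0, accept_floor=1e-6):
    """One hypothetical NMCMC-style update (illustrative only).

    state      : current 2-D target position estimate
    velocity   : recent per-frame motion of the target
    likelihood : callable mapping a state to an (unnormalized) appearance score
    """
    # 1) Noise-injected proposal: random walk plus a drift pointing opposite
    #    to the target's moving direction (the "adversarial" shift).
    drift = -noise_scale * velocity
    proposal = state + drift + rng.normal(scale=step_std, size=2)

    # 2) Acceptance: Metropolis ratio on the appearance likelihood, assuming a
    #    symmetric proposal (the paper's exact acceptance rule may differ).
    p_cur, p_new = likelihood(state), likelihood(proposal)
    alpha = min(1.0, p_new / max(p_cur, 1e-12))
    accepted = rng.uniform() < alpha

    # 3) Validation: discard proposals whose score is implausibly low,
    #    guarding against drift onto background clutter.
    if accepted and p_new > accept_floor:
        return proposal, True
    return state, False


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_target = np.array([120.0, 80.0])
    # Toy Gaussian appearance score centered on the true target position.
    score = lambda s: np.exp(-np.sum((s - true_target) ** 2) / (2 * 15.0 ** 2))

    state = np.array([100.0, 100.0])
    velocity = np.array([2.0, -1.0])  # assumed recent motion estimate
    for _ in range(200):
        state, _ = nmcmc_step(state, velocity, score, rng)
    print("final estimate:", state)
```

With only a handful of samples per frame, the injected drift keeps the chain from collapsing onto the previous state, which is the intuition behind the claimed robustness to drifting; the constants above are arbitrary toy values.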
Pages: 35283-35292
Page count: 10
Related Papers
50 records total
  • [21] Imperceptible adversarial attack via spectral sensitivity of human visual system
    Chiang, Chen-Kuo
    Lin, Ying-Dar
    Hwang, Ren-Hung
    Lin, Po-Ching
    Chang, Shih-Ya
    Li, Hao-Ting
    MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 83 (20) : 59291 - 59315
  • [22] Adversarial attack and defense algorithms towards space visual object detection
    Zhou D.
    Sun G.-H.
    Wu L.-G.
    Kongzhi yu Juece/Control and Decision, 2024, 39 (07) : 2161 - 2168
  • [23] Towards universal and sparse adversarial examples for visual object tracking
    Sheng, Jingjing
    Zhang, Dawei
    Chen, Jianxin
    Xiao, Xin
    Zheng, Zhonglong
    APPLIED SOFT COMPUTING, 2024, 153
  • [24] Can We Mitigate Backdoor Attack Using Adversarial Detection Methods?
    Jin, Kaidi
    Zhang, Tianwei
    Shen, Chao
    Chen, Yufei
    Fan, Ming
    Lin, Chenhao
    Liu, Ting
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2023, 20 (04) : 2867 - 2881
  • [25] Disease surveillance: Can AI help detect a bioterrorist attack?
    Ingebretsen, Mark
    IEEE INTELLIGENT SYSTEMS, 2007, 22 (06) : 4 - 6
  • [26] Transferable Adversarial Attack on 3D Object Tracking in Point Cloud
    Liu, Xiaoqiong
    Lin, Yuewei
    Yang, Qing
    Fan, Heng
    MULTIMEDIA MODELING, MMM 2023, PT II, 2023, 13834 : 446 - 458
  • [27] Decoupling visual and identity features for adversarial palm-vein image attack
    Yang, Jiacheng
    Wong, Wai Keung
    Fei, Lunke
    Zhao, Shuping
    Wen, Jie
    Teng, Shaohua
    NEURAL NETWORKS, 2024, 180
  • [28] Optical Adversarial Attack
    Gnanasambandam, Abhiram
    Sherman, Alex M.
    Chan, Stanley H.
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW 2021), 2021, : 92 - 101
  • [29] Distributionally Adversarial Attack
    Zheng, Tianhang
    Chen, Changyou
    Ren, Kui
    THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / NINTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2019, : 2253 - 2260
  • [30] Unsupervised cycle-consistent adversarial attacks for visual object tracking
    Yao, Rui
    Zhu, Xiangbin
    Zhou, Yong
    Shao, Zhiwen
    Hu, Fuyuan
    Zhang, Yanning
    DISPLAYS, 2023, 80