Learning Feature Restoration Transformer for Robust Dehazing Visual Object Tracking

Cited: 0
Authors
Xu, Tianyang [1 ]
Pan, Yifan [1 ]
Feng, Zhenhua [2 ,3 ]
Zhu, Xuefeng [1 ]
Cheng, Chunyang [1 ]
Wu, Xiao-Jun [1 ]
Kittler, Josef [2 ,3 ]
Affiliations
[1] Jiangnan Univ, Sch Artificial Intelligence & Comp Sci, Wuxi 214122, Peoples R China
[2] Univ Surrey, Sch Comp Sci & Elect Engn, Guildford GU2 7XH, England
[3] Univ Surrey, Ctr Vis Speech & Signal Proc, Guildford GU2 7XH, England
Funding
UK Engineering and Physical Sciences Research Council (EPSRC); National Natural Science Foundation of China;
Keywords
Visual object tracking; Dehazing system; Siamese tracker; Feature restoration;
DOI
10.1007/s11263-024-02182-9
Chinese Library Classification
TP18 [Artificial intelligence theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In recent years, deep-learning-based visual object tracking has achieved promising results. However, a drastic performance drop is observed when a pre-trained model is transferred to changing weather conditions, such as hazy imaging scenarios, where the data distribution differs from that of the natural training set. This problem hinders the practical application of accurate target tracking in open-world settings. In principle, visual tracking performance depends on how discriminative the extracted features are between the target and its surroundings, rather than on image-level visual quality. To this end, we design a feature restoration transformer that adaptively enhances the representation capability of the extracted visual features for robust tracking in both natural and hazy scenarios. Specifically, the feature restoration transformer is constructed with dedicated self-attention hierarchies that refine potentially contaminated deep feature maps. We endow the feature extraction process with a refinement mechanism tailored to hazy imaging scenarios, establishing a tracking system that is robust against foggy videos. In essence, the feature restoration transformer is jointly trained with a Siamese tracking transformer, so the supervision for learning discriminative and salient features is provided by the entire restoration-tracking system. The experimental results obtained on hazy imaging scenarios demonstrate the merits and superiority of the proposed restoration tracking system, with restoration power complementary to image-level dehazing. In addition, consistent advantages of our design are observed when it is generalised to different video attributes, demonstrating its capacity to deal with open-world scenarios.
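The abstract describes the architecture only at a high level: a self-attention module that restores haze-contaminated backbone features before they are matched by a Siamese tracking head, with both parts trained jointly. The following is a minimal sketch of that idea under stated assumptions; all module names, dimensions, and the naive cross-correlation head are illustrative choices, not the authors' implementation.

```python
# Minimal sketch (assumption): a transformer block that refines flattened
# backbone feature maps, followed by a toy Siamese matching step.
# Module names, depths, and dimensions are hypothetical, not from the paper.
import torch
import torch.nn as nn


class FeatureRestorationBlock(nn.Module):
    """Self-attention block that refines a flattened feature map."""

    def __init__(self, dim: int = 256, num_heads: int = 8, mlp_ratio: int = 4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, mlp_ratio * dim),
            nn.GELU(),
            nn.Linear(mlp_ratio * dim, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, H*W, C) tokens from a backbone feature map
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h)
        x = x + attn_out                  # residual self-attention refinement
        x = x + self.mlp(self.norm2(x))   # position-wise feed-forward
        return x


class RestorationSiameseHead(nn.Module):
    """Toy Siamese matching: restore both branches, then cross-correlate."""

    def __init__(self, dim: int = 256, depth: int = 2):
        super().__init__()
        self.restore = nn.Sequential(
            *[FeatureRestorationBlock(dim) for _ in range(depth)]
        )

    def forward(self, z: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # z: template features (B, C, Hz, Wz); x: search features (B, C, Hx, Wx)
        B, C, Hz, Wz = z.shape
        _, _, Hx, Wx = x.shape
        z_tok = self.restore(z.flatten(2).transpose(1, 2))  # (B, Hz*Wz, C)
        x_tok = self.restore(x.flatten(2).transpose(1, 2))  # (B, Hx*Wx, C)
        # Naive similarity between restored search and template tokens
        score = torch.einsum("bic,bjc->bij", x_tok, z_tok).mean(dim=-1)
        return score.view(B, 1, Hx, Wx)


if __name__ == "__main__":
    head = RestorationSiameseHead(dim=256)
    z = torch.randn(2, 256, 8, 8)    # template feature map
    x = torch.randn(2, 256, 16, 16)  # search-region feature map
    print(head(z, x).shape)          # torch.Size([2, 1, 16, 16])
```

Because the restoration blocks sit inside the tracking pipeline, any training loss applied to the matching score would back-propagate through them, which is the joint-training behaviour the abstract refers to; the actual tracking transformer and loss design in the paper are more elaborate than this toy correlation head.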
Pages: 6021-6038
Number of pages: 18
Related Papers
50 items in total
  • [21] Robust Object Modeling for Visual Tracking
    Cai, Yidong
    Liu, Jie
    Tang, Jie
    Wu, Gangshan
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 9555 - 9566
  • [22] A Robust Framework for Visual Object Tracking
    Nguyen Dang Binh
    2009 IEEE-RIVF INTERNATIONAL CONFERENCE ON COMPUTING AND COMMUNICATION TECHNOLOGIES: RESEARCH, INNOVATION AND VISION FOR THE FUTURE, 2009, : 95 - 102
  • [23] Learning Rotation Adaptive Correlation Filters in Robust Visual Object Tracking
    Rout, Litu
    Raju, Priya Mariam
    Mishra, Deepak
    Gorthi, Rama Krishna Sai Subrahmanyam
    COMPUTER VISION - ACCV 2018, PT II, 2019, 11362 : 646 - 661
  • [24] Learning Variance Kernelized Correlation Filters for Robust Visual Object Tracking
    Liu, Chenghuan
    Huynh, Du Q.
    Reynolds, Mark
    2017 INTERNATIONAL CONFERENCE ON DIGITAL IMAGE COMPUTING - TECHNIQUES AND APPLICATIONS (DICTA), 2017, : 567 - 574
  • [25] Comparative Object Similarity Learning-Based Robust Visual Tracking
    Yang, Weiming
    Liu, Yuliang
    Zhang, Quan
    Zheng, Yelong
    IEEE ACCESS, 2019, 7 : 50466 - 50475
  • [26] Robust Visual Tracking via Collaborative and Reinforced Convolutional Feature Learning
    Li, Dongdong
    Kuai, Yangliu
    Wen, Gongjian
    Liu, Li
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW 2019), 2019, : 592 - 600
  • [27] Learning correlation filters in independent feature channels for robust visual tracking
    Wang, Cailing
    Xu, Yechao
    Liu, Huajun
    Jing, Xiaoyuan
    PATTERN RECOGNITION LETTERS, 2019, 127 : 94 - 102
  • [28] DeepTrack: Learning Discriminative Feature Representations Online for Robust Visual Tracking
    Li, Hanxi
    Li, Yi
    Porikli, Fatih
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2016, 25 (04) : 1834 - 1848
  • [29] Temporal relation transformer for robust visual tracking with dual-memory learning
    Nie, Guohao
    Wang, Xingmei
    Yan, Zining
    Xu, Xiaoyuan
    Liu, Bo
    APPLIED SOFT COMPUTING, 2024, 167
  • [30] TFITrack: Transformer Feature Integration Network for Object Tracking
    Hu, Xiuhua
    Liu, Huan
    Li, Shuang
    Zhao, Jing
    Hui, Yan
    INTERNATIONAL JOURNAL OF COMPUTATIONAL INTELLIGENCE SYSTEMS, 2024, 17 (01)