A Joint Siamese Attention-Aware Network for Vehicle Object Tracking in Satellite Videos

Cited by: 21
Authors
Song, Wei [1 ]
Jiao, Licheng [1 ]
Liu, Fang [1 ]
Liu, Xu [1 ]
Li, Lingling [1 ]
Yang, Shuyuan [1 ]
Hou, Biao [1 ]
Zhang, Wenhua [1 ]
Affiliations
[1] Xidian Univ, Int Res Ctr Intelligent Percept & Computat, Sch Artificial Intelligence,Joint Int Res Lab Int, Minist Educ,Key Lab Intelligent Percept & Image U, Xian 710071, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Remote sensing; Object tracking; Videos; Correlation; Satellites; Feature extraction; Convergence; Attention mechanism; satellite videos; Siamese tracker; vehicle object tracking;
DOI
10.1109/TGRS.2022.3184755
Chinese Library Classification (CLC)
P3 [Geophysics]; P59 [Geochemistry]
Subject classification codes
0708; 070902
Abstract
Remote sensing object tracking (RSOT) is a novel and challenging problem due to the negative effects of weak features and background noise. In this article, from the perspective of attention-focused deep learning, we propose a joint Siamese attention-aware network (JSANet) for efficient remote sensing tracking that contains both self-attention and cross-attention modules. First, the proposed self-attention modules emphasize interdependent channel-wise coefficients via channel attention and perform the corresponding spatial transformation of spatial-domain information via spatial attention. Second, the cross-attention modules are designed to aggregate rich contextual interdependencies between the Siamese branches via channel attention and to excavate associations that produce reliable correspondences via spatial attention. In addition, a composite feature combination strategy is designed to fuse the multiple attention features. Experimental results on the Jilin-1 satellite video datasets demonstrate that the proposed JSANet achieves state-of-the-art performance in terms of precision and success rate, confirming the effectiveness of the proposed methods.
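To make the two self-attention operations in the abstract concrete, the following is a minimal, dependency-free sketch of channel attention (reweighting each feature channel by a squashed global statistic) and spatial attention (reweighting each spatial location by a squashed cross-channel statistic). It is an illustrative toy on nested-list "feature maps", not the authors' JSANet implementation; the function names, the use of global average pooling, and the sigmoid gating are assumptions chosen for clarity.

```python
import math

def sigmoid(x):
    """Logistic squashing used as the attention gate."""
    return 1.0 / (1.0 + math.exp(-x))

def channel_attention(fmap):
    """fmap: list of C channels, each an H x W grid (nested lists).
    Each channel is scaled by a gate computed from its global mean,
    emphasizing interdependent channel-wise coefficients."""
    out = []
    for ch in fmap:
        mean = sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
        gate = sigmoid(mean)
        out.append([[v * gate for v in row] for row in ch])
    return out

def spatial_attention(fmap):
    """Each spatial location (i, j) is scaled by a gate computed from
    the cross-channel mean at that location, reweighting the spatial
    domain independently of the channel dimension."""
    C, H, W = len(fmap), len(fmap[0]), len(fmap[0][0])
    mask = [[sigmoid(sum(fmap[c][i][j] for c in range(C)) / C)
             for j in range(W)] for i in range(H)]
    return [[[fmap[c][i][j] * mask[i][j] for j in range(W)]
             for i in range(H)] for c in range(C)]
```

In a tracker such as the one described above, gates like these would be learned (e.g. via small convolutions) rather than parameter-free, and the cross-attention variant would compute the gates from the *other* Siamese branch; the data flow, however, is the same channel-wise and location-wise reweighting shown here.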
Pages: 17
Related Papers
50 total (first 10 shown)
  • [1] Deep Siamese Network With Motion Fitting for Object Tracking in Satellite Videos
    Ruan, Lu
    Guo, Yujia
    Yang, Daiqin
    Chen, Zhenzhong
    IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2022, 19
  • [2] Object Tracking in Satellite Videos Based on Siamese Network With Multidimensional Information-Aware and Temporal Motion Compensation
    Nie, Yidan
    Bian, Chunjiang
    Li, Ligang
    IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2022, 19
  • [3] Object Tracking in Unmanned Aerial Vehicle Videos via Multifeature Discrimination and Instance-Aware Attention Network
    Zhang, Shiyu
    Zhuo, Li
    Zhang, Hui
    Li, Jiafeng
    REMOTE SENSING, 2020, 12 (16)
  • [4] SiamBAG: Band Attention Grouping-Based Siamese Object Tracking Network for Hyperspectral Videos
    Li, Wei
    Hou, Zengfu
    Zhou, Jun
    Tao, Ran
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2023, 61
  • [5] Context-aware Siamese network for object tracking
    Zhang, Jianwei
    Wang, Jingchao
    Zhang, Huanlong
    Miao, Mengen
    Wu, Di
    IET IMAGE PROCESSING, 2023, 17 (01) : 215 - 226
  • [6] Light-Weight Siamese Attention Network Object Tracking for Unmanned Aerial Vehicle
    Cui Zhoujuan
    An Junshe
    Zhang Yufeng
    Cui Tianshu
    ACTA OPTICA SINICA, 2020, 40 (19)
  • [7] Visual Object Tracking by Hierarchical Attention Siamese Network
    Shen, Jianbing
    Tang, Xin
    Dong, Xingping
    Shao, Ling
    IEEE TRANSACTIONS ON CYBERNETICS, 2020, 50 (07) : 3068 - 3080
  • [8] Object-Aware Adaptive Convolution Kernel Attention Mechanism in Siamese Network for Visual Tracking
    Yuan, Dongliang
    Li, Qingdang
    Yang, Xiaohui
    Zhang, Mingyue
    Sun, Zhen
    APPLIED SCIENCES-BASEL, 2022, 12 (02):
  • [9] Attention-Aware Network and Multi-Loss Joint Training Method for Vehicle Re-Identification
    Zhou, Hui
    Li, Chen
    Zhang, Lipei
    Song, Wei
    PROCEEDINGS OF 2020 IEEE 4TH INFORMATION TECHNOLOGY, NETWORKING, ELECTRONIC AND AUTOMATION CONTROL CONFERENCE (ITNEC 2020), 2020, : 1330 - 1334
  • [10] Indistinguishable points attention-aware network for infrared small object detection
    Wang, Bo-xiao
    Song, Yan-song
    Dong, Xiao-na
    CHINESE OPTICS, 2024, 17 (03) : 538 - 547