ScopeViT: Scale-Aware Vision Transformer

Cited by: 4
Authors
Nie, Xuesong [1 ]
Jin, Haoyuan [1 ]
Yan, Yunfeng [1 ]
Chen, Xi [2 ]
Zhu, Zhihang [1 ]
Qi, Donglian [1 ]
Affiliations
[1] Zhejiang Univ, Hangzhou 310027, Peoples R China
[2] Univ Hong Kong, Hong Kong 999077, Peoples R China
Keywords
Vision transformer; Multi-scale features; Efficient attention mechanism;
DOI
10.1016/j.patcog.2024.110470
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Multi-scale features are essential for various vision tasks, such as classification, detection, and segmentation. Although Vision Transformers (ViTs) show remarkable success in capturing global features within an image, how to leverage multi-scale features in Transformers is not well explored. This paper proposes a scale-aware vision Transformer called ScopeViT that efficiently captures multi-granularity representations. Two novel attention mechanisms with lightweight computation are introduced: Multi-Scale Self-Attention (MSSA) and Global-Scale Dilated Attention (GSDA). MSSA embeds visual tokens with different receptive fields into distinct attention heads, allowing the model to perceive various scales across the network. GSDA enhances the model's understanding of the global context through a token-dilation operation, which reduces the number of tokens involved in attention computations. This dual attention method enables ScopeViT to "see" various scales throughout the entire network and effectively learn inter-object relationships, reducing the heavy quadratic computational complexity. Extensive experiments demonstrate that ScopeViT achieves competitive complexity/accuracy trade-offs compared to existing networks across a wide range of visual tasks. On the ImageNet-1K dataset, ScopeViT achieves a top-1 accuracy of 81.1%, using only 7.4M parameters and 2.0G FLOPs. Our approach outperforms Swin (ViT-based) by 1.9% accuracy while saving 42% of the parameters, outperforms MobileViTv2 (Hybrid-based) with a 0.7% accuracy gain while using 50% of the computations, and also beats ConvNeXt V2 (ConvNet-based) by 0.8% with fewer parameters.
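To make the two mechanisms described in the abstract concrete, the following is a minimal PyTorch sketch. The abstract gives no implementation details, so the module structure, per-head kernel sizes, and dilation rate below are illustrative assumptions rather than the authors' code: MSSA is approximated by giving each attention head a scale-specific depthwise-convolution view of its keys and values, and GSDA by running attention within dilated token groups that each span the whole image with fewer tokens.

```python
# Sketch only: names (MSSA, GSDA), kernel sizes, and the dilation scheme are
# assumptions inferred from the abstract, not the paper's actual implementation.
import torch
import torch.nn as nn


class MSSA(nn.Module):
    """Multi-Scale Self-Attention (sketch): each head sees keys/values
    aggregated with a different receptive field (depthwise conv kernel)."""

    def __init__(self, dim, num_heads=4, kernel_sizes=(1, 3, 5, 7)):
        super().__init__()
        assert num_heads == len(kernel_sizes) and dim % num_heads == 0
        self.num_heads, self.head_dim = num_heads, dim // num_heads
        self.scale = self.head_dim ** -0.5
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # One depthwise conv per head: larger kernel -> larger receptive field.
        self.scale_convs = nn.ModuleList([
            nn.Conv2d(self.head_dim, self.head_dim, k, padding=k // 2,
                      groups=self.head_dim)
            for k in kernel_sizes
        ])

    def forward(self, x, H, W):
        # x: (B, N, C) with N = H * W
        B, N, C = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)           # each: (B, heads, N, hd)
        outs = []
        for h, conv in enumerate(self.scale_convs):
            # Give head h a scale-specific view of its keys and values.
            kh = conv(k[:, h].transpose(1, 2).reshape(B, self.head_dim, H, W))
            vh = conv(v[:, h].transpose(1, 2).reshape(B, self.head_dim, H, W))
            kh = kh.flatten(2).transpose(1, 2)          # (B, N, hd)
            vh = vh.flatten(2).transpose(1, 2)
            attn = (q[:, h] @ kh.transpose(1, 2)) * self.scale
            outs.append(attn.softmax(dim=-1) @ vh)      # (B, N, hd)
        return self.proj(torch.cat(outs, dim=-1))


class GSDA(nn.Module):
    """Global-Scale Dilated Attention (sketch): tokens are split into dilated
    groups on the 2-D grid; attention runs within each sparse group, so each
    group covers the whole image while attending to N / r^2 tokens."""

    def __init__(self, dim, num_heads=4, dilation=2):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.r = dilation

    def forward(self, x, H, W):
        B, N, C = x.shape
        r = self.r
        x = x.reshape(B, H // r, r, W // r, r, C)
        # Group tokens by their (row % r, col % r) offset: (B*r*r, N/r^2, C).
        x = x.permute(0, 2, 4, 1, 3, 5).reshape(B * r * r, -1, C)
        out, _ = self.attn(x, x, x)
        out = out.reshape(B, r, r, H // r, W // r, C)
        return out.permute(0, 3, 1, 4, 2, 5).reshape(B, N, C)


# Quick shape check on an assumed 14x14 token grid with embedding dim 64.
tokens = torch.randn(2, 14 * 14, 64)
print(MSSA(64)(tokens, 14, 14).shape)   # torch.Size([2, 196, 64])
print(GSDA(64)(tokens, 14, 14).shape)   # torch.Size([2, 196, 64])
```

In this sketch the quadratic cost reduction claimed for GSDA comes from each dilated group attending over only N / r^2 tokens; the actual ScopeViT design may differ in how groups are formed and fused.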
Pages: 12