Neighborhood Attention Transformer

Cited by: 115
Authors
Hassani, Ali [1 ,2 ]
Walton, Steven [1 ,2 ]
Li, Jiachen [1 ,2 ]
Li, Shen [4 ]
Shi, Humphrey [1 ,2 ,3 ]
Affiliations
[1] Univ Oregon, SHI Labs, Eugene, OR 97403 USA
[2] UIUC, Champaign, IL 61801 USA
[3] Picsart AI Res PAIR, New York, NY USA
[4] Meta Facebook AI, Menlo Pk, CA USA
DOI
10.1109/CVPR52729.2023.00599
CLC classification number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We present Neighborhood Attention (NA), the first efficient and scalable sliding window attention mechanism for vision. NA is a pixel-wise operation that localizes self-attention (SA) to the nearest neighboring pixels, and therefore enjoys linear time and space complexity, in contrast to the quadratic complexity of SA. The sliding window pattern allows NA's receptive field to grow without extra pixel shifts, and preserves translational equivariance, unlike Swin Transformer's Window Self Attention (WSA). We develop NATTEN (Neighborhood Attention Extension), a Python package with efficient C++ and CUDA kernels, which allows NA to run up to 40% faster than Swin's WSA while using up to 25% less memory. We further present Neighborhood Attention Transformer (NAT), a new hierarchical transformer design based on NA that boosts image classification and downstream vision performance. Experimental results on NAT are competitive; NAT-Tiny reaches 83.2% top-1 accuracy on ImageNet, 51.4% mAP on MS-COCO, and 48.4% mIoU on ADE20K, improvements of 1.9% ImageNet accuracy, 1.0% COCO mAP, and 2.6% ADE20K mIoU over a Swin model of similar size. To support more research based on sliding window attention, we open-source our project and release our checkpoints.
Pages: 6185-6194
Number of pages: 10
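As a companion to the abstract, the following is a minimal, unoptimized PyTorch sketch of single-head neighborhood attention, written purely for illustration: each query pixel attends to the k x k window of its nearest neighbors, with the window clamped at the image borders so every query always sees k^2 keys. The function name naive_neighborhood_attention and the explicit Python loop are this sketch's own; the actual NATTEN package implements the same operation with fused C++/CUDA kernels and multi-head support.

    # Naive single-head Neighborhood Attention sketch (not the NATTEN kernels).
    import torch
    import torch.nn.functional as F

    def naive_neighborhood_attention(q, k, v, kernel_size=7):
        """q, k, v: (B, H, W, C) tensors; returns a (B, H, W, C) tensor."""
        B, H, W, C = q.shape
        r = kernel_size // 2
        out = torch.empty_like(q)
        for i in range(H):
            # Clamp the window start so each query sees a full kernel_size-wide
            # window even at the borders (edge pixels attend to their nearest
            # kernel_size * kernel_size neighbors).
            i0 = min(max(i - r, 0), max(H - kernel_size, 0))
            for j in range(W):
                j0 = min(max(j - r, 0), max(W - kernel_size, 0))
                keys = k[:, i0:i0 + kernel_size, j0:j0 + kernel_size].reshape(B, -1, C)
                vals = v[:, i0:i0 + kernel_size, j0:j0 + kernel_size].reshape(B, -1, C)
                query = q[:, i, j].unsqueeze(1)                      # (B, 1, C)
                # Scaled dot-product attention restricted to the local window.
                attn = F.softmax((query @ keys.transpose(1, 2)) * C ** -0.5, dim=-1)
                out[:, i, j] = (attn @ vals).squeeze(1)              # (B, C)
        return out

    if __name__ == "__main__":
        x = torch.randn(2, 14, 14, 32)
        y = naive_neighborhood_attention(x, x, x, kernel_size=7)
        print(y.shape)  # torch.Size([2, 14, 14, 32])

Because each query scores only k^2 keys rather than all H x W positions, the cost grows linearly with the number of pixels, which is the complexity argument made in the abstract.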
Related papers
50 records in total
  • [1] Spectral Spatial Neighborhood Attention Transformer for Hyperspectral Image Classification
    Arshad, Tahir
    Zhang, Junping
    Anyembe, Shibwabo C.
    Mehmood, Aamir
    CANADIAN JOURNAL OF REMOTE SENSING, 2024, 50 (01)
  • [2] Image super-resolution using dilated neighborhood attention transformer
    Chen, Li
    Zuo, Jinnian
    Du, Kai
    Zou, Jinsong
    Yin, Shaoyun
    Wang, Jinyu
    JOURNAL OF ELECTRONIC IMAGING, 2024, 33 (02)
  • [3] Neighborhood attention transformer multiple instance learning for whole slide image classification
    Aftab, Rukhma
    Yan, Qiang
    Zhao, Juanjuan
    Yong, Gao
    Huajie, Yue
    Urrehman, Zia
    Khalid, Faizi Mohammad
    FRONTIERS IN ONCOLOGY, 2024, 14
  • [4] Multiscale Neighborhood Attention Transformer With Optimized Spatial Pattern for Hyperspectral Image Classification
    Qiao, Xin
    Roy, Swalpa Kumar
    Huang, Weimin
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2023, 61
  • [5] DNA-T: Deformable Neighborhood Attention Transformer for Irregular Medical Time Series
    Huang, Jianxuan
    Yang, Baoyao
    Yin, Kejing
    Xu, Jingwen
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2024, 28 (07) : 4224 - 4237
  • [6] CAT-Unet: An enhanced U-Net architecture with coordinate attention and skip-neighborhood attention transformer for medical image segmentation
    Ding, Zhiquan
    Zhang, Yuejin
    Zhu, Chenxin
    Zhang, Guolong
    Li, Xiong
    Jiang, Nan
    Que, Yue
    Peng, Yuanyuan
    Guan, Xiaohui
    INFORMATION SCIENCES, 2024, 670
  • [7] Neighborhood Contrastive Transformer for Change Captioning
    Tu, Yunbin
    Li, Liang
    Su, Li
    Lu, Ke
    Huang, Qingming
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 9518 - 9529
  • [8] NA-segformer: A multi-level transformer model based on neighborhood attention for colonoscopic polyp segmentation
    Liu, Dong
    Lu, Chao
    Sun, Haonan
    Gao, Shouping
    SCIENTIFIC REPORTS, 2024, 14 (01)
  • [9] AiATrack: Attention in Attention for Transformer Visual Tracking
    Gao, Shenyuan
    Zhou, Chunluan
    Ma, Chao
    Wang, Xinggang
    Yuan, Junsong
    COMPUTER VISION, ECCV 2022, PT XXII, 2022, 13682 : 146 - 164
  • [10] Enhancing Hyperspectral Image Classification for Land Use Land Cover With Dilated Neighborhood Attention Transformer and Crow Search Optimization
    Tejasree, Ganji
    Loganathan, Agilandeeswari
    IEEE ACCESS, 2024, 12 : 59361 - 59385