Bottleneck Transformers for Visual Recognition

Cited by: 830
Authors
Srinivas, Aravind [1]
Lin, Tsung-Yi [2]
Parmar, Niki [2]
Shlens, Jonathon [2]
Abbeel, Pieter [1]
Vaswani, Ashish [2]
Affiliations
[1] University of California, Berkeley, Berkeley, CA 94720, USA
[2] Google Research, Mountain View, CA, USA
DOI: 10.1109/CVPR46437.2021.01625
Chinese Library Classification: TP18 (Theory of artificial intelligence)
Subject classification codes: 081104; 0812; 0835; 1405
Abstract
We present BoTNet, a conceptually simple yet powerful backbone architecture that incorporates self-attention for multiple computer vision tasks including image classification, object detection and instance segmentation. By just replacing the spatial convolutions with global self-attention in the final three bottleneck blocks of a ResNet and no other changes, our approach improves upon the baselines significantly on instance segmentation and object detection while also reducing the parameters, with minimal overhead in latency. Through the design of BoTNet, we also point out how ResNet bottleneck blocks with self-attention can be viewed as Transformer blocks. Without any bells and whistles, BoTNet achieves 44.4% Mask AP and 49.7% Box AP on the COCO Instance Segmentation benchmark using the Mask R-CNN framework, surpassing the previous best published single-model, single-scale results of ResNeSt [67] evaluated on the COCO validation set. Finally, we present a simple adaptation of the BoTNet design for image classification, resulting in models that achieve a strong performance of 84.7% top-1 accuracy on the ImageNet benchmark while being up to 1.64x faster in "compute" time than the popular EfficientNet models on TPU-v3 hardware. We hope our simple and effective approach will serve as a strong baseline for future research in self-attention models for vision.
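A minimal sketch may help make the abstract's core change concrete: inside a ResNet bottleneck block, the 3x3 spatial convolution is swapped for global multi-head self-attention (MHSA), which is also why the resulting block can be read as a Transformer block. The PyTorch code below is a hypothetical illustration, not the authors' implementation: the class names (MHSA2d, BoTBlock) are invented here, the paper's factorized relative position encodings are simplified to a learned absolute position embedding, and the strided block variants are omitted.

```python
# Hypothetical sketch of a BoT block, assuming PyTorch. Not the reference
# implementation: the paper uses factorized *relative* position encodings,
# simplified here to a learned absolute embedding.
import torch
import torch.nn as nn


class MHSA2d(nn.Module):
    """Global multi-head self-attention over an H x W feature map."""

    def __init__(self, dim: int, heads: int = 4, h: int = 14, w: int = 14):
        super().__init__()
        self.heads = heads
        self.scale = (dim // heads) ** -0.5
        # 1x1 conv producing queries, keys, and values in one pass.
        self.qkv = nn.Conv2d(dim, dim * 3, kernel_size=1, bias=False)
        # Simplified learned position embedding (assumption; the paper
        # injects relative position information into the attention logits).
        self.pos = nn.Parameter(torch.randn(1, dim, h, w) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q, k, v = self.qkv(x + self.pos).chunk(3, dim=1)

        def split_heads(t: torch.Tensor) -> torch.Tensor:
            # (b, c, h, w) -> (b, heads, head_dim, h*w)
            return t.reshape(b, self.heads, c // self.heads, h * w)

        q, k, v = split_heads(q), split_heads(k), split_heads(v)
        # Every output location attends to every input location; this is
        # the "global" self-attention the abstract refers to.
        attn = torch.softmax(q.transpose(-2, -1) @ k * self.scale, dim=-1)
        out = v @ attn.transpose(-2, -1)
        return out.reshape(b, c, h, w)


class BoTBlock(nn.Module):
    """ResNet bottleneck with MHSA in place of the 3x3 convolution."""

    def __init__(self, in_dim: int, mid_dim: int, heads: int = 4,
                 h: int = 14, w: int = 14):
        super().__init__()
        out_dim = mid_dim * 4  # standard ResNet expansion factor
        self.net = nn.Sequential(
            nn.Conv2d(in_dim, mid_dim, 1, bias=False),
            nn.BatchNorm2d(mid_dim), nn.ReLU(inplace=True),
            MHSA2d(mid_dim, heads, h, w),      # <- replaces the 3x3 conv
            nn.BatchNorm2d(mid_dim), nn.ReLU(inplace=True),
            nn.Conv2d(mid_dim, out_dim, 1, bias=False),
            nn.BatchNorm2d(out_dim),
        )
        self.proj = (nn.Conv2d(in_dim, out_dim, 1, bias=False)
                     if in_dim != out_dim else nn.Identity())
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.relu(self.net(x) + self.proj(x))


# Usage on a c5-sized feature map (14x14 at 1024 channels for a 448px input).
x = torch.randn(2, 1024, 14, 14)
block = BoTBlock(in_dim=1024, mid_dim=512)
print(block(x).shape)  # torch.Size([2, 2048, 14, 14])
```

Seen this way, the 1x1 projections wrapped around the attention layer loosely mirror a Transformer block's pointwise layers, which is the correspondence the abstract alludes to (the paper notes remaining differences such as BatchNorm in place of LayerNorm).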
Pages: 16514-16524 (11 pages)
Related papers (50 in total)
  • [1] Is there a serial bottleneck in visual object recognition?
    Popovkina, Dina V.
    Palmer, John
    Moore, Cathleen M.
    Boynton, Geoffrey M.
    JOURNAL OF VISION, 2021, 21(3): 1-21
  • [2] AutoFormer: Searching Transformers for Visual Recognition
    Chen, Minghao
    Peng, Houwen
    Fu, Jianlong
    Ling, Haibin
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021: 12250-12260
  • [3] DEEP COMPLEMENTARY BOTTLENECK FEATURES FOR VISUAL SPEECH RECOGNITION
    Petridis, Stavros
    Pantic, Maja
    2016 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING PROCEEDINGS, 2016: 2304-2308
  • [4] EXTRACTING DEEP BOTTLENECK FEATURES FOR VISUAL SPEECH RECOGNITION
    Sui, Chao
    Togneri, Roberto
    Bennamoun, Mohammed
    2015 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING (ICASSP), 2015: 1518-1522
  • [5] A neural basis of the serial bottleneck in visual word recognition
    Strother, Lars
    PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA, 2019, 116(20): 9699-9700
  • [6] Visual word recognition: Evidence for a serial bottleneck in lexical access
    White, Alex L.
    Palmer, John
    Boynton, Geoffrey M.
    ATTENTION PERCEPTION & PSYCHOPHYSICS, 2020, 82(4): 2000-2017
  • [7] AdaptFormer: Adapting Vision Transformers for Scalable Visual Recognition
    Chen, Shoufa
    Ge, Chongjian
    Tong, Zhan
    Wang, Jiangliu
    Song, Yibing
    Wang, Jue
    Luo, Ping
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022
  • [8] Information Bottleneck Domain Adaptation with Privileged Information for Visual Recognition
    Motiian, Saeid
    Doretto, Gianfranco
    COMPUTER VISION - ECCV 2016, PT VII, 2016, 9911: 630-647
  • [9] Information Bottleneck Learning Using Privileged Information for Visual Recognition
    Motiian, Saeid
    Piccirilli, Marco
    Adjeroh, Donald A.
    Doretto, Gianfranco
    2016 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2016: 1496-1505