UniFormer: Unifying Convolution and Self-Attention for Visual Recognition

Cited by: 162
Authors
Li, Kunchang [1,2]
Wang, Yali [1,4]
Zhang, Junhao [3]
Gao, Peng [4]
Song, Guanglu [5]
Liu, Yu [5]
Li, Hongsheng [6]
Qiao, Yu [1,4]
Affiliations
[1] Chinese Acad Sci, Shenzhen Inst Adv Technol, ShenZhen Key Lab Comp Vis & Pattern Recognit, Shenzhen 518055, Peoples R China
[2] Univ Chinese Acad Sci, Beijing 100049, Peoples R China
[3] Natl Univ Singapore, Singapore 119077, Singapore
[4] Shanghai Artificial Intelligence Lab, Shanghai 200232, Peoples R China
[5] SenseTime Res, Shanghai 200233, Peoples R China
[6] Chinese Univ Hong Kong, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
UniFormer; convolution neural network; transformer; self-attention; visual recognition;
DOI
10.1109/TPAMI.2023.3282631
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
It is a challenging task to learn discriminative representation from images and videos, due to large local redundancy and complex global dependency in these visual data. Convolution neural networks (CNNs) and vision transformers (ViTs) have been two dominant frameworks in the past few years. Though CNNs can efficiently decrease local redundancy by convolution within a small neighborhood, the limited receptive field makes it hard to capture global dependency. Alternatively, ViTs can effectively capture long-range dependency via self-attention, while blind similarity comparisons among all the tokens lead to high redundancy. To resolve these problems, we propose a novel Unified transFormer (UniFormer), which can seamlessly integrate the merits of convolution and self-attention in a concise transformer format. Different from the typical transformer blocks, the relation aggregators in our UniFormer block are equipped with local and global token affinity respectively in shallow and deep layers, allowing it to tackle both redundancy and dependency for efficient and effective representation learning. Finally, we flexibly stack our blocks into a new powerful backbone, and adopt it for various vision tasks from the image to the video domain, from classification to dense prediction. Without any extra training data, our UniFormer achieves 86.3 top-1 accuracy on the ImageNet-1K classification task. With only ImageNet-1K pre-training, it achieves state-of-the-art performance in a broad range of downstream tasks. It obtains 82.9/84.8 top-1 accuracy on Kinetics-400/600, 60.9/71.2 top-1 accuracy on the Something-Something V1/V2 video classification tasks, 53.8 box AP and 46.4 mask AP on the COCO object detection task, 50.8 mIoU on the ADE20K semantic segmentation task, and 77.4 AP on the COCO pose estimation task. Moreover, we build an efficient UniFormer with a concise hourglass design of token shrinking and recovering, which achieves 2-4x higher throughput than the recent lightweight models.
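To make the block design in the abstract concrete, below is a minimal PyTorch-style sketch of a UniFormer-like block: a convolutional position encoding, a relation aggregator that uses local token affinity (depthwise convolution) in shallow layers and global token affinity (multi-head self-attention) in deep layers, and a token-wise feed-forward network. Kernel sizes, normalization choices, and module names (LocalAggregator, GlobalAggregator, UniFormerStyleBlock) are illustrative assumptions, not the authors' reference implementation.

import torch
import torch.nn as nn

class LocalAggregator(nn.Module):
    # Local token affinity via depthwise convolution (for shallow layers).
    def __init__(self, dim, kernel_size=5):
        super().__init__()
        self.dwconv = nn.Conv2d(dim, dim, kernel_size,
                                padding=kernel_size // 2, groups=dim)

    def forward(self, x):  # x: (B, C, H, W)
        return self.dwconv(x)

class GlobalAggregator(nn.Module):
    # Global token affinity via multi-head self-attention (for deep layers).
    def __init__(self, dim, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):  # x: (B, C, H, W)
        B, C, H, W = x.shape
        tokens = x.flatten(2).transpose(1, 2)        # (B, H*W, C)
        out, _ = self.attn(tokens, tokens, tokens)   # all-pairs similarity
        return out.transpose(1, 2).reshape(B, C, H, W)

class UniFormerStyleBlock(nn.Module):
    # Position encoding + relation aggregator + feed-forward, each with a residual.
    def __init__(self, dim, use_global=False):
        super().__init__()
        self.pos = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)
        self.norm1 = nn.BatchNorm2d(dim)
        self.aggregator = GlobalAggregator(dim) if use_global else LocalAggregator(dim)
        self.norm2 = nn.BatchNorm2d(dim)
        self.ffn = nn.Sequential(nn.Conv2d(dim, 4 * dim, 1), nn.GELU(),
                                 nn.Conv2d(4 * dim, dim, 1))

    def forward(self, x):
        x = x + self.pos(x)                      # convolutional position encoding
        x = x + self.aggregator(self.norm1(x))   # local or global token affinity
        x = x + self.ffn(self.norm2(x))          # token-wise feed-forward
        return x

x = torch.randn(1, 64, 56, 56)
shallow = UniFormerStyleBlock(64, use_global=False)  # convolution-like stage
deep = UniFormerStyleBlock(64, use_global=True)      # transformer-like stage
print(deep(shallow(x)).shape)                        # torch.Size([1, 64, 56, 56])

In the backbone described above, such blocks are stacked stage by stage, with local aggregation in shallow stages and global aggregation in deep stages; the efficient variant additionally shrinks tokens before the deep stages and recovers them afterwards (the hourglass design mentioned in the abstract).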
Pages: 12581-12600
Number of pages: 20