CCNet: Criss-Cross Attention for Semantic Segmentation

Cited by: 101
Authors
Huang, Zilong [1 ]
Wang, Xinggang [1 ]
Wei, Yunchao [2 ]
Huang, Lichao [3 ]
Shi, Humphrey [4 ,5 ]
Liu, Wenyu [1 ]
Huang, Thomas S. [5 ]
Affiliations
[1] Huazhong Univ Sci & Technol, Sch Elect Informat & Commun, Wuhan 430074, Peoples R China
[2] Univ Technol Sydney, Fac Engn & Informat Technol, Ctr Artificial Intelligence, Ultimo, NSW 2007, Australia
[3] Horizon Robot, Beijing, Peoples R China
[4] Univ Oregon, Eugene, OR 97403 USA
[5] Univ Illinois, Champaign, IL 61820 USA
Keywords
Semantic segmentation; graph attention; criss-cross network; context modeling; NEURAL-NETWORKS; MODEL;
DOI
10.1109/TPAMI.2020.3007032
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Contextual information is vital in visual understanding problems, such as semantic segmentation and object detection. We propose a criss-cross network (CCNet) for obtaining full-image contextual information in a very effective and efficient way. Concretely, for each pixel, a novel criss-cross attention module harvests the contextual information of all the pixels on its criss-cross path. By taking a further recurrent operation, each pixel can finally capture full-image dependencies. In addition, a category consistent loss is proposed to enforce the criss-cross attention module to produce more discriminative features. Overall, CCNet has the following merits: 1) GPU memory friendly. Compared with the non-local block, the proposed recurrent criss-cross attention module requires 11x less GPU memory. 2) High computational efficiency. The recurrent criss-cross attention reduces FLOPs by about 85 percent compared with the non-local block. 3) State-of-the-art performance. We conduct extensive experiments on the semantic segmentation benchmarks Cityscapes and ADE20K, the human parsing benchmark LIP, the instance segmentation benchmark COCO, and the video segmentation benchmark CamVid. In particular, CCNet achieves mIoU scores of 81.9, 45.76, and 55.47 percent on the Cityscapes test set, the ADE20K validation set, and the LIP validation set, respectively, which are new state-of-the-art results. The source code is available at https://github.com/speedinghzl/CCNet
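The core idea in the abstract can be illustrated with a minimal NumPy sketch: each pixel attends only to the H + W - 1 pixels on its row and column, and applying the module twice lets information reach every other pixel. This is an illustrative re-implementation, not the authors' fused CUDA module; function names and the shared-Q/K/V recurrence are simplifications for clarity.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def criss_cross_attention(Q, K, V):
    """For each pixel, attend only to the H + W - 1 pixels on its
    criss-cross path (its row plus its column, center counted once).
    Q, K: (H, W, d) query/key maps; V: (H, W, c) value map."""
    H, W, _ = Q.shape
    out = np.zeros_like(V)
    for i in range(H):
        for j in range(W):
            # Keys/values on the criss-cross path of pixel (i, j):
            # the full row, plus the column with row i removed.
            ks = np.concatenate([K[i, :], np.delete(K[:, j], i, axis=0)])
            vs = np.concatenate([V[i, :], np.delete(V[:, j], i, axis=0)])
            w = softmax(ks @ Q[i, j])   # affinities, shape (H + W - 1,)
            out[i, j] = w @ vs          # aggregated contextual feature
    return out

def recurrent_cca(X, R=2):
    """Recurrent application: with R = 2 any pixel (i, j) can receive
    information from any pixel (m, n) via the intermediate pixels
    (i, n) and (m, j), yielding full-image dependencies."""
    out = X
    for _ in range(R):
        out = criss_cross_attention(out, out, out)  # shared Q/K/V for brevity
    return out
```

The memory claim in the abstract follows from the path length: each pixel stores an attention map of size H + W - 1 instead of the H x W map a non-local block needs, which is where the roughly 11x GPU-memory saving at typical feature resolutions comes from.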
Pages: 6896-6908
Page count: 13
Related Papers
50 records total
  • [1] CCNet: Criss-Cross Attention for Semantic Segmentation
    Huang, Zilong
    Wang, Xinggang
    Huang, Lichao
    Huang, Chang
    Wei, Yunchao
    Liu, Wenyu
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 603 - 612
  • [2] Criss-cross
    van der Nagel, R.
    van Dijk, V. F.
    NETHERLANDS HEART JOURNAL, 2018, 26 (05) : 280 - +
  • [3] CRISS-CROSS
    MARTARELLA, F
    NEW REPUBLIC, 1978, 178 (25) : 7 - 7
  • [5] Criss-cross
    R. van der Nagel
    V. F. van Dijk
    Netherlands Heart Journal, 2018, 26 (5) : 283 - 284
  • [6] ATCC: Accurate tracking by criss-cross location attention
    Wu, Yong
    Liu, Zhi
    Zhou, Xiaofei
    Ye, Linwei
    Wang, Yang
    IMAGE AND VISION COMPUTING, 2021, 111
  • [7] Criss-cross Puzzle
    Liu, Yiwen
    中学生百科, 2007, (14) : 52 - 53
  • [8] A criss-cross heart
    Dogan, Vehbi
    Ozgur, Senem
    Ertugrul, Ilker
    Yoldas, Tamer
    Koc, Murat
    Orun, Utku A.
    Karademir, Selmin
    TURK GOGUS KALP DAMAR CERRAHISI DERGISI-TURKISH JOURNAL OF THORACIC AND CARDIOVASCULAR SURGERY, 2016, 24 (01): : 117 - 121
  • [9] Criss-cross Puzzle
    Liu, Yiwen
    中学生百科, 2007, (11) : 46 - 46
  • [10] Criss-cross Puzzle
    Liu, Yiwen
    中学生百科, 2007, (08) : 50 - 50