CCNet: Criss-Cross Attention for Semantic Segmentation

Cited by: 105
Authors
Huang, Zilong [1 ]
Wang, Xinggang [1 ]
Wei, Yunchao [2 ]
Huang, Lichao [3 ]
Shi, Humphrey [4 ,5 ]
Liu, Wenyu [1 ]
Huang, Thomas S. [5 ]
Affiliations
[1] Huazhong Univ Sci & Technol, Sch Elect Informat & Commun, Wuhan 430074, Peoples R China
[2] Univ Technol Sydney, Fac Engn & Informat Technol, Ctr Artificial Intelligence, Ultimo, NSW 2007, Australia
[3] Horizon Robot, Beijing, Peoples R China
[4] Univ Oregon, Eugene, OR 97403 USA
[5] Univ Illinois, Champaign, IL 61820 USA
Keywords
Semantic segmentation; graph attention; criss-cross network; context modeling; neural networks; model
DOI
10.1109/TPAMI.2020.3007032
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Contextual information is vital in visual understanding problems such as semantic segmentation and object detection. We propose a criss-cross network (CCNet) for obtaining full-image contextual information in a very effective and efficient way. Concretely, for each pixel, a novel criss-cross attention module harvests the contextual information of all the pixels on its criss-cross path. With a further recurrent operation, each pixel can finally capture full-image dependencies. In addition, a category consistent loss is proposed to encourage the criss-cross attention module to produce more discriminative features. Overall, CCNet has the following merits: 1) GPU memory friendly: compared with the non-local block, the proposed recurrent criss-cross attention module requires 11x less GPU memory. 2) High computational efficiency: the recurrent criss-cross attention reduces the FLOPs of the non-local block by about 85 percent. 3) State-of-the-art performance: we conduct extensive experiments on the semantic segmentation benchmarks Cityscapes and ADE20K, the human parsing benchmark LIP, the instance segmentation benchmark COCO, and the video segmentation benchmark CamVid. In particular, our CCNet achieves mIoU scores of 81.9, 45.76, and 55.47 percent on the Cityscapes test set, the ADE20K validation set, and the LIP validation set, respectively, which are new state-of-the-art results. The source code is available at https://github.com/speedinghzl/CCNet
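For illustration, the following is a minimal PyTorch sketch of the criss-cross attention step described in the abstract: each pixel attends only to the pixels in its own row and column, so one pass computes h + w attention weights per pixel instead of the h x w weights of a non-local block, which is the source of the quoted memory and FLOP savings. The module and parameter names here (CrissCrossAttention, the query/key/value 1x1 convolutions, the channel-reduction factor of 8) are illustrative assumptions, not the authors' exact code; consult the linked repository for the reference implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CrissCrossAttention(nn.Module):
    """For each pixel, attend over the pixels on its criss-cross path."""

    def __init__(self, in_channels):
        super().__init__()
        self.query = nn.Conv2d(in_channels, in_channels // 8, kernel_size=1)
        self.key = nn.Conv2d(in_channels, in_channels // 8, kernel_size=1)
        self.value = nn.Conv2d(in_channels, in_channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q, k, v = self.query(x), self.key(x), self.value(x)
        # Column affinities: energy_h[b, i, j, i'] = <q(i, j), k(i', j)>.
        energy_h = torch.bmm(
            q.permute(0, 3, 2, 1).reshape(b * w, h, -1),
            k.permute(0, 3, 1, 2).reshape(b * w, -1, h),
        ).reshape(b, w, h, h).permute(0, 2, 1, 3)          # (b, h, w, h)
        # Mask the column's center entry so the pixel itself is counted
        # only once (it also appears in the row branch below).
        mask = torch.diag(torch.full((h,), float("-inf"), device=x.device))
        energy_h = energy_h + mask.view(h, 1, h)
        # Row affinities: energy_w[b, i, j, j'] = <q(i, j), k(i, j')>.
        energy_w = torch.bmm(
            q.permute(0, 2, 3, 1).reshape(b * h, w, -1),
            k.permute(0, 2, 1, 3).reshape(b * h, -1, w),
        ).reshape(b, h, w, w)                              # (b, h, w, w)
        # One softmax over the h + w positions of the criss-cross path.
        attn = F.softmax(torch.cat([energy_h, energy_w], dim=-1), dim=-1)
        attn_h, attn_w = attn[..., :h], attn[..., h:]
        # Aggregate values along the column ...
        out_h = torch.bmm(
            v.permute(0, 3, 1, 2).reshape(b * w, c, h),
            attn_h.permute(0, 2, 1, 3).reshape(b * w, h, h).transpose(1, 2),
        ).reshape(b, w, c, h).permute(0, 2, 3, 1)          # (b, c, h, w)
        # ... and along the row.
        out_w = torch.bmm(
            v.permute(0, 2, 1, 3).reshape(b * h, c, w),
            attn_w.reshape(b * h, w, w).transpose(1, 2),
        ).reshape(b, h, c, w).permute(0, 2, 1, 3)          # (b, c, h, w)
        return self.gamma * (out_h + out_w) + x

The recurrent operation from the abstract then amounts to applying the same module twice with shared weights (R = 2): the first pass spreads each pixel's information along its row and column, and the second pass lets every pixel reach every other pixel through one intermediate criss-cross, e.g.:

cca = CrissCrossAttention(512)
feats = torch.randn(2, 512, 33, 33)
for _ in range(2):  # two passes give every pixel full-image context
    feats = cca(feats)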
Pages: 6896-6908
Number of pages: 13