A Structure-Aware Relation Network for Thoracic Diseases Detection and Segmentation

Cited by: 32
Authors
Lian, Jie [1]
Liu, Jingyu [2]
Zhang, Shu [1]
Gao, Kai [1]
Liu, Xiaoqing [1]
Zhang, Dingwen [3]
Yu, Yizhou [4]
Affiliations
[1] Deepwise Artificial Intelligence Lab, Beijing 100080, Peoples R China
[2] Peking Univ, Sch Elect Engn & Comp Sci, Beijing 100871, Peoples R China
[3] Northwestern Polytech Univ, Sch Automat, Brain & Artificial Intelligence Lab, Xian 710072, Peoples R China
[4] Univ Hong Kong, Dept Comp Sci, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Thoracic diseases detection and segmentation; SAR-Net; ChestX-Det; DATASET;
DOI
10.1109/TMI.2021.3070847
CLC number
TP39 [Computer Applications];
Discipline classification codes
081203; 0835;
Abstract
Instance-level detection and segmentation of thoracic diseases or abnormalities are crucial for automatic diagnosis in chest X-ray images. Leveraging constant anatomical structure and disease relations extracted from domain knowledge, we propose a structure-aware relation network (SAR-Net) that extends Mask R-CNN. SAR-Net consists of three relation modules: (1) the anatomical structure relation module, which encodes spatial relations between diseases and anatomical parts; (2) the contextual relation module, which aggregates clues based on query-key pairs of disease RoIs and lung fields; and (3) the disease relation module, which propagates co-occurrence and causal relations into disease proposals. Towards building a practical system, we also provide ChestX-Det, a chest X-ray dataset with instance-level annotations (boxes and masks). ChestX-Det is a subset of the public NIH ChestX-ray14 dataset. It contains approximately 3500 images of 13 common disease categories labeled by three board-certified radiologists. We evaluate SAR-Net on ChestX-Det and another dataset, DR-Private. Experimental results show that it significantly improves on the strong Mask R-CNN baseline. ChestX-Det is released at https://github.com/Deepwise-AILab/ChestX-Det-Dataset.
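To make the query-key aggregation described for the contextual relation module concrete, the following is a minimal PyTorch sketch, not the authors' implementation: disease-RoI features act as queries attending over lung-field features (keys and values) via scaled dot-product attention, and the aggregated context is added back to the RoI features. The class name ContextualRelationSketch, the 256-dimensional features, and the residual addition are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextualRelationSketch(nn.Module):
    """Illustrative query-key aggregation between disease RoIs and lung fields."""
    def __init__(self, dim=256):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)   # project disease RoI features to queries
        self.k_proj = nn.Linear(dim, dim)   # project lung-field features to keys
        self.v_proj = nn.Linear(dim, dim)   # project lung-field features to values
        self.scale = dim ** -0.5

    def forward(self, roi_feats, lung_feats):
        # roi_feats:  (num_rois, dim)   pooled features of disease proposals
        # lung_feats: (num_fields, dim) pooled features of lung-field regions
        q = self.q_proj(roi_feats)
        k = self.k_proj(lung_feats)
        v = self.v_proj(lung_feats)
        attn = F.softmax(q @ k.t() * self.scale, dim=-1)  # query-key affinities
        context = attn @ v                                # aggregated lung-field clues
        return roi_feats + context                        # context-enriched RoI features

# Usage: enrich 8 disease proposals with context from 2 lung-field regions.
module = ContextualRelationSketch(dim=256)
enriched = module(torch.randn(8, 256), torch.randn(2, 256))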
Pages: 2042-2052
Number of pages: 11