Improvement of chest X-ray image segmentation accuracy based on FCA-Net

Cited by: 2
Authors
Wahyuningrum, Rima Tri [1 ,4 ]
Yunita, Indah [1 ]
Siradjuddin, Indah Agustien [1 ]
Satoto, Budi Dwi [1 ]
Sari, Amillia Kartika [2 ]
Sensusiati, Anggraini Dwi [3 ]
Affiliations
[1] Univ Trunojoyo Madura, Fac Engn, Dept Informat Engn, Bangkalan, Indonesia
[2] Univ Airlangga, Fac Vocat Studies, Dept Hlth, Surabaya, Indonesia
[3] Univ Airlangga, Med Fac, Dept Radiol, Surabaya, Indonesia
[4] Univ Trunojoyo, Madura Inst, Fac Engn, Dept Informat Engn, Bangkalan 69162, Indonesia
Source
COGENT ENGINEERING | 2023, Vol. 10, No. 1
Keywords
lung segmentation; chest X-ray; FCA-Net; attention module; deep learning;
DOI
10.1080/23311916.2023.2229571
Chinese Library Classification
T [Industrial Technology];
Discipline Classification Code
08;
Abstract
Medical image segmentation is a crucial stage in computer vision and image processing that helps make the later diagnosis process more accurate, because segmentation of medical images such as X-rays can extract tissue, organs, and pathological structures. However, medical image processing, particularly segmentation, faces significant challenges in feature representation, because medical images differ from other images in contrast, blur, and noise. This study proposes lung segmentation of chest X-ray images based on deep learning with the FCA-Net (Fully Convolutional Attention Network) architecture. In addition, attention modules, namely spatial attention and channel attention, are added to the Res2Net encoder so that it can represent features better. The research was conducted on chest X-ray images from Qatar University available in the Kaggle repository: 1,500 images of 256 x 256 pixels were divided into 10% testing data and 90% training data. The training data were then processed with K-fold cross-validation from K = 2 to K = 10. Experiments were conducted with scenarios using spatial attention, channel attention, and a combination of spatial and channel attention. The best results in this study were obtained with the combination of spatial and channel attention at a K-fold value of K = 5, yielding a DSC (Dice Similarity Coefficient) of 97.24% and an IoU (Intersection over Union) of 94.66% on the testing data. These results are better than those of the UNet++, DeepLabV3+, and SegNet architectures.
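The DSC and IoU values reported in the abstract follow the standard definitions for binary segmentation masks. As a minimal illustrative sketch (not the authors' code), they can be computed from predicted and ground-truth masks as:

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice Similarity Coefficient: 2|A ∩ B| / (|A| + |B|)."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    # Both masks empty counts as a perfect match.
    return 2.0 * intersection / denom if denom else 1.0

def iou(pred, target):
    """Intersection over Union (Jaccard index): |A ∩ B| / |A ∪ B|."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return intersection / union if union else 1.0
```

For example, a prediction covering two pixels where the ground truth covers one overlapping pixel gives DSC = 2/3 and IoU = 1/2, which shows why the reported DSC (97.24%) exceeds the IoU (94.66%) on the same masks.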
Pages: 16
References
16 records
[1]   Fully convolutional attention network for biomedical image segmentation [J].
Cheng, Junlong ;
Tian, Shengwei ;
Yu, Long ;
Lu, Hongchun ;
Lv, Xiaoyi .
ARTIFICIAL INTELLIGENCE IN MEDICINE, 2020, 107
[2]  
Chest X-ray, 2020, CHEST XRAY
[3]   COVID-19 infection map generation and detection from chest X-ray images [J].
Degerli, Aysen ;
Ahishali, Mete ;
Yamac, Mehmet ;
Kiranyaz, Serkan ;
Chowdhury, Muhammad E. H. ;
Hameed, Khalid ;
Hamid, Tahir ;
Mazhar, Rashid ;
Gabbouj, Moncef .
HEALTH INFORMATION SCIENCE AND SYSTEMS, 2021, 9 (01)
[4]   Res2Net: A New Multi-Scale Backbone Architecture [J].
Gao, Shang-Hua ;
Cheng, Ming-Ming ;
Zhao, Kai ;
Zhang, Xin-Yu ;
Yang, Ming-Hsuan ;
Torr, Philip .
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2021, 43 (02) :652-662
[5]   An Efficient Variant of Fully-Convolutional Network for Segmenting Lung Fields from Chest Radiographs [J].
Hooda, Rahul ;
Mittal, Ajay ;
Sofat, Sanjeev .
WIRELESS PERSONAL COMMUNICATIONS, 2018, 101 (03) :1559-1579
[6]  
Intisar Rizwan I. Haque, 2020, Informatics in Medicine Unlocked, V18, DOI 10.1016/j.imu.2020.100297
[7]  
Kalinovsky A., 2016, 13 INT C PATT REC IN, P21
[8]   LF-SegNet: A Fully Convolutional Encoder-Decoder Network for Segmenting Lung Fields from Chest Radiographs [J].
Mittal, Ajay ;
Hooda, Rahul ;
Sofat, Sanjeev .
WIRELESS PERSONAL COMMUNICATIONS, 2018, 101 (01) :511-529
[9]   Segmentation of Lungs in Chest X-Ray Image Using Generative Adversarial Networks [J].
Munawar, Faizan ;
Azmat, Shoaib ;
Iqbal, Talha ;
Gronlund, Christer ;
Ali, Hazrat .
IEEE ACCESS, 2020, 8 :153535-153545
[10]  
RSUA Radiologi, 2023, MENDELEY DATA, V1, DOI 10.17632/2jg8vfdmpm.1