Deep fusion of gray level co-occurrence matrices for lung nodule classification

Cited: 9
Authors
Saihood, Ahmed [1 ,2 ]
Karshenas, Hossein [1 ]
Nilchi, Ahmad Reza Naghsh [1 ]
Affiliations
[1] Univ Isfahan, Fac Comp Engn, Dept Artificial Intelligence, Esfahan, Iran
[2] Univ Thi Qar, Fac Comp Sci & Math, Nasiriyah, Thi Qar, Iraq
Source
PLOS ONE | 2022, Vol. 17, Issue 09
Funding
UK Research and Innovation (UKRI);
Keywords
NEURAL-NETWORK; COMPUTERIZED DETECTION; PULMONARY NODULES; CANCER; SHAPE; SEGMENTATION; TEXTURE;
DOI
10.1371/journal.pone.0274516
CLC Number
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Discipline Code
07; 0710; 09;
Abstract
Lung cancer is a serious threat to human health, with millions dying because of late diagnosis. Computed tomography (CT) scanning of the chest is an efficient method for the early detection and classification of lung nodules. The requirement for high accuracy in analyzing CT scan images is a significant challenge in detecting and classifying lung cancer. In this paper, a new deep fusion structure based on long short-term memory (LSTM) is introduced and applied to texture features computed from lung nodules through new volumetric grey-level co-occurrence matrices (GLCMs), classifying the nodules as benign, malignant, or ambiguous. In addition, an improved Otsu segmentation method combined with the water strider optimization algorithm (WSA) is proposed to detect the lung nodules. WSA-Otsu thresholding overcomes the fixed-threshold and execution-time limitations of previous thresholding methods. Extensive experiments assess this fusion structure by considering 2D-GLCMs based on 2D slices and approximating the proposed 3D-GLCM computations with volumetric 2.5D-GLCMs. The proposed methods are trained and evaluated on the LIDC-IDRI dataset. The accuracy, sensitivity, and specificity obtained for 2D-GLCM fusion are 94.4%, 91.6%, and 95.8%, respectively. For 2.5D-GLCM fusion, they are 97.33%, 96%, and 98%, respectively. For 3D-GLCM, the accuracy, sensitivity, and specificity of the proposed fusion structure reach 98.7%, 98%, and 99%, respectively, outperforming most state-of-the-art counterparts. The results also indicate that the WSA-Otsu method requires a shorter execution time and yields a more accurate thresholding process.
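To make the abstract's building blocks concrete, here is a minimal sketch of the two classical components it names: a single-offset 2D GLCM with one Haralick texture feature (contrast), and the exhaustive baseline Otsu threshold. The paper extends the GLCM to volumetric 2.5D/3D co-occurrences and replaces the exhaustive Otsu search with the water strider optimization algorithm; neither extension is shown here, and all function names are illustrative, not from the paper.

```python
import numpy as np

def glcm_2d(image, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one 2D slice and one (dx, dy) offset.

    Counts how often quantized gray level i co-occurs with level j at the
    given displacement, then normalizes the counts into probabilities.
    """
    img = np.asarray(image)
    # Quantize intensities into `levels` bins.
    q = np.floor(img.astype(float) / (img.max() + 1) * levels).astype(int)
    glcm = np.zeros((levels, levels), dtype=float)
    h, w = q.shape
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                glcm[q[y, x], q[ny, nx]] += 1
    total = glcm.sum()
    return glcm / total if total > 0 else glcm

def haralick_contrast(glcm):
    """Contrast texture feature: sum over i, j of P(i, j) * (i - j)^2."""
    i, j = np.indices(glcm.shape)
    return float((glcm * (i - j) ** 2).sum())

def otsu_threshold(image, bins=256):
    """Classic Otsu threshold: pick the cut maximizing between-class variance.

    This is the exhaustive histogram search; the paper's contribution is to
    drive this search with the WSA metaheuristic instead.
    """
    hist, edges = np.histogram(image, bins=bins)
    p = hist / hist.sum()
    omega = np.cumsum(p)                  # class-0 probability up to each bin
    mu = np.cumsum(p * np.arange(bins))   # cumulative mean up to each bin
    mu_t = mu[-1]                         # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    k = int(np.argmax(sigma_b))
    return edges[k + 1]                   # threshold at the upper edge of bin k
```

In the paper's pipeline, GLCMs like the one above (computed per slice, then stacked into 2.5D/3D volumes) feed the LSTM-based fusion classifier, while the Otsu step isolates nodule candidates beforehand.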
Pages: 26