Deep fusion of gray level co-occurrence matrices for lung nodule classification

Cited by: 9
Authors
Saihood, Ahmed [1 ,2 ]
Karshenas, Hossein [1 ]
Nilchi, Ahmad Reza Naghsh [1 ]
Affiliations
[1] Univ Isfahan, Fac Comp Engn, Dept Artificial Intelligence, Esfahan, Iran
[2] Univ Thi Qar, Fac Comp Sci & Math, Nasiriyah, Thi Qar, Iraq
Source
PLOS ONE | 2022, Vol. 17, No. 9
Funding
UK Research and Innovation (UKRI);
Keywords
NEURAL-NETWORK; COMPUTERIZED DETECTION; PULMONARY NODULES; CANCER; SHAPE; SEGMENTATION; TEXTURE;
DOI
10.1371/journal.pone.0274516
CLC Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biosciences]; N [General Natural Sciences];
Discipline Codes
07; 0710; 09;
Abstract
Lung cancer is a serious threat to human health, with millions dying because of its late diagnosis. The computed tomography (CT) scan of the chest is an efficient method for early detection and classification of lung nodules. The requirement for high accuracy in analyzing CT scan images is a significant challenge in detecting and classifying lung cancer. In this paper, a new deep fusion structure based on long short-term memory (LSTM) is introduced, which is applied to the texture features computed from lung nodules through new volumetric gray-level co-occurrence matrices (GLCMs), classifying the nodules into benign, malignant, and ambiguous. Also, an improved Otsu segmentation method combined with the water strider optimization algorithm (WSA) is proposed to detect the lung nodules. WSA-Otsu thresholding overcomes the fixed-threshold and execution-time restrictions of previous thresholding methods. Extensive experiments assess this fusion structure by considering 2D-GLCMs based on 2D slices and by approximating the proposed 3D-GLCM computations with volumetric 2.5D-GLCMs. The proposed methods are trained and assessed on the LIDC-IDRI dataset. The accuracy, sensitivity, and specificity obtained for 2D-GLCM fusion are 94.4%, 91.6%, and 95.8%, respectively. For 2.5D-GLCM fusion, the accuracy, sensitivity, and specificity are 97.33%, 96%, and 98%, respectively. For 3D-GLCM, the accuracy, sensitivity, and specificity of the proposed fusion structure reached 98.7%, 98%, and 99%, respectively, outperforming most state-of-the-art counterparts. The results and analysis also indicate that the WSA-Otsu method requires a shorter execution time and yields more accurate thresholding.
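To make the two building blocks named in the abstract concrete, the following is a minimal, illustrative Python sketch, not the authors' code. It uses scikit-image's standard graycomatrix/graycoprops for Haralick-style GLCM texture features and threshold_otsu as a plain-Otsu baseline. The use of three orthogonal central planes as a stand-in for the paper's volumetric 2.5D-GLCMs, and exhaustive-search Otsu in place of the proposed WSA-Otsu, are assumptions made only for illustration.

# Illustrative sketch (not the authors' method): GLCM texture features
# per plane and a baseline Otsu threshold. Requires scikit-image >= 0.19.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from skimage.filters import threshold_otsu

def glcm_features(slice_2d, distances=(1,),
                  angles=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Haralick-style statistics from one 2D slice (a 2D-GLCM)."""
    glcm = graycomatrix(slice_2d, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "dissimilarity", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def glcm_features_25d(volume):
    """Approximate volumetric texture with GLCMs of the three orthogonal
    central planes (axial, coronal, sagittal). The plane choice is an
    assumption standing in for the paper's 2.5D-GLCM construction."""
    z, y, x = (s // 2 for s in volume.shape)
    planes = (volume[z, :, :], volume[:, y, :], volume[:, :, x])
    return np.hstack([glcm_features(p) for p in planes])

# Toy nodule volume: a brighter blob on a dark, noisy background.
rng = np.random.default_rng(0)
vol = rng.integers(0, 60, size=(32, 32, 32)).astype(np.uint8)
vol[10:22, 10:22, 10:22] += 150

features = glcm_features_25d(vol)  # 60-dim descriptor: 5 stats x 4 angles x 3 planes
t = threshold_otsu(vol)            # plain Otsu (exhaustive threshold search)
mask = vol > t                     # candidate nodule voxels
print(features.shape, t, mask.mean())

In the paper, per-plane texture features of this kind feed an LSTM-based fusion network, and the Otsu search is replaced by water strider optimization to avoid the exhaustive scan; here the features are simply concatenated and plain Otsu is used for brevity.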
Pages: 26