SMFDNet: spatial and multi-frequency domain network for OCT angiography retinal vessel segmentation

Cited by: 1
Authors
Li, Sien [1 ]
Ma, Fei [1 ]
Yan, Fen [2 ]
Meng, Jing [1 ]
Guo, Yanfei [1 ]
Liu, Hongjuan [1 ]
Cheng, Ronghua [1 ]
Affiliations
[1] Qufu Normal Univ, Sch Comp Sci, Rizhao, Peoples R China
[2] Qufu Peoples Hosp, Ultrasound Med Dept, Qufu, Peoples R China
Funding
National Science Foundation (USA);
Keywords
Segmentation; Deep learning; OCT angiography; Retinal image segmentation; Image segmentation;
DOI
10.1007/s11227-025-06985-6
Chinese Library Classification
TP3 [Computing technology, computer technology];
Discipline Classification Code
0812;
Abstract
Optical coherence tomography angiography (OCTA) is a non-invasive imaging technique, and automatic segmentation of retinal vessels is crucial for understanding ocular diseases and making informed clinical decisions. However, the automatic segmentation of retinal vessels in OCTA images is particularly challenging due to several inherent issues. Retinal vessels often exhibit low contrast against the surrounding tissue, making it difficult to distinguish them clearly. Additionally, the complex and irregular branching structures of retinal vessels, along with the presence of noise and artefacts in OCTA images, further complicate the segmentation task. To address these challenges, we propose a novel method, the spatial and multi-frequency domain-based segmentation network (SMFDNet), specifically designed for vessel segmentation in OCTA fundus images. This network effectively combines spatial and multi-frequency domain features to enhance the segmentation accuracy of retinal vessels. To demonstrate the superiority of our proposed network, we conduct experiments on the Retinal Vessels Images in OCTA (REVIO), Retinal OCT-Angiography Vessel Segmentation (ROSE) and Optical Coherence Tomography Angiography-500 (OCTA-500) datasets. The extensive experimental results show that our approach consistently outperforms state-of-the-art methods, particularly in handling low contrast and complex vessel structures. The code is available at https://kyanbis.github.io/SMFDNet/.
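The abstract describes combining spatial features with multi-frequency domain features. The record does not specify how SMFDNet implements this, so the following is only a minimal illustrative sketch of the general idea, not the authors' architecture: an image is decomposed into radial frequency bands via a 2D FFT, and the resulting band maps are stacked with the raw (spatial-domain) image as a multi-channel feature. The function names `multi_frequency_features` and `fuse` are hypothetical.

```python
import numpy as np

def multi_frequency_features(img: np.ndarray, n_bands: int = 3) -> np.ndarray:
    """Decompose a 2D image into radial frequency bands via FFT masking.

    Returns an array of shape (n_bands, H, W), one spatial map per band.
    The band masks partition the spectrum, so the bands sum back to `img`.
    """
    h, w = img.shape
    spec = np.fft.fftshift(np.fft.fft2(img))          # centered spectrum
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - h / 2, xx - w / 2)          # distance from DC
    r_max = radius.max()
    bands = []
    for k in range(n_bands):
        lo = r_max * k / n_bands
        hi = r_max * (k + 1) / n_bands
        if k < n_bands - 1:
            mask = (radius >= lo) & (radius < hi)
        else:
            mask = (radius >= lo) & (radius <= hi)     # include outer edge
        band = np.fft.ifft2(np.fft.ifftshift(spec * mask)).real
        bands.append(band)
    return np.stack(bands)

def fuse(img: np.ndarray) -> np.ndarray:
    """Stack the spatial-domain image with its frequency-band maps."""
    freq = multi_frequency_features(img)
    return np.concatenate([img[None], freq], axis=0)   # (1 + n_bands, H, W)
```

In a learned network, such band maps would typically feed into convolutional branches whose outputs are fused with the spatial branch; the sketch above only shows the decomposition step.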
Pages: 19