Retinal artery/vein classification by multi-channel multi-scale fusion network

Cited: 4
Authors
Yi, Junyan [1 ]
Chen, Chouyu [1 ]
Yang, Gang [2 ]
Affiliations
[1] Beijing Univ Civil Engn & Architecture, Dept Comp Sci & Technol, Beijing, Peoples R China
[2] Renmin Univ China, Sch Informat, Beijing, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
A/V classification; Vessel segmentation; Multi-channel; Feature fusion; VESSEL SEGMENTATION; U-NET;
DOI
10.1007/s10489-023-04939-0
CLC classification
TP18 [Artificial intelligence theory]
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Automatic artery/vein (A/V) classification in retinal fundus images plays a significant role in detecting vascular abnormalities and could speed up the diagnosis of various systemic diseases. Deep-learning methods have been extensively employed in this task; however, the scarcity of annotated data and severe class imbalance constrain the performance of existing methods. To address these limitations, we propose a novel multi-channel multi-scale fusion network (MMF-Net) that enhances vessel structural information to constrain the A/V classification. First, the newly designed multi-channel (MM) module extracts the vessel structure from the original fundus image with frequency filters, increasing the proportion of blood-vessel pixels and reducing the influence of background pixels. Second, the MMF-Net introduces a multi-scale transformation (MT) module, which efficiently extracts information from the multi-channel feature representations. Third, the MMF-Net utilizes a multi-feature fusion (MF) module to improve the robustness of A/V classification by splitting and reorganizing pixel features from different scales. We validate our results on several public benchmark datasets. The experimental results show that the proposed method achieves the best results compared with existing state-of-the-art methods, which demonstrates the superior performance of the MMF-Net. A highly optimized Python implementation of our method is released at: https://github.com/chenchouyu/MMF_Net.
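The core idea of the multi-channel module, as the abstract describes it, is to separate thin vessel structure from the smooth fundus background with frequency filters. A minimal NumPy sketch of that idea is shown below; the ideal high-pass mask and the `cutoff` value are illustrative assumptions, not the paper's exact filter design.

```python
import numpy as np

def frequency_channel(image: np.ndarray, cutoff: float = 0.1) -> np.ndarray:
    """High-pass filter one image channel in the frequency domain.

    Thin vessels are high-frequency relative to the smooth fundus
    background, so suppressing low frequencies raises the proportion
    of vessel pixels in the response. The ideal-filter shape and the
    cutoff fraction here are illustrative assumptions only.
    """
    f = np.fft.fftshift(np.fft.fft2(image))
    rows, cols = image.shape
    cy, cx = rows // 2, cols // 2
    y, x = np.ogrid[:rows, :cols]
    radius = np.sqrt((y - cy) ** 2 + (x - cx) ** 2)
    mask = radius > cutoff * min(rows, cols)  # keep only high frequencies
    filtered = np.fft.ifft2(np.fft.ifftshift(f * mask))
    return np.abs(filtered)

# Toy fundus-like image: flat background plus one thin bright "vessel" row.
img = np.full((64, 64), 0.5)
img[32, :] += 0.4
enhanced = frequency_channel(img)
```

After filtering, the flat background is largely suppressed while the vessel row keeps a strong response, which is the pixel-proportion effect the abstract attributes to the multi-channel module.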
Pages: 26400-26417
Page count: 18