Progressive Feature Enhancement Network for Automated Colorectal Polyp Segmentation

Cited by: 2
Authors
Yue, Guanghui [1 ]
Xiao, Houlu [1 ]
Zhou, Tianwei [2 ]
Tan, Songbai [2 ]
Liu, Yun [3 ]
Yan, Weiqing [4 ]
Affiliations
[1] Shenzhen Univ, Guangdong Key Lab Biomed Measurements & Ultrasound, Natl Reg Key Technol Engn Lab Med Ultrasound, Sch Biomed Engn,Med Sch, Shenzhen 518054, Peoples R China
[2] Shenzhen Univ, Coll Management, Shenzhen 518060, Peoples R China
[3] Liaoning Univ, Coll Informat, Shenyang 110036, Peoples R China
[4] Yantai Univ, Sch Comp & Control Engn, Yantai 261400, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Feature extraction; Transformers; Image segmentation; Accuracy; Task analysis; Medical diagnostic imaging; Decoding; Deep neural network; colorectal polyp segmentation; feature enhancement; colonoscopy image; computer-aided diagnosis; VALIDATION; REFINEMENT; ATTENTION;
DOI
10.1109/TASE.2024.3430896
Chinese Library Classification
TP [Automation technology; computer technology];
Subject Classification Code
0812;
Abstract
In recent years, colorectal polyp segmentation has attracted increasing attention in both academia and industry. Although most existing methods achieve commendable results, they often struggle to localize challenging polyps with complex backgrounds, variable shapes/sizes, and ambiguous boundaries, owing to limitations in modeling global context and in cross-layer feature interaction. To cope with these challenges, this paper proposes a novel Progressive Feature Enhancement Network (PFENet) for polyp segmentation. Specifically, PFENet follows an encoder-decoder structure and uses the pyramid vision transformer as the encoder to capture multi-scale long-term dependencies at different stages. A cross-stage feature enhancement (CFE) module is embedded in each stage. The CFE module enhances feature representation through interaction among adjacent stages, which helps integrate scale information for recognizing polyps with complex backgrounds and variable shapes/sizes. In addition, a foreground boundary co-enhancement (FBC) module is used at each decoder stage to simultaneously enhance foreground and boundary information by incorporating the output of the adjacent higher stage and the coarse segmentation map, which is generated by fusing the features of all four stages via a coarse map generation module. Through top-down connections of the FBC modules, PFENet progressively refines the prediction in a coarse-to-fine manner. Extensive experiments show the effectiveness of PFENet on the polyp segmentation task, with mIoU and mDic values over 0.886 and 0.931 on two in-domain datasets and over 0.735 and 0.809 on three out-of-domain datasets.
Note to Practitioners: Automated and accurate polyp segmentation in colonoscopy images is a critical prerequisite for the subsequent detection, removal, and diagnosis of polyps in clinical practice. This paper proposes a novel deep neural network for polyp segmentation, termed PFENet, with a CFE module that enhances feature representation to better capture polyps with complex backgrounds and variable shapes/sizes, and an FBC module that simultaneously enhances foreground and boundary information on the feature representations provided by the CFE module. Qualitative and quantitative results on five public datasets show that PFENet yields accurate predictions and outperforms nine state-of-the-art polyp segmentation methods. The proposed PFENet can facilitate computer-aided diagnosis systems in clinical practice, where it can better support medical decision-making in polyp detection and removal than competing methods.
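The decoder pipeline described in the abstract (a coarse segmentation map fused from all four encoder stages, then top-down refinement toward the finest stage) can be sketched structurally. The snippet below is a minimal NumPy illustration of that coarse-to-fine data flow only, not the authors' implementation: the function names, nearest-neighbor upsampling, and simple additive fusion are illustrative assumptions standing in for the paper's CFE/FBC modules.

```python
import numpy as np

def upsample2x(x):
    # Nearest-neighbor 2x upsampling of a 2-D (H, W) map.
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def coarse_map(stage_feats):
    # Fuse all four stage features into one coarse map by upsampling
    # each to the finest stage's resolution and averaging (stage i is
    # assumed to be 2**i times smaller than stage 0).
    acc = np.zeros(stage_feats[0].shape)
    for i, f in enumerate(stage_feats):
        for _ in range(i):
            f = upsample2x(f)
        acc += f
    return acc / len(stage_feats)

def refine(feat, higher, coarse):
    # FBC-style step (sketch): combine the current stage's feature with
    # the upsampled higher-stage output and the coarse-map guidance.
    return feat + upsample2x(higher) + coarse

def progressive_decode(stage_feats):
    # Top-down pass: start from the coarsest stage and refine toward
    # the finest, in a coarse-to-fine manner.
    coarse = coarse_map(stage_feats)
    out = stage_feats[-1]
    for i in range(len(stage_feats) - 2, -1, -1):
        # Downsample the coarse map to stage i's resolution (sketch: striding).
        c = coarse[::2 ** i, ::2 ** i]
        out = refine(stage_feats[i], out, c)
    return out

# Toy multi-scale features at H/4 ... H/32 analogues: 32, 16, 8, 4.
feats = [np.ones((32 // 2 ** i, 32 // 2 ** i)) for i in range(4)]
pred = progressive_decode(feats)  # full-resolution refined map
```

The point of the sketch is the wiring: every decoder step consumes its own stage's feature, the refined output of the adjacent higher stage, and the globally fused coarse map, which is the progressive coarse-to-fine scheme the abstract describes.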
Pages: 5792-5803
Page count: 12