Triple-Supervised Convolutional Transformer Aggregation for Robust Monocular Endoscopic Dense Depth Estimation

Cited by: 1
Authors
Fan, Wenkang [1 ]
Jiang, Wenjing [1 ]
Shi, Hong [2 ]
Zeng, Hui-Qing [3 ]
Chen, Yinran [1 ]
Luo, Xiongbiao [1 ]
Affiliations
[1] Xiamen Univ, Natl Inst Data Sci Hlth & Med, Dept Comp Sci & Technol, Xiamen 361005, Peoples R China
[2] Fujian Med Univ, Canc Hosp, Fuzhou 350014, Peoples R China
[3] Xiamen Univ, Zhongshan Hosp, Xiamen 361004, Peoples R China
Source
IEEE TRANSACTIONS ON MEDICAL ROBOTICS AND BIONICS | 2024, Vol. 6, No. 3
Funding
National Natural Science Foundation of China;
Keywords
Feature extraction; Transformers; Estimation; Convolution; Convolutional codes; Lighting; Unsupervised learning; Monocular depth estimation; vision transformers; self-supervised learning; robotic-assisted endoscopy;
DOI
10.1109/TMRB.2024.3407384
CLC Number
R318 [Biomedical Engineering];
Discipline Code
0831;
Abstract
Accurate deeply learned dense depth prediction remains a challenge for monocular vision reconstruction. Compared with monocular depth estimation from natural images, endoscopic dense depth prediction is even more challenging: not only is it difficult to annotate endoscopic video data for supervised learning, but endoscopic video images also suffer from illumination variations (a limited lighting source, a limited field of view, and specular highlights) as well as smooth, textureless surfaces in complex surgical fields. This work explores a new deep learning framework of triple-supervised convolutional transformer aggregation (TSCTA) for monocular endoscopic dense depth recovery without annotating any data. Specifically, TSCTA creates convolutional transformer aggregation networks with a new hybrid encoder that combines dense convolution and scalable transformers to extract local texture features and global spatial-temporal features in parallel, and a local-global aggregation decoder that effectively fuses global and local features from coarse to fine. Moreover, we develop a self-supervised learning framework with triple supervision, which integrates minimum photometric consistency and depth consistency with sparse depth self-supervision to train our model on unannotated data. We evaluated TSCTA on unannotated monocular endoscopic images collected from various surgical procedures; the experimental results show that our method achieves a more accurate depth range, a more complete depth distribution, richer textures, and better qualitative and quantitative results than state-of-the-art deeply learned monocular dense depth estimation methods.
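As a rough illustration of the triple supervision described in the abstract, the PyTorch-style sketch below combines a per-pixel minimum photometric consistency term, a depth consistency term, and a sparse depth self-supervision term. The function names, weights, and tensor conventions are assumptions made for illustration only; they are not the authors' exact formulation.

# Illustrative sketch only: one plausible way to combine the three supervision
# signals named in the abstract. All names, weights, and helper signatures are
# hypothetical, not the paper's implementation.
import torch

def photometric_loss(target, warped_sources):
    # Per-pixel minimum photometric (L1) error over several warped source views;
    # taking the minimum suppresses occlusion and illumination outliers.
    errors = [(target - w).abs().mean(dim=1, keepdim=True) for w in warped_sources]
    return torch.min(torch.cat(errors, dim=1), dim=1, keepdim=True)[0].mean()

def depth_consistency_loss(depth_t, depth_s_warped):
    # Penalize disagreement between the target depth and the depth of a
    # neighboring frame warped into the target view.
    return ((depth_t - depth_s_warped).abs() / (depth_t + depth_s_warped + 1e-7)).mean()

def sparse_depth_loss(depth, sparse_depth, mask):
    # Supervise only at pixels where sparse depth (e.g., from SfM/SLAM) exists.
    return (mask * (depth - sparse_depth).abs()).sum() / (mask.sum() + 1e-7)

def triple_supervised_loss(target, warped_sources, depth_t, depth_s_warped,
                           sparse_depth, sparse_mask,
                           w_photo=1.0, w_dc=0.1, w_sparse=0.5):
    # Weighted sum of the three terms; the weights here are placeholders.
    return (w_photo * photometric_loss(target, warped_sources)
            + w_dc * depth_consistency_loss(depth_t, depth_s_warped)
            + w_sparse * sparse_depth_loss(depth_t, sparse_depth, sparse_mask))

In training, such a combined objective would be evaluated on each unannotated target frame and its neighboring source frames, so no manual depth labels are required.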
Pages: 1017 - 1029
Page count: 13