A self-supervised learning approach for registration agnostic imaging models with 3D brain CTA

Cited by: 3
Authors
Dong, Yingjun [1 ]
Pachade, Samiksha [1 ]
Liang, Xiaomin [1 ]
Sheth, Sunil A. [2 ]
Giancardo, Luca [1 ,3 ]
Affiliations
[1] Univ Texas Hlth Sci Ctr Houston, McWilliams Sch Biomed Informat, 7000 Fannin St, Houston, TX 77054 USA
[2] Univ Texas Hlth Sci Ctr Houston, McGovern Med Sch, Dept Neurol, 6431 Fannin St, Houston, TX USA
[3] Univ Texas Hlth Sci Ctr Houston, Inst Stroke & Cerebrovasc Dis, 7000 Fannin St, Houston, TX 77054 USA
Keywords
Medical imaging; Machine learning; Neuroanatomy
DOI
10.1016/j.isci.2024.109004
Chinese Library Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy, Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Subject classification codes
07 ; 0710 ; 09 ;
Abstract
Deep learning-based neuroimaging pipelines for acute stroke typically rely on image registration, which not only increases computation but also introduces a point of failure. In this paper, we propose a general-purpose contrastive self-supervised learning method that converts a convolutional deep neural network designed for registered images to work on a different input domain, i.e., with unregistered images. This is accomplished by using a self-supervised strategy that does not rely on labels, where the original model acts as a teacher and a new network as a student. Large vessel occlusion (LVO) detection experiments using computed tomographic angiography (CTA) data from 402 patients show the student model achieving competitive LVO detection performance (area under the receiver operating characteristic curve [AUC] = 0.88 vs. AUC = 0.81) compared to the teacher model, even with unregistered images. A student model trained directly on unregistered images using standard supervised learning achieves only an AUC = 0.63, highlighting the proposed method's efficacy in adapting models to different pipelines and domains.
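The teacher-student alignment the abstract describes can be illustrated with a contrastive (InfoNCE-style) objective: a frozen teacher embeds registered volumes, a student embeds the corresponding unregistered volumes of the same patients, and matching pairs are pulled together against in-batch negatives. The following NumPy sketch shows only the shape of such a loss; it is not the authors' implementation, and all function names and the toy embedding setup are assumptions.

```python
import numpy as np

def l2_normalize(x):
    """Project each row embedding onto the unit sphere."""
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def info_nce_loss(student, teacher, temperature=0.1):
    """Contrastive loss aligning student embeddings of unregistered
    volumes with frozen teacher embeddings of the registered volumes.

    student, teacher: (batch, dim) L2-normalised embeddings, where
    row i of both matrices comes from the same patient.
    """
    # cosine-similarity logits between every student/teacher pair
    logits = student @ teacher.T / temperature
    # positives sit on the diagonal: same patient, different input domain
    idx = np.arange(len(student))
    # row-wise cross-entropy against the diagonal positives
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[idx, idx].mean()

rng = np.random.default_rng(0)
teacher_emb = l2_normalize(rng.normal(size=(8, 32)))

# a student that already matches the teacher incurs a low loss ...
aligned = info_nce_loss(teacher_emb, teacher_emb)
# ... while an untrained (random) student incurs a higher one
random_student = l2_normalize(rng.normal(size=(8, 32)))
misaligned = info_nce_loss(random_student, teacher_emb)
assert aligned < misaligned
```

Minimising this loss over the student's parameters (with the teacher held fixed) is one way to transfer the teacher's representation to the unregistered-image domain without any LVO labels.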
Pages: 11
Related papers
50 records total
  • [1] Exploring Self-Supervised Learning for 3D Point Cloud Registration
    Yuan, Mingzhi
    Huang, Qiao
    Shen, Ao
    Huang, Xiaoshui
    Wang, Manning
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2025, 10 (01): : 25 - 31
  • [3] 3D Self-Supervised Methods for Medical Imaging
    Taleb, Aiham
    Loetzsch, Winfried
    Danz, Noel
    Severin, Julius
    Gaertner, Thomas
    Bergner, Benjamin
    Lippert, Christoph
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS (NEURIPS 2020), 2020, 33
  • [4] Self-Supervised Learning on 3D Point Clouds by Learning Discrete Generative Models
    Eckart, Benjamin
    Yuan, Wentao
    Liu, Chao
    Kautz, Jan
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 8244 - 8253
  • [5] Discretization-Agnostic Deep Self-Supervised 3D Surface Parameterization
    Pokhariya, Chandradeep
    Naik, Shanthika
    Srivastava, Astitva
    Sharma, Avinash
    SIGGRAPH ASIA 2022 TECHNICAL COMMUNICATIONS PROCEEDINGS, SIGGRAPH 2022, 2022,
  • [6] Self-supervised learning for accelerated 3D high-resolution ultrasound imaging
    Dai, Xianjin
    Lei, Yang
    Wang, Tonghe
    Axente, Marian
    Xu, Dong
    Patel, Pretesh
    Jani, Ashesh B.
    Curran, Walter J.
    Liu, Tian
    Yang, Xiaofeng
    MEDICAL PHYSICS, 2021, 48 (07) : 3916 - 3926
  • [7] Self-Supervised Learning for Non-Rigid Registration Between Near-Isometric 3D Surfaces in Medical Imaging
    Tang, Jisi
    Zhou, Qing
    Huang, Yuan
    IEEE TRANSACTIONS ON MEDICAL IMAGING, 2023, 42 (02) : 519 - 532
  • [8] 3D Human Pose Machines with Self-Supervised Learning
    Wang, Keze
    Lin, Liang
    Jiang, Chenhan
    Qian, Chen
    Wei, Pengxu
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2020, 42 (05) : 1069 - 1082
  • [9] Self-Supervised Learning of Detailed 3D Face Reconstruction
    Chen, Yajing
    Wu, Fanzi
    Wang, Zeyu
    Song, Yibing
    Ling, Yonggen
    Bao, Linchao
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2020, 29 : 8696 - 8705
  • [10] Visual Reinforcement Learning With Self-Supervised 3D Representations
    Ze, Yanjie
    Hansen, Nicklas
    Chen, Yinbo
    Jain, Mohit
    Wang, Xiaolong
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2023, 8 (05) : 2890 - 2897