Interpretable Multi-Modal Image Registration Network Based on Disentangled Convolutional Sparse Coding

Cited by: 72
Authors
Deng, Xin [1 ]
Liu, Enpeng [2 ]
Li, Shengxi [2 ]
Duan, Yiping [3 ]
Xu, Mai [2 ]
Affiliations
[1] Beihang Univ, Sch Cyber Sci & Technol, Beijing 100191, Peoples R China
[2] Beihang Univ, Sch Elect & Informat Engn, Beijing 100191, Peoples R China
[3] Tsinghua Univ, Dept Elect Engn, Beijing 100084, Peoples R China
Funding
Beijing Natural Science Foundation; National Natural Science Foundation of China;
Keywords
Image registration; Task analysis; Measurement; Convolutional codes; Strain; Image coding; Generative adversarial networks; Multi-modal image registration; convolutional sparse coding; interpretable network; LEARNING FRAMEWORK;
DOI
10.1109/TIP.2023.3240024
CLC number
TP18 [Artificial intelligence theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Multi-modal image registration aims to spatially align two images from different modalities so that their feature points correspond. Because they are captured by different sensors, images from different modalities often contain many distinct features, which makes it challenging to find accurate correspondences between them. With the success of deep learning, many deep networks have been proposed to align multi-modal images; however, most of them lack interpretability. In this paper, we first model the multi-modal image registration problem as a disentangled convolutional sparse coding (DCSC) model. In this model, the multi-modal features that are responsible for alignment (RA features) are well separated from the features that are not responsible for alignment (nRA features). By allowing only the RA features to participate in the deformation field prediction, we eliminate the interference of the nRA features and thus improve registration accuracy and efficiency. The optimization process of the DCSC model that separates the RA and nRA features is then unrolled into a deep network, namely the Interpretable Multi-modal Image Registration Network (InMIR-Net). To ensure accurate separation of the RA and nRA features, we further design an accompanying guidance network (AG-Net) to supervise the extraction of RA features in InMIR-Net. The advantage of InMIR-Net is that it provides a universal framework to tackle both rigid and non-rigid multi-modal image registration tasks. Extensive experimental results verify the effectiveness of our method on both rigid and non-rigid registration across various multi-modal image datasets, including RGB/depth images, RGB/near-infrared (NIR) images, RGB/multi-spectral images, T1/T2-weighted magnetic resonance (MR) images, and computed tomography (CT)/MR images. The code is available at https://github.com/lep990816/Interpretable-Multi-modal-Image-Registration.
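The convolutional-sparse-coding idea behind the abstract can be illustrated with a toy unrolled ISTA sketch. This is not the paper's InMIR-Net: the filter banks, step size, sparsity weight, and the `dcsc_features` helper are illustrative assumptions; it only shows how per-filter codes can be split into an RA group and an nRA group, with the RA codes reserved for downstream deformation prediction.

```python
import numpy as np
from scipy.signal import convolve2d

def soft_threshold(x, lam):
    # Proximal operator of the l1 penalty used in sparse coding.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def dcsc_features(image, filters_ra, filters_nra, lam=0.1, step=0.05, iters=10):
    """Toy ISTA-style convolutional sparse coding with two disentangled
    feature groups: RA (alignment-relevant) and nRA (everything else)."""
    # Normalize filters so the fixed step size stays stable.
    filters = [f / (np.linalg.norm(f) + 1e-8) for f in filters_ra + filters_nra]
    codes = [np.zeros_like(image) for _ in filters]
    for _ in range(iters):
        # Reconstruction from all feature maps, then the residual.
        recon = sum(convolve2d(z, f, mode="same") for z, f in zip(codes, filters))
        resid = image - recon
        # Gradient step per filter (flipped kernel = adjoint of convolution),
        # followed by soft-thresholding: one ISTA iteration.
        codes = [soft_threshold(z + step * convolve2d(resid, f[::-1, ::-1], mode="same"),
                                lam * step)
                 for z, f in zip(codes, filters)]
    n_ra = len(filters_ra)
    # Only the RA codes would feed a deformation-field predictor.
    return codes[:n_ra], codes[n_ra:]
```

In the paper's network, the iterations above become learnable layers and a guidance network supervises which codes end up in the RA group; here the split is simply fixed by the filter partition.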
Pages: 1078-1091
Page count: 14
Cited References
67 in total
[1] Anonymous, 2003, RETROSPECTIVE IMAGE
[2] Arar M., 2020, CVPR, P13410
[3] Ashburner J, Friston KJ. Voxel-based morphometry - the methods. NeuroImage, 2000, 11(6): 805-821.
[4] Avants BB, Epstein CL, Grossman M, Gee JC. Symmetric diffeomorphic image registration with cross-correlation: evaluating automated labeling of elderly and neurodegenerative brain. Medical Image Analysis, 2008, 12(1): 26-41.
[5] Avants BB, Tustison NJ, Song G, Cook PA, Klein A, Gee JC. A reproducible evaluation of ANTs similarity metric performance in brain image registration. NeuroImage, 2011, 54(3): 2033-2044.
[6] Balakrishnan G, Zhao A, Sabuncu MR, Guttag J, Dalca AV. VoxelMorph: a learning framework for deformable medical image registration. IEEE Transactions on Medical Imaging, 2019, 38(8): 1788-1800.
[7] Barath D, Matas J, Noskova J. MAGSAC: marginalizing sample consensus. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019: 10189-10197.
[8] Bay H, Ess A, Tuytelaars T, Van Gool L. Speeded-Up Robust Features (SURF). Computer Vision and Image Understanding, 2008, 110(3): 346-359.
[9] Bodensteiner C, Huebner W, Juengling K, Mueller J, Arens M. Local multi-modal image matching based on self-similarity. In: IEEE International Conference on Image Processing (ICIP), 2010: 937-940.
[10] Brown M, 2011, Proc. CVPR IEEE, P177, DOI 10.1109/CVPR.2011.5995637