DIFLF: A domain-invariant features learning framework for single-source domain generalization in mammogram classification

Cited by: 0
Authors
Xie, Wanfang [1 ,2 ]
Liu, Zhenyu [4 ,5 ]
Zhao, Litao [1 ,2 ]
Wang, Meiyun [6 ,7 ]
Tian, Jie [1 ,2 ]
Liu, Jiangang [1 ,2 ,3 ]
Affiliations
[1] Beihang Univ, Sch Engn Med, Beijing 100191, Peoples R China
[2] Beihang Univ, Key Lab Big Data Based Precis Med, Minist Ind & Informat Technol Peoples Republ China, Beijing 100191, Peoples R China
[3] Beijing Engn Res Ctr Cardiovasc Wisdom Diag & Trea, Beijing 100029, Peoples R China
[4] Inst Automat, CAS Key Lab Mol Imaging, Beijing 100190, Peoples R China
[5] Univ Chinese Acad Sci, Beijing 100080, Peoples R China
[6] Zhengzhou Univ, Henan Prov Peoples Hosp, Dept Med Imaging, Zhengzhou 450003, Peoples R China
[7] Zhengzhou Univ, Peoples Hosp, Zhengzhou 450003, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Domain generalization; Deep learning; Breast cancer; Mammogram; Style-augmentation module; Content-style disentanglement module; BREAST-CANCER;
DOI
10.1016/j.cmpb.2025.108592
CLC Classification
TP39 [Computer Applications];
Subject Classification Codes
081203 ; 0835 ;
Abstract
Background and Objective: Single-source domain generalization (SSDG) aims to generalize a deep learning (DL) model trained on a single source dataset to multiple unseen datasets. This matters for the clinical application of DL-based models to breast cancer screening, where a model is commonly developed at one institution and then deployed at others. A central challenge of SSDG is alleviating domain shifts using only one source dataset. Methods: The present study proposes a domain-invariant features learning framework (DIFLF) for single-source domain generalization. Specifically, DIFLF comprises a style-augmentation module (SAM) and a content-style disentanglement module (CSDM). SAM applies two different color-jitter transforms, converting each mammogram in the source domain into two synthesized mammograms with new styles. This greatly increases the feature diversity of the source domain and reduces overfitting of the trained model. CSDM contains three feature disentanglement units, which extract domain-invariant content (DIC) features by disentangling them from domain-specific style (DSS) features, thereby reducing the influence of domain shifts caused by differing feature distributions. Our code is openly available on GitHub (https://github.com/85675/DIFLF). Results: DIFLF was trained on a private dataset (PRI1) and tested first on another private dataset (PRI2), whose feature distribution is similar to PRI1's, and then on two public datasets (INbreast and MIAS), whose feature distributions differ greatly from PRI1's. The experimental results show that DIFLF performs excellently in classifying mammograms from the unseen target datasets PRI2, INbreast, and MIAS, achieving an accuracy and AUC of 0.917 and 0.928 on PRI2, 0.882 and 0.893 on INbreast, and 0.767 and 0.710 on MIAS, respectively. Conclusions: DIFLF can alleviate the influence of domain shifts using only one source dataset. Moreover, it achieves excellent mammogram classification performance even on unseen datasets whose feature distributions differ greatly from the training dataset.
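The two ideas in the abstract can be sketched in a few lines. The snippet below is an illustrative assumption, not the paper's implementation: a simple brightness/contrast jitter stands in for SAM's two color-jitter transforms, and an AdaIN-style per-image mean/std split stands in for CSDM's content-style disentanglement. All function names and parameter ranges are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def color_jitter(img, brightness, contrast):
    # Affine intensity jitter on a grayscale mammogram in [0, 1].
    # A hypothetical stand-in for the paper's color-jitter transforms.
    return np.clip(img * contrast + brightness, 0.0, 1.0)

def style_augment(img):
    # SAM sketch: two differently parameterized jitters turn one source
    # image into two synthesized views with new "styles".
    view_a = color_jitter(img, rng.uniform(-0.1, 0.1), rng.uniform(0.9, 1.1))
    view_b = color_jitter(img, rng.uniform(-0.1, 0.1), rng.uniform(0.9, 1.1))
    return view_a, view_b

def split_content_style(feat):
    # CSDM intuition: per-channel spatial mean/std act as domain-specific
    # style statistics; the normalized residual is treated as
    # domain-invariant content (an AdaIN-style decomposition, used here
    # only as an illustration).
    mu = feat.mean(axis=(-2, -1), keepdims=True)
    sigma = feat.std(axis=(-2, -1), keepdims=True) + 1e-6
    content = (feat - mu) / sigma
    return content, (mu, sigma)

# Values kept inside [0.2, 0.7] so the jitter never saturates at clipping.
img = rng.uniform(0.2, 0.7, size=(64, 64))
v1, v2 = style_augment(img)
c1, _ = split_content_style(v1[None])
c2, _ = split_content_style(v2[None])
# After style normalization the two views share (nearly) identical content,
# because the jitter was a purely affine intensity change.
print(np.allclose(c1, c2, atol=1e-4))
```

The design point the sketch illustrates: an affine style change alters only the mean/std statistics, so a model that learns from the normalized residual sees the same content across both synthesized styles.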
Pages: 10