DIFLF: A domain-invariant features learning framework for single-source domain generalization in mammogram classification

Cited: 0
Authors
Xie, Wanfang [1 ,2 ]
Liu, Zhenyu [4 ,5 ]
Zhao, Litao [1 ,2 ]
Wang, Meiyun [6 ,7 ]
Tian, Jie [1 ,2 ]
Liu, Jiangang [1 ,2 ,3 ]
Affiliations
[1] Beihang Univ, Sch Engn Med, Beijing 100191, Peoples R China
[2] Beihang Univ, Key Lab Big Data Based Precis Med, Minist Ind & Informat Technol Peoples Republ China, Beijing 100191, Peoples R China
[3] Beijing Engn Res Ctr Cardiovasc Wisdom Diag & Trea, Beijing 100029, Peoples R China
[4] Inst Automat, CAS Key Lab Mol Imaging, Beijing 100190, Peoples R China
[5] Univ Chinese Acad Sci, Beijing 100080, Peoples R China
[6] Zhengzhou Univ, Henan Prov Peoples Hosp, Dept Med Imaging, Zhengzhou 450003, Peoples R China
[7] Zhengzhou Univ, Peoples Hosp, Zhengzhou 450003, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Domain generalization; Deep learning; Breast cancer; Mammogram; Style-augmentation module; Content-style disentanglement module; BREAST-CANCER;
DOI
10.1016/j.cmpb.2025.108592
Chinese Library Classification (CLC) number
TP39 [Computer Applications];
Discipline classification codes
081203; 0835;
Abstract
Background and Objective: Single-source domain generalization (SSDG) aims to generalize a deep learning (DL) model trained on one source dataset to multiple unseen datasets. This is important for the clinical application of DL-based models to breast cancer screening, where a model is commonly developed at one institute and then deployed at others. One challenge of SSDG is to alleviate domain shifts using only a single source dataset.
Methods: The present study proposes a domain-invariant features learning framework (DIFLF) for single-source domain generalization. Specifically, DIFLF comprises a style-augmentation module (SAM) and a content-style disentanglement module (CSDM). SAM includes two different color jitter transforms, which transform each mammogram in the source domain into two synthesized mammograms with new styles. This greatly increases the feature diversity of the source domain and reduces overfitting of the trained model. CSDM includes three feature disentanglement units, which extract domain-invariant content (DIC) features by disentangling them from domain-specific style (DSS) features, reducing the influence of domain shifts caused by differing feature distributions. Our code is available for open access on GitHub (https://github.com/85675/DIFLF).
Results: DIFLF is trained on a private dataset (PRI1) and tested first on another private dataset (PRI2), whose feature distribution is similar to that of PRI1, and then on two public datasets (INbreast and MIAS), whose feature distributions differ greatly from PRI1. The experimental results show that DIFLF classifies mammograms in the unseen target datasets PRI2, INbreast, and MIAS with excellent performance: its accuracy and AUC are 0.917 and 0.928 on PRI2, 0.882 and 0.893 on INbreast, and 0.767 and 0.710 on MIAS, respectively.
Conclusions: DIFLF can alleviate the influence of domain shifts using only one source dataset. Moreover, it achieves excellent mammogram classification performance even on unseen datasets whose feature distributions differ greatly from the training dataset.
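The abstract characterizes SAM as a pair of differently parameterized color-jitter transforms and CSDM as units that separate domain-invariant content from domain-specific style. Below is a minimal, illustrative sketch of those two ideas in Python (PyTorch/torchvision). It is not the authors' released implementation (see the GitHub link above); the jitter strengths and the use of per-channel instance statistics as a stand-in for "style" are assumptions made only for illustration.

    import torch
    from torchvision import transforms

    # Style-augmentation idea: two differently parameterized jitter transforms
    # map one source mammogram into two synthesized views with new "styles".
    # The strengths below are assumed, not taken from the paper.
    jitter_a = transforms.ColorJitter(brightness=0.3, contrast=0.3)
    jitter_b = transforms.ColorJitter(brightness=0.7, contrast=0.7)
    to_tensor = transforms.ToTensor()

    def style_augment(mammogram):
        """Return two new-style views of one source-domain mammogram (PIL image)."""
        return to_tensor(jitter_a(mammogram)), to_tensor(jitter_b(mammogram))

    # Content-style separation idea: per-channel instance statistics serve as
    # domain-specific style (DSS) and the normalized map as domain-invariant
    # content (DIC). The paper's three disentanglement units are not detailed
    # in the abstract; this is only one common realization of the idea.
    def split_content_style(feat, eps=1e-5):
        """feat: (N, C, H, W) feature map -> (content, (mean, std))."""
        mean = feat.mean(dim=(2, 3), keepdim=True)
        std = feat.var(dim=(2, 3), keepdim=True).add(eps).sqrt()
        content = (feat - mean) / std
        return content, (mean, std)

In such a setup, the two stylized views of the same mammogram would share their content representation while differing in their style statistics, which is consistent with the abstract's motivation for combining SAM with CSDM.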
Pages: 10
Related papers
50 records in total
  • [41] SADGFeat: Learning local features with layer spatial attention and domain generalization
    Bai, Wenjing
    Zhang, Yunzhou
    Wang, Li
    Liu, Wei
    Hu, Jun
    Huang, Guan
    IMAGE AND VISION COMPUTING, 2024, 146
  • [42] Collaborative learning with normalization augmentation for domain generalization in time series classification
    He, Qi-Qiao
    Gong, Xueyuan
    Si, Yain-Whar
    JOURNAL OF SUPERCOMPUTING, 2025, 81 (01)
  • [43] Joint-product representation learning for domain generalization in classification and regression
    Chen, Sentao
    Chen, Liang
    NEURAL COMPUTING AND APPLICATIONS, 2023, 35: 16509 - 16526
  • [44] Causality-Based Contrastive Incremental Learning Framework for Domain Generalization
    Wang, Xin
    Zhao, Qingjie
    Wang, Lei
    Liu, Wangwang
    TSINGHUA SCIENCE AND TECHNOLOGY, 2025, 30 (04): 1636 - 1647
  • [45] ADVERSARIAL LEARNING OF RAW SPEECH FEATURES FOR DOMAIN INVARIANT SPEECH RECOGNITION
    Tripathi, Aditay
    Mohan, Aanchan
    Anand, Saket
    Singh, Maneesh
    2018 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2018: 5959 - 5963
  • [46] MULTI-SOURCE DOMAIN GENERALIZATION FOR ECG-BASED COGNITIVE LOAD ESTIMATION: ADVERSARIAL INVARIANT AND PLAUSIBLE UNCERTAINTY LEARNING
    Wang, Jiyao
    Wang, Ange
    Hu, Haolong
    Wu, Kaishun
    He, Dengbo
    2024 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING, ICASSP 2024, 2024: 1631 - 1635
  • [47] Invariant semantic domain generalization shuffle network for cross-scene hyperspectral image classification
    Gao, Jingpeng
    Ji, Xiangyu
    Ye, Fang
    Chen, Geng
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 273
  • [48] Source domain prior-assisted segment anything model for single domain generalization in medical image segmentation
    Dong, Wenhui
    Du, Bo
    Xu, Yongchao
    IMAGE AND VISION COMPUTING, 2024, 150
  • [49] Enhanced dynamic feature representation learning framework by Fourier transform for domain generalization
    Wang, Xin
    Zhao, Qingjie
    Zhang, Changchun
    Wang, Binglu
    Wang, Lei
    Liu, Wangwang
    INFORMATION SCIENCES, 2023, 649
  • [50] Progressive Sub-Domain Information Mining for Single-Source Generalizable Gait Recognition
    Wang, Yang
    Huang, Yan
    Shan, Caifeng
    Wang, Liang
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2023, 18: 4787 - 4799