DIFLF: A domain-invariant features learning framework for single-source domain generalization in mammogram classification

Times Cited: 0
Authors
Xie, Wanfang [1 ,2 ]
Liu, Zhenyu [4 ,5 ]
Zhao, Litao [1 ,2 ]
Wang, Meiyun [6 ,7 ]
Tian, Jie [1 ,2 ]
Liu, Jiangang [1 ,2 ,3 ]
Affiliations
[1] Beihang Univ, Sch Engn Med, Beijing 100191, Peoples R China
[2] Beihang Univ, Key Lab Big Data Based Precis Med, Minist Ind & Informat Technol Peoples Republ China, Beijing 100191, Peoples R China
[3] Beijing Engn Res Ctr Cardiovasc Wisdom Diag & Trea, Beijing 100029, Peoples R China
[4] Inst Automat, CAS Key Lab Mol Imaging, Beijing 100190, Peoples R China
[5] Univ Chinese Acad Sci, Beijing 100080, Peoples R China
[6] Zhengzhou Univ, Henan Prov Peoples Hosp, Dept Med Imaging, Zhengzhou 450003, Peoples R China
[7] Zhengzhou Univ, Peoples Hosp, Zhengzhou 450003, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Domain generalization; Deep learning; Breast cancer; Mammogram; Style-augmentation module; Content-style disentanglement module; BREAST-CANCER;
DOI
10.1016/j.cmpb.2025.108592
CLC Number
TP39 [Computer Applications];
Subject Classification Code
081203; 0835;
Abstract
Background and Objective: Single-source domain generalization (SSDG) aims to generalize a deep learning (DL) model trained on a single source dataset to multiple unseen datasets. This is important for the clinical application of DL-based models to breast cancer screening, where a model is commonly developed at one institute and then deployed at others. One challenge of SSDG is to alleviate domain shifts using only one source dataset. Methods: The present study proposes a domain-invariant features learning framework (DIFLF) for single-source domain generalization. Specifically, DIFLF comprises a style-augmentation module (SAM) and a content-style disentanglement module (CSDM). SAM includes two different color-jitter transforms, which transform each mammogram in the source domain into two synthesized mammograms with new styles. This greatly increases the feature diversity of the source domain, reducing overfitting of the trained model. CSDM includes three feature disentanglement units, which extract domain-invariant content (DIC) features by disentangling them from domain-specific style (DSS) features, reducing the influence of domain shifts caused by differing feature distributions. Our code is openly available on GitHub (https://github.com/85675/DIFLF). Results: DIFLF was trained on a private dataset (PRI1) and tested first on another private dataset (PRI2), whose feature distribution is similar to that of PRI1, and then on two public datasets (INbreast and MIAS), whose feature distributions differ greatly from PRI1. The experimental results show that DIFLF classifies mammograms accurately in the unseen target datasets PRI2, INbreast, and MIAS: its accuracy and AUC are 0.917 and 0.928 on PRI2, 0.882 and 0.893 on INbreast, and 0.767 and 0.710 on MIAS, respectively. Conclusions: DIFLF can alleviate the influence of domain shifts using only one source dataset.
Moreover, DIFLF achieves excellent mammogram classification performance even on unseen datasets whose feature distributions differ greatly from those of the training dataset.
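The style-augmentation idea described in the abstract (two distinct color-jitter transforms applied to each source mammogram, yielding two new-style copies) can be sketched as follows. This is a hypothetical NumPy re-implementation for illustration only, not the authors' released code; the function name `style_augment` and the brightness/contrast jitter ranges are assumptions.

```python
import numpy as np

def style_augment(img, rng=None):
    """Sketch of a SAM-like augmentation: apply two independently
    sampled brightness/contrast ("color jitter") transforms to one
    grayscale mammogram in [0, 1], producing two stylized views."""
    rng = rng if rng is not None else np.random.default_rng()
    views = []
    for _ in range(2):
        brightness = rng.uniform(0.8, 1.2)  # assumed jitter range
        contrast = rng.uniform(0.8, 1.2)    # assumed jitter range
        mean = img.mean()
        # Scale deviation from the mean (contrast), shift the mean (brightness)
        out = (img - mean) * contrast + mean * brightness
        views.append(np.clip(out, 0.0, 1.0))
    return views

# Stand-in 64x64 mammogram with intensities in [0, 1]
img = np.random.default_rng(0).random((64, 64))
v1, v2 = style_augment(img, np.random.default_rng(1))
```

In the framework described above, both stylized views would be fed to the network alongside the original image so that the content-style disentanglement units see the same anatomical content under different styles.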
Pages: 10