DIFLF: A domain-invariant features learning framework for single-source domain generalization in mammogram classification

Times Cited: 0
Authors
Xie, Wanfang [1 ,2 ]
Liu, Zhenyu [4 ,5 ]
Zhao, Litao [1 ,2 ]
Wang, Meiyun [6 ,7 ]
Tian, Jie [1 ,2 ]
Liu, Jiangang [1 ,2 ,3 ]
Affiliations
[1] Beihang Univ, Sch Engn Med, Beijing 100191, Peoples R China
[2] Beihang Univ, Key Lab Big Data Based Precis Med, Minist Ind & Informat Technol Peoples Republ China, Beijing 100191, Peoples R China
[3] Beijing Engn Res Ctr Cardiovasc Wisdom Diag & Trea, Beijing 100029, Peoples R China
[4] Inst Automat, CAS Key Lab Mol Imaging, Beijing 100190, Peoples R China
[5] Univ Chinese Acad Sci, Beijing 100080, Peoples R China
[6] Zhengzhou Univ, Henan Prov Peoples Hosp, Dept Med Imaging, Zhengzhou 450003, Peoples R China
[7] Zhengzhou Univ, Peoples Hosp, Zhengzhou 450003, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Domain generalization; Deep learning; Breast cancer; Mammogram; Style-augmentation module; Content-style disentanglement module;
DOI
10.1016/j.cmpb.2025.108592
CLC Number
TP39 [Applications of Computers];
Discipline Code
081203; 0835;
Abstract
Background and Objective: Single-source domain generalization (SSDG) aims to generalize a deep learning (DL) model trained on a single source dataset to multiple unseen datasets. This is important for the clinical application of DL-based models to breast cancer screening, where a model is commonly developed at one institute and then deployed at others. A key challenge of SSDG is to alleviate domain shifts using only one source dataset.

Methods: The present study proposes a domain-invariant features learning framework (DIFLF) for single-source domain generalization. Specifically, DIFLF comprises a style-augmentation module (SAM) and a content-style disentanglement module (CSDM). SAM includes two different color-jitter transforms, which transform each mammogram in the source domain into two synthesized mammograms with new styles, greatly increasing the feature diversity of the source domain and reducing overfitting of the trained model. CSDM includes three feature disentanglement units, which extract domain-invariant content (DIC) features by disentangling them from domain-specific style (DSS) features, reducing the influence of domain shifts caused by differing feature distributions. Our code is available for open access on GitHub (https://github.com/85675/DIFLF).

Results: DIFLF is trained on a private dataset (PRI1) and tested first on another private dataset (PRI2), whose feature distribution is similar to that of PRI1, and then on two public datasets (INbreast and MIAS), whose feature distributions differ greatly from PRI1. The experimental results show that DIFLF classifies mammograms in the unseen target datasets PRI2, INbreast, and MIAS with excellent performance: its accuracy and AUC are 0.917 and 0.928 on PRI2, 0.882 and 0.893 on INbreast, and 0.767 and 0.710 on MIAS, respectively.

Conclusions: DIFLF can alleviate the influence of domain shifts using only one source dataset. Moreover, it achieves excellent mammogram classification performance even on unseen datasets whose feature distributions differ greatly from that of the training dataset.
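Illustrative note: the SAM mechanism described above rests on applying two differently parameterized color-jitter transforms to each source mammogram. The minimal Python/torchvision sketch below is an assumption-based illustration of that idea, not the authors' implementation (their code is in the GitHub repository linked above); the jitter ranges, the style_augment helper, and the image shape are hypothetical.

# Minimal, illustrative sketch of the style-augmentation idea from the abstract:
# two differently parameterized color-jitter transforms turn one source
# mammogram into two synthesized views with new "styles". Jitter ranges,
# names, and image shape are assumptions, not the DIFLF settings.
from typing import Tuple

import torch
from torchvision import transforms

# Two distinct jitter configurations (hypothetical ranges).
jitter_a = transforms.ColorJitter(brightness=0.4, contrast=0.4)
jitter_b = transforms.ColorJitter(brightness=0.8, contrast=0.8)


def style_augment(mammogram: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
    """Return two style-augmented copies of a (3, H, W) image tensor in [0, 1]."""
    return jitter_a(mammogram), jitter_b(mammogram)


if __name__ == "__main__":
    # Dummy mammogram replicated to 3 channels, as is common when feeding
    # grayscale images to ImageNet-pretrained backbones.
    x = torch.rand(1, 224, 224).repeat(3, 1, 1)
    view_1, view_2 = style_augment(x)
    print(view_1.shape, view_2.shape)  # torch.Size([3, 224, 224]) twice

In a full SSDG training pipeline, the two synthesized views would typically be fed to the classifier alongside the original image, which is what increases the style diversity seen during training.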
Pages: 10
Related Papers
50 records
  • [21] ContrastSense: Domain-invariant Contrastive Learning for In-the-Wild Wearable Sensing
    Dai, Gaole
    Xu, Huatao
    Yoon, Hyungun
    Li, Mo
    Tan, Rui
    Lee, Sung-Ju
    PROCEEDINGS OF THE ACM ON INTERACTIVE MOBILE WEARABLE AND UBIQUITOUS TECHNOLOGIES-IMWUT, 2024, 8 (04):
  • [22] Learning Domain-Invariant Model for WiFi-Based Indoor Localization
    Wang, Guanzhong
    Zhang, Dongheng
    Zhang, Tianyu
    Yang, Shuai
    Sun, Qibin
    Chen, Yan
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2024, 23 (12) : 13898 - 13913
  • [23] Meta-learning the invariant representation for domain generalization
    Jia, Chen
    Zhang, Yue
    MACHINE LEARNING, 2024, 113 (04) : 1661 - 1681
  • [25] Single-Source Domain Expansion Network for Cross-Scene Hyperspectral Image Classification
    Zhang, Yuxiang
    Li, Wei
    Sun, Weidong
    Tao, Ran
    Du, Qian
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2023, 32 : 1498 - 1512
  • [26] On learning deep domain-invariant features from 2D synthetic images for industrial visual inspection
    Abubakr, Abdelrahman G.
    Jovancevic, Igor
    Mokhtari, Nour Islam
    Ben Abdallah, Hamdi
    Orteu, Jean-Jose
    FIFTEENTH INTERNATIONAL CONFERENCE ON QUALITY CONTROL BY ARTIFICIAL VISION, 2021, 11794
  • [27] Attribute-Aligned Domain-Invariant Feature Learning for Unsupervised Domain Adaptation Person Re-Identification
    Li, Huafeng
    Chen, Yiwen
    Tao, Dapeng
    Yu, Zhengtao
    Qi, Guanqiu
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2021, 16 : 1480 - 1494
  • [28] Adversarial Invariant Feature Learning with Accuracy Constraint for Domain Generalization
    Akuzawa, Kei
    Iwasawa, Yusuke
    Matsuo, Yutaka
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2019, PT II, 2020, 11907 : 315 - 331
  • [29] Contrastive domain-invariant generalization for remaining useful life prediction under diverse conditions and fault modes
    Xiao, Xiaoqi
    Zhang, Jianguo
    Xu, Dan
    RELIABILITY ENGINEERING & SYSTEM SAFETY, 2025, 253
  • [30] Research on the improvement of domain generalization by the fusion of invariant features and sharpness-aware minimization
    Yang, Yixuan
    Dong, Mingrong
    Zeng, Kai
    Shen, Tao
    JOURNAL OF SUPERCOMPUTING, 2025, 81 (01)