DIFLF: A domain-invariant features learning framework for single-source domain generalization in mammogram classification

Cited: 0
Authors
Xie, Wanfang [1 ,2 ]
Liu, Zhenyu [4 ,5 ]
Zhao, Litao [1 ,2 ]
Wang, Meiyun [6 ,7 ]
Tian, Jie [1 ,2 ]
Liu, Jiangang [1 ,2 ,3 ]
Affiliations
[1] Beihang Univ, Sch Engn Med, Beijing 100191, Peoples R China
[2] Beihang Univ, Key Lab Big Data Based Precis Med, Minist Ind & Informat Technol Peoples Republ China, Beijing 100191, Peoples R China
[3] Beijing Engn Res Ctr Cardiovasc Wisdom Diag & Trea, Beijing 100029, Peoples R China
[4] Inst Automat, CAS Key Lab Mol Imaging, Beijing 100190, Peoples R China
[5] Univ Chinese Acad Sci, Beijing 100080, Peoples R China
[6] Zhengzhou Univ, Henan Prov Peoples Hosp, Dept Med Imaging, Zhengzhou 450003, Peoples R China
[7] Zhengzhou Univ, Peoples Hosp, Zhengzhou 450003, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Domain generalization; Deep learning; Breast cancer; Mammogram; Style-augmentation module; Content-style disentanglement module; BREAST-CANCER;
DOI
10.1016/j.cmpb.2025.108592
Chinese Library Classification: TP39 [Computer Applications];
Discipline Code: 081203; 0835;
Abstract
Background and Objective: Single-source domain generalization (SSDG) aims to generalize a deep learning (DL) model trained on one source dataset to multiple unseen datasets. This is important for the clinical application of DL-based models to breast cancer screening, where a model is commonly developed at one institution and then deployed at others. One challenge of SSDG is to alleviate domain shifts using only a single source dataset. Methods: The present study proposes a domain-invariant features learning framework (DIFLF) for single-source domain generalization. Specifically, DIFLF comprises a style-augmentation module (SAM) and a content-style disentanglement module (CSDM). SAM applies two different color jitter transforms, which convert each mammogram in the source domain into two synthesized mammograms with new styles. This greatly increases the feature diversity of the source domain and reduces overfitting of the trained model. CSDM comprises three feature disentanglement units, which extract domain-invariant content (DIC) features by disentangling them from domain-specific style (DSS) features, reducing the influence of domain shifts caused by differing feature distributions. Our code is openly available on GitHub (https://github.com/85675/DIFLF). Results: DIFLF was trained on a private dataset (PRI1) and tested first on another private dataset (PRI2), whose feature distribution is similar to PRI1's, and then on two public datasets (INbreast and MIAS), whose feature distributions differ greatly from PRI1's. The experimental results show that DIFLF classifies mammograms well on the unseen target datasets PRI2, INbreast, and MIAS: accuracy and AUC are 0.917 and 0.928 on PRI2, 0.882 and 0.893 on INbreast, and 0.767 and 0.710 on MIAS, respectively. Conclusions: DIFLF can alleviate the influence of domain shifts using only one source dataset. Moreover, it achieves excellent mammogram classification performance even on unseen datasets whose feature distributions differ greatly from the training dataset's.
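The style-augmentation idea described in the abstract (each source mammogram is transformed into two synthesized views with new "styles") can be sketched as follows. This is a minimal illustration using an assumed brightness/contrast jitter on grayscale images, not the authors' implementation; the actual SAM is available in the linked GitHub repository.

```python
import numpy as np

rng = np.random.default_rng(0)

def color_jitter(img, brightness=0.4, contrast=0.4):
    """Randomly perturb brightness and contrast of a grayscale image in [0, 1]."""
    b = 1.0 + rng.uniform(-brightness, brightness)   # random brightness factor
    c = 1.0 + rng.uniform(-contrast, contrast)       # random contrast factor
    out = (img - img.mean()) * c + img.mean()        # rescale contrast around the mean
    return np.clip(out * b, 0.0, 1.0)

def style_augment(mammogram):
    """Return two synthesized views of one source image, each with a new style."""
    return color_jitter(mammogram), color_jitter(mammogram)

# Usage: one source image yields two style-shifted views per training step,
# increasing style diversity while the anatomical content stays fixed.
img = rng.uniform(size=(64, 64)).astype(np.float32)
view1, view2 = style_augment(img)
```

Training on such pairs lets a disentanglement objective treat whatever differs between the two views as style and whatever they share as domain-invariant content.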
Pages: 10
Related Papers (50 in total)
  • [21] Learning emotion-discriminative and domain-invariant features for domain adaptation in speech emotion recognition
    Mao, Qirong
    Xu, Guopeng
    Xue, Wentao
    Gou, Jianping
    Zhan, Yongzhao
    SPEECH COMMUNICATION, 2017, 93 : 1 - 10
  • [22] Learning Domain-Invariant and Discriminative Features for Homogeneous Unsupervised Domain Adaptation
    Zhang, Yun
    Wang, Nianbin
    Cai, Shaobin
    CHINESE JOURNAL OF ELECTRONICS, 2020, 29 (06) : 1119 - 1125
  • [23] LEARNING DOMAIN-INVARIANT TRANSFORMATION FOR SPEAKER VERIFICATION
    Zhang, Hanyi
    Wang, Longbiao
    Lee, Kong Aik
    Liu, Meng
    Dang, Jianwu
    Chen, Hui
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 7177 - 7181
  • [24] Gradient-Aware Domain-Invariant Learning for Domain Generalization
    Feng Hou
    Yao Zhang
    Yang Liu
    Jin Yuan
    Cheng Zhong
    Yang Zhang
    Zhongchao Shi
    Jianping Fan
    Zhiqiang He
    Multimedia Systems, 2025, 31 (1)
  • [25] Domain-Invariant Feature Distillation for Cross-Domain Sentiment Classification
    Hu, Mengting
    Wu, Yike
    Zhao, Shiwan
    Guo, Honglei
    Cheng, Renhong
    Su, Zhong
    2019 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING AND THE 9TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (EMNLP-IJCNLP 2019): PROCEEDINGS OF THE CONFERENCE, 2019, : 5559 - 5568
  • [26] Learning Domain-Invariant Representations of Histological Images
    Lafarge, Maxime W.
    Pluim, Josien P. W.
    Eppenhof, Koen A. J.
    Veta, Mitko
    FRONTIERS IN MEDICINE, 2019, 6
  • [27] Domain-Invariant Few-Shot Contrastive Learning for Hyperspectral Image Classification
    Chen, Wenchen
    Zhang, Yanmei
    Chu, Jianping
    Wang, Xingbo
    Applied Sciences (Switzerland), 2024, 14 (23):
  • [28] Domain Generalization for Time-Series Forecasting via Extended Domain-Invariant Representations
    Shi, Yunchuan
    Li, Wei
    Zomaya, Albert Y.
    2024 IEEE ANNUAL CONGRESS ON ARTIFICIAL INTELLIGENCE OF THING, AIOT 2024, 2024, : 110 - 116
  • [29] Adversarial Domain-Invariant Generalization: A Generic Domain-Regressive Framework for Bearing Fault Diagnosis Under Unseen Conditions
    Chen, Liang
    Li, Qi
    Shen, Changqing
    Zhu, Jun
    Wang, Dong
    Xia, Min
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2022, 18 (03) : 1790 - 1800
  • [30] Source-Free Domain-Invariant Performance Prediction
    Khramtsova, Ekaterina
    Baktashmotlagh, Mahsa
    Zuccon, Guido
    Wang, Xi
    Salzmann, Mathieu
    COMPUTER VISION - ECCV 2024, PT LXXX, 2025, 15138 : 99 - 116