Chest radiology report generation based on cross-modal multi-scale feature fusion

Cited by: 4
Authors
Pan, Yu [1]
Liu, Li-Jun [1,2,3]
Yang, Xiao-Bing [1]
Peng, Wei [1]
Huang, Qing-Song [1]
Affiliations
[1] Kunming Univ Sci & Technol, Sch Informat Engn & Automat, Kunming, Peoples R China
[2] Yunnan Key Lab Comp Technol Applicat, Kunming, Peoples R China
[3] Kunming Univ Sci & Technol, Sch Informat Engn & Automat, Wujiaying St, Kunming, Yunnan, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Report generation; Cross-modal; Multi-scale; Medical image; Attention mechanism; Deep learning;
DOI
10.1016/j.jrras.2024.100823
CLC Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy, Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Subject Classification Codes
07; 0710; 09;
Abstract
Chest radiology imaging plays a crucial role in the early screening, diagnosis, and treatment of chest diseases. The accurate interpretation of radiological images and the automatic generation of radiology reports not only save doctors' time but also mitigate the risk of diagnostic errors. The core objective of automatic radiology report generation is to achieve precise mapping between visual features and lesion descriptions at multi-scale and fine-grained levels. Existing methods typically combine global visual features and textual features to generate radiology reports. However, these approaches may overlook key lesion areas and lack sensitivity to crucial lesion location information. Furthermore, achieving multi-scale characterization and fine-grained alignment of medical visual features and report text features proves challenging, reducing the quality of the generated radiology reports. To address these issues, we propose a method for chest radiology report generation based on cross-modal multi-scale feature fusion. First, an auxiliary labeling module is designed to guide the model to focus on the lesion region of the radiological image. Second, a channel attention network is employed to enhance the characterization of location information and disease features. Finally, a cross-modal feature fusion module is constructed by combining memory matrices, facilitating fine-grained alignment between multi-scale visual features and report text features at corresponding scales. The proposed method is experimentally evaluated on two publicly available radiological image datasets. The results demonstrate superior performance on BLEU and ROUGE metrics compared to existing methods. In particular, there are improvements of 4.8% in the ROUGE metric and 9.4% in the METEOR metric on the IU X-Ray dataset, and of 7.4% in BLEU-1 and 7.6% in BLEU-2 on the MIMIC-CXR dataset.
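The abstract's second step applies a channel attention network to re-weight disease- and location-relevant feature channels. The paper's exact architecture is not given in this record; the following is a minimal NumPy sketch of a generic squeeze-and-excitation-style channel attention block, with all names, weights, and dimensions purely illustrative.

```python
import numpy as np

def channel_attention(feature_map, w1, w2):
    """Squeeze-and-excitation-style channel attention (illustrative).

    feature_map: (C, H, W) visual feature map
    w1: (C//r, C) squeeze weights; w2: (C, C//r) excitation weights
    Returns the feature map with each channel scaled by a learned gate in (0, 1).
    """
    # Squeeze: global average pooling over the spatial dimensions -> (C,)
    z = feature_map.mean(axis=(1, 2))
    # Excitation: bottleneck MLP, ReLU then sigmoid gating -> (C,)
    s = np.maximum(w1 @ z, 0.0)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ s)))
    # Scale each channel by its attention weight
    return feature_map * gate[:, None, None]

# Illustrative usage with random weights (reduction ratio r = 4)
rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 4
fmap = rng.standard_normal((C, H, W))
out = channel_attention(fmap,
                        rng.standard_normal((C // r, C)),
                        rng.standard_normal((C, C // r)))
```

Because the sigmoid gate lies strictly in (0, 1), the block can only attenuate channels, letting the network suppress uninformative ones while preserving those carrying lesion-location cues.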
Pages: 9
Related Papers
50 records
  • [1] Depth map reconstruction method based on multi-scale cross-modal feature fusion
    Yang J.
    Xie T.
    Yue H.
    Huazhong Keji Daxue Xuebao (Ziran Kexue Ban)/Journal of Huazhong University of Science and Technology (Natural Science Edition), 2023, 51 (03): : 52 - 59
  • [2] Semantics-preserving hashing based on multi-scale fusion for cross-modal retrieval
    Zhang, Hong
    Pan, Min
    MULTIMEDIA TOOLS AND APPLICATIONS, 2021, 80 (11) : 17299 - 17314
  • [4] Cross-modal Deep Learning-based Clinical Recommendation System for Radiology Report Generation from Chest X-rays
    Shetty S.
    Ananthanarayana V.S.
    Mahale A.
    International Journal of Engineering, Transactions B: Applications, 2023, 36 (08): : 1569 - 1577
  • [6] RGB-D Saliency Detection Based on Attention Mechanism and Multi-Scale Cross-Modal Fusion
    Cui Z.
    Feng Z.
    Wang F.
    Liu Q.
    Jisuanji Fuzhu Sheji Yu Tuxingxue Xuebao/Journal of Computer-Aided Design and Computer Graphics, 2023, 35 (06): : 893 - 902
  • [7] Cross-modal pedestrian re-identification technique based on multi-scale feature attention and strategy balancing
    Lai, Yiqiang
    ENGINEERING RESEARCH EXPRESS, 2025, 7 (01):
  • [8] Knowledge-Guided Cross-Modal Alignment and Progressive Fusion for Chest X-Ray Report Generation
    Huang, Lili
    Cao, Yiming
    Jia, Pengcheng
    Li, Chenglong
    Tang, Jin
    Li, Chuanfu
    IEEE TRANSACTIONS ON MULTIMEDIA, 2025, 27 : 557 - 567
  • [9] Solar Fusion Net: Enhanced Solar Irradiance Forecasting via Automated Multi-Modal Feature Selection and Cross-Modal Fusion
    Jing, Tao
    Chen, Shanlin
    Navarro-Alarcon, David
    Chu, Yinghao
    Li, Mengying
    IEEE TRANSACTIONS ON SUSTAINABLE ENERGY, 2025, 16 (02) : 761 - 773
  • [10] Kinship verification based on multi-scale feature fusion
    Yan C.
    Liu Y.
    Multimedia Tools and Applications, 2024, 83 (40) : 88069 - 88090