Medical visual question answering based on question-type reasoning and semantic space constraint

Cited by: 11
Authors
Wang, Meiling [1 ]
He, Xiaohai [1 ]
Liu, Luping [1 ]
Qing, Linbo [1 ]
Chen, Honggang [1 ]
Liu, Yan [2 ]
Ren, Chao [1 ]
Affiliations
[1] Sichuan Univ, Coll Elect & Informat Engn, Chengdu 610065, Sichuan, Peoples R China
[2] Southwest Jiaotong Univ, Dept Neurol, Affiliated Hosp, Peoples Hosp 3, Chengdu, Sichuan, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Medical visual question answering; Question-type reasoning; Semantic space constraint; Attention mechanism; DYNAMIC MEMORY NETWORKS; LANGUAGE;
DOI
10.1016/j.artmed.2022.102346
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Medical visual question answering (Med-VQA) aims to accurately answer clinical questions about medical images. Despite its enormous potential for application in the medical domain, the current technology is still in its infancy. Compared with the general visual question answering task, the Med-VQA task involves more demanding challenges. First, clinical questions about medical images are usually diverse, owing to differences among clinicians and the complexity of diseases; consequently, noise is inevitably introduced when extracting question features. Second, the Med-VQA task has typically been regarded as a classification problem over predefined answers, ignoring the relationships between candidate answers, so the model pays equal attention to all candidates when predicting an answer. In this paper, a novel Med-VQA framework is proposed to alleviate these problems. Specifically, we apply a question-type reasoning module separately to closed-ended and open-ended questions, extracting the important information contained in the questions through an attention mechanism and filtering out noise to obtain more valuable question features. To exploit the relational information between answers, we design a semantic constraint space that computes the similarity between answers and assigns higher attention to highly correlated answers. To evaluate the effectiveness of the proposed method, extensive experiments were conducted on the public VQA-RAD dataset. Experimental results show that the proposed method achieves better performance than other state-of-the-art methods, with overall, closed-ended, and open-ended accuracies of 74.1%, 82.7%, and 60.9%, respectively. Notably, the absolute accuracy of the proposed method improved by 5.5% for closed-ended questions.
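To make the two mechanisms described in the abstract more concrete, the sketch below gives a minimal PyTorch interpretation of (1) question-type attention that pools question token features while suppressing noisy tokens, and (2) a semantic space constraint that reweights answer scores using similarity between candidate-answer embeddings. All module names, dimensions, and the reweighting scheme are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch only: an assumed reading of the abstract's ideas, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class QuestionTypeAttention(nn.Module):
    """Attend over question token features to filter noisy tokens.

    A separate instance would be used for closed-ended and open-ended questions.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, token_feats: torch.Tensor) -> torch.Tensor:
        # token_feats: (batch, num_tokens, dim)
        attn = F.softmax(self.score(token_feats), dim=1)  # (batch, num_tokens, 1)
        return (attn * token_feats).sum(dim=1)            # (batch, dim)


class SemanticSpaceConstraint(nn.Module):
    """Redistribute answer probability mass toward semantically related candidates."""

    def __init__(self, num_answers: int, dim: int):
        super().__init__()
        self.answer_emb = nn.Embedding(num_answers, dim)

    def forward(self, logits: torch.Tensor) -> torch.Tensor:
        # Cosine similarity between all candidate answers: (num_answers, num_answers)
        emb = F.normalize(self.answer_emb.weight, dim=-1)
        sim = emb @ emb.t()
        # Answers similar to high-probability candidates receive higher attention.
        probs = F.softmax(logits, dim=-1)                 # (batch, num_answers)
        return probs @ F.softmax(sim, dim=-1)             # (batch, num_answers)


if __name__ == "__main__":
    q = torch.randn(2, 12, 256)          # toy question token features
    fused_logits = torch.randn(2, 458)   # toy logits over an assumed answer set
    pooled = QuestionTypeAttention(256)(q)
    constrained = SemanticSpaceConstraint(458, 256)(fused_logits)
    print(pooled.shape, constrained.shape)
```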
Pages: 11