BPI-MVQA: a bi-branch model for medical visual question answering

Cited: 18
Authors
Liu, Shengyan [1 ]
Zhang, Xuejie [2 ]
Zhou, Xiaobing [2 ]
Yang, Jian [2 ]
Affiliations
[1] Kunming Shipborne Equipment Res & Test Ctr, Kunming 650106, Yunnan, Peoples R China
[2] Yunnan Univ, Sch Informat Sci & Engn, 2 North Cuihu Rd, Kunming 650091, Yunnan, Peoples R China
Keywords
VQA-Med; Transformer; Parallel structure model; Image retrieval model; Multi-head attention mechanism; Classification
DOI
10.1186/s12880-022-00800-x
Chinese Library Classification (CLC)
R8 [Special Medicine]; R445 [Diagnostic Imaging];
Discipline codes
1002 ; 100207 ; 1009 ;
Abstract
Background: Visual question answering in the medical domain (VQA-Med) exhibits great potential for enhancing confidence in diagnosing diseases and helping patients better understand their medical conditions. One of the challenges in VQA-Med is how to better understand and combine the semantic features of medical images (e.g., X-rays, Magnetic Resonance Imaging (MRI)) and answer the corresponding questions accurately on unlabeled medical datasets. Method: We propose a novel Bi-branched model based on Parallel networks and Image retrieval for Medical Visual Question Answering (BPI-MVQA). The first branch of BPI-MVQA is a transformer structure built on a parallel network that achieves complementary advantages in extracting image sequence features and spatial features, and multi-modal features are implicitly fused using the multi-head self-attention mechanism. The second branch retrieves images by the similarity of features generated by the VGG16 network to obtain the text descriptions of similar images as labels. Result: The BPI-MVQA model achieves state-of-the-art results on three VQA-Med datasets, exceeding the previous best main-metric scores by 0.2%, 1.4%, and 1.1%. Conclusion: The evaluation results support the effectiveness of the BPI-MVQA model in VQA-Med. The bi-branch design helps the model answer different types of visual questions. The parallel network allows for multi-angle image feature extraction, a feature extraction method that helps the model better understand the semantic information of the image and achieve greater accuracy in the multi-classification setting of VQA-Med. In addition, image retrieval helps the model answer irregular, open-ended questions from the perspective of understanding the information provided by images. The comparison of our method with state-of-the-art methods on three datasets also shows that our method can bring substantial improvement to the VQA-Med system.
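The retrieval branch described in the abstract matches a query image against a gallery of images by the similarity of their CNN features and reuses the text associated with the nearest matches. A minimal sketch of that idea, assuming the VGG16 feature vectors have already been extracted (the function name `retrieve_labels` and the toy feature vectors below are illustrative, not from the paper):

```python
import numpy as np

def retrieve_labels(query_feat, gallery_feats, gallery_texts, top_k=1):
    """Return the text descriptions of the top_k most similar gallery images.

    query_feat:    1-D feature vector of the query image (e.g., from VGG16).
    gallery_feats: 2-D array, one precomputed feature vector per gallery image.
    gallery_texts: text description associated with each gallery image.
    """
    # L2-normalize so the dot product equals cosine similarity
    q = query_feat / np.linalg.norm(query_feat)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sims = g @ q                       # cosine similarity to every gallery image
    top_idx = np.argsort(-sims)[:top_k]  # indices of the most similar images
    return [gallery_texts[i] for i in top_idx]

# Toy usage with 2-D stand-in features (real VGG16 features are much longer)
gallery = np.array([[1.0, 0.0], [0.0, 1.0]])
texts = ["chest x-ray, no acute findings", "brain mri, axial view"]
print(retrieve_labels(np.array([0.9, 0.1]), gallery, texts))  # → first description
```

In the full model, the retrieved descriptions serve as candidate labels for open-ended questions rather than as final answers; this sketch only shows the nearest-neighbor lookup step.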
Pages: 19