ViOCRVQA: novel benchmark dataset and VisionReader for visual question answering by understanding Vietnamese text in images

Cited by: 0
Authors
Pham, Huy Quang [1 ,2 ]
Nguyen, Thang Kien-Bao [1 ,2 ]
Nguyen, Quan Van [1 ,2 ]
Tran, Dan Quang [1 ,2 ]
Nguyen, Nghia Hieu [1 ,2 ]
Nguyen, Kiet Van [1 ,2 ]
Nguyen, Ngan Luu-Thuy [1 ,2 ]
Affiliations
[1] Univ Informat Technol, Fac Informat Sci & Engn, Ho Chi Minh City, Vietnam
[2] Vietnam Natl Univ, Ho Chi Minh City, Vietnam
Keywords
OCR-VQA; Visual question answering; VQA dataset; OCR
DOI
10.1007/s00530-025-01696-7
Chinese Library Classification
TP [automation and computer technology]
Discipline code
0812
Abstract
Optical Character Recognition-Visual Question Answering (OCR-VQA) is the task of answering questions about the text contained in images. It has been developed extensively for English in recent years, but studies in low-resource languages such as Vietnamese remain limited. To this end, we introduce a novel dataset, ViOCRVQA (Vietnamese Optical Character Recognition-Visual Question Answering dataset), consisting of 28,000+ images and 120,000+ question-answer pairs. Every image in the dataset contains text, and the questions ask about information relevant to that text. We adapt ideas from state-of-the-art methods proposed for English to conduct experiments on our dataset, revealing the challenges and difficulties inherent in a Vietnamese dataset. Furthermore, we introduce a novel approach, called VisionReader, which achieved 41.16% EM and a 69.90% F1-score on the test set. The results show that the OCR system plays an important role in VQA models on the ViOCRVQA dataset, and that objects in the image also help improve model performance. Our dataset is openly available at https://github.com/qhnhynmm/ViOCRVQA.git for further research on the OCR-VQA task in Vietnamese. The code for the proposed method, along with the models used in the experimental evaluation, is available at https://github.com/minhquan6203/VisionReader.git.
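The EM and F1 figures above correspond to the exact-match and token-overlap metrics conventionally used in extractive and OCR-based QA; a minimal sketch of those standard metrics (not necessarily the authors' exact normalization) might look like:

```python
from collections import Counter

def exact_match(prediction: str, reference: str) -> float:
    """1.0 if the case-normalized prediction equals the reference, else 0.0."""
    return float(prediction.strip().lower() == reference.strip().lower())

def token_f1(prediction: str, reference: str) -> float:
    """Token-level F1 between a predicted answer and a reference answer."""
    pred_tokens = prediction.strip().lower().split()
    ref_tokens = reference.strip().lower().split()
    if not pred_tokens or not ref_tokens:
        # Both empty counts as a match; one empty counts as a miss.
        return float(pred_tokens == ref_tokens)
    # Multiset intersection gives the number of overlapping tokens.
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```

Corpus-level scores are then simply the means of these per-question values over the test set.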
Pages: 22