ViOCRVQA: novel benchmark dataset and VisionReader for visual question answering by understanding Vietnamese text in images

Cited by: 0
Authors
Pham, Huy Quang [1 ,2 ]
Nguyen, Thang Kien-Bao [1 ,2 ]
Nguyen, Quan Van [1 ,2 ]
Tran, Dan Quang [1 ,2 ]
Nguyen, Nghia Hieu [1 ,2 ]
Nguyen, Kiet Van [1 ,2 ]
Nguyen, Ngan Luu-Thuy [1 ,2 ]
Affiliations
[1] Univ Informat Technol, Fac Informat Sci & Engn, Ho Chi Minh City, Vietnam
[2] Vietnam Natl Univ, Ho Chi Minh City, Vietnam
Keywords
OCR-VQA; Visual question answering; VQA dataset; OCR
DOI
10.1007/s00530-025-01696-7
CLC number
TP [Automation Technology, Computer Technology]
Subject classification code
0812
Abstract
Optical Character Recognition-Visual Question Answering (OCR-VQA) is the task of answering questions about the textual information contained in images; it has been developed extensively for English in recent years. However, studies of this task in low-resource languages such as Vietnamese remain limited. To this end, we introduce a novel dataset, ViOCRVQA (Vietnamese Optical Character Recognition-Visual Question Answering dataset), consisting of 28,000+ images and 120,000+ question-answer pairs. Every image in the dataset contains text, and the questions ask about information relevant to that text. We adapt ideas from state-of-the-art methods proposed for English to conduct experiments on our dataset, revealing the challenges and difficulties inherent in a Vietnamese dataset. Furthermore, we introduce a novel approach, called VisionReader, which achieves 41.16% Exact Match (EM) and 69.90% F1-score on the test set. The results show that the OCR system plays an important role in VQA models on the ViOCRVQA dataset, and that the objects in the image also help improve model performance. We release our dataset at https://github.com/qhnhynmm/ViOCRVQA.git for further research on the OCR-VQA task in Vietnamese. The code for the proposed method, along with the models used in the experimental evaluation, is available at https://github.com/minhquan6203/VisionReader.git.
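The reported EM and F1 figures follow the string-matching metrics conventional in OCR-VQA evaluation. Below is a minimal Python sketch of how these two metrics are typically computed per prediction-reference pair; the normalization and function names are illustrative assumptions, not the authors' actual evaluation script (see their repository for that).

from collections import Counter

def exact_match(prediction: str, reference: str) -> float:
    # EM: 1.0 only if the normalized answer strings are identical.
    return float(prediction.strip().lower() == reference.strip().lower())

def token_f1(prediction: str, reference: str) -> float:
    # Token-level F1: harmonic mean of precision and recall
    # over tokens shared between prediction and reference.
    pred_tokens = prediction.strip().lower().split()
    ref_tokens = reference.strip().lower().split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

# A partially correct answer scores 0 on EM but a nonzero F1:
print(exact_match("nha xuat ban tre", "nha xuat ban kim dong"))  # 0.0
print(token_f1("nha xuat ban tre", "nha xuat ban kim dong"))     # ~0.67

This also explains why the F1-score (69.90%) can sit well above EM (41.16%): predictions that recover only part of the answer text still earn partial F1 credit.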
Pages: 22