Direction Relation Transformer for Image Captioning

Cited by: 17
Authors
Song, Zeliang [1 ,2 ]
Zhou, Xiaofei [1 ,2 ]
Dong, Linhua [1 ,2 ]
Tan, Jianlong [1 ,2 ]
Guo, Li [1 ,2 ]
Affiliations
[1] Chinese Acad Sci, Inst Informat Engn, Beijing, Peoples R China
[2] Univ Chinese Acad Sci, Sch Cyber Secur, Beijing, Peoples R China
Source
PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021 | 2021
Funding
National Natural Science Foundation of China;
Keywords
Image Captioning; Direction Relation Transformer; Multi-Head Attention; Direction Embedding;
DOI
10.1145/3474085.3475607
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Classification Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Image captioning is a challenging task that combines computer vision and natural language processing to generate a textual description of the content of an image. Recently, Transformer-based encoder-decoder architectures have shown great success in image captioning, where the multi-head attention mechanism is used to capture contextual interactions between object regions. However, such methods treat region features as a bag of tokens without considering the directional relationships between them, making it hard to understand the relative positions of objects in the image and to generate correct captions. In this paper, we propose a novel Direction Relation Transformer, termed DRT, which improves orientation perception between visual features by incorporating a relative direction embedding into multi-head attention. We first generate a relative direction matrix from the positional information of the object regions, and then explore three forms of direction-aware multi-head attention that integrate the direction embedding into the Transformer architecture. We conduct experiments on the challenging Microsoft COCO image captioning benchmark. The quantitative and qualitative results demonstrate that, by integrating the relative directional relation, our approach achieves significant improvements over the baseline model on all evaluation metrics; e.g., DRT improves the task-specific CIDEr score from 129.7% to 133.2% on the offline "Karpathy" test split.
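The paper's exact formulation is not reproduced in this record; the following is a minimal, hypothetical PyTorch sketch of one plausible form of direction-aware attention, assuming relative directions between region-box centers are quantized into angular bins and added to the attention logits as a learned bias. The names relative_direction_matrix and DirectionAwareAttention and the parameter num_bins are illustrative, not from the paper, and the paper's three integration forms are not detailed here.

import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def relative_direction_matrix(boxes, num_bins=8):
    # boxes: (N, 4) tensor of [x1, y1, x2, y2] region coordinates.
    # Quantize the angle between region centers into num_bins direction bins
    # and return an (N, N) long tensor of bin indices (hypothetical scheme).
    centers = (boxes[:, :2] + boxes[:, 2:]) / 2                       # (N, 2) box centers
    dx = centers[:, 0].unsqueeze(0) - centers[:, 0].unsqueeze(1)      # (N, N) x offsets
    dy = centers[:, 1].unsqueeze(0) - centers[:, 1].unsqueeze(1)      # (N, N) y offsets
    angle = torch.atan2(dy, dx)                                       # angles in (-pi, pi]
    bins = ((angle + math.pi) / (2 * math.pi) * num_bins).long() % num_bins
    return bins

class DirectionAwareAttention(nn.Module):
    # Single-head attention with an additive direction-embedding bias
    # (a sketch of the general idea, not the authors' exact module).
    def __init__(self, d_model, num_bins=8):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.dir_bias = nn.Embedding(num_bins, 1)   # learned scalar bias per direction bin
        self.scale = d_model ** -0.5

    def forward(self, x, dir_bins):
        # x: (N, d_model) region features; dir_bins: (N, N) direction indices.
        q, k, v = self.q(x), self.k(x), self.v(x)
        logits = q @ k.t() * self.scale                          # content-based attention logits
        logits = logits + self.dir_bias(dir_bins).squeeze(-1)    # add directional bias
        attn = F.softmax(logits, dim=-1)
        return attn @ v

# Usage: 5 regions with 512-d features and random boxes.
boxes = torch.rand(5, 4) * 100
feats = torch.randn(5, 512)
layer = DirectionAwareAttention(512)
out = layer(feats, relative_direction_matrix(boxes))
print(out.shape)  # torch.Size([5, 512])

In this sketch the direction embedding reduces to a per-bin scalar added to the attention logits; the paper explores three different ways of integrating the direction embedding into multi-head attention, which the abstract does not specify.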
Pages: 5056-5064
Number of pages: 9