Self Supervision for Attention Networks

Cited by: 3
Authors
Patro, Badri N. [1 ,4 ]
Kasturi, G. S. [2 ,5 ]
Jain, Ansh [2 ]
Namboodiri, Vinay P. [3 ]
Affiliations
[1] IIT Kanpur, Kanpur, Uttar Pradesh, India
[2] NSUT, Delhi, India
[3] Univ Bath, Bath, Avon, England
[4] Google, Mountain View, CA 94043 USA
[5] Netaji Subhas Univ Technol, Delhi, India
Source
2021 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2021) | 2021
Keywords
DOI
10.1109/WACV48630.2021.00077
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In recent years, the attention mechanism has become a fairly popular concept and has proven successful in many machine learning applications. However, deep learning models typically do not employ supervision for these attention mechanisms, even though such supervision can improve a model's performance significantly. In this paper, we tackle this limitation and propose a novel method that improves the attention mechanism by inducing "self-supervision". We devise a technique to generate desirable attention maps for any model that utilizes an attention module. This is achieved by examining the model's output for different regions sampled from the input and obtaining the attention probability distributions that enhance the proficiency of the model. The attention distributions thus obtained are used for supervision. We rely on the fact that attenuating the unimportant parts allows a model to attend to more salient regions, thus strengthening prediction accuracy. The quantitative and qualitative results reported in this paper show that this method improves both the attention mechanism and the model's accuracy. In addition to the task of Visual Question Answering (VQA), we also show results on image classification and text classification to demonstrate that our method generalizes to any vision-and-language model that uses an attention module.
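The abstract describes scoring input regions by their effect on the model's output and converting those scores into a target attention distribution used for supervision. The following is a minimal sketch of one way such supervision could be wired up, not the authors' released implementation: the masking scheme, the temperature, the KL-divergence penalty, and all function names here are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's code): build a target attention
# distribution by measuring how much each sampled input region contributes to the
# model's correct output, then supervise the model's own attention with it.
import torch
import torch.nn.functional as F


def target_attention_from_regions(model, x, region_masks, label, temperature=0.1):
    """Score each region by the drop in the true-class probability when that
    region is attenuated; normalize the scores into a probability distribution.

    model        : callable returning class logits, shape (1, num_classes)
    x            : input tensor of shape (1, C, H, W)
    region_masks : tensor of shape (R, 1, H, W), 1s over each sampled region
    label        : ground-truth class index (int)
    """
    with torch.no_grad():
        p_full = F.softmax(model(x), dim=-1)[0, label]
        scores = []
        for m in region_masks:
            x_masked = x * (1.0 - m)                      # attenuate one region
            p_masked = F.softmax(model(x_masked), dim=-1)[0, label]
            scores.append(torch.clamp(p_full - p_masked, min=0.0))
        scores = torch.stack(scores)
    # Regions whose removal hurts the prediction most receive the highest mass.
    return F.softmax(scores / temperature, dim=0)


def attention_supervision_loss(pred_attention, target_attention, eps=1e-8):
    """KL divergence between the model's attention over regions and the target;
    added to the task loss as an auxiliary supervision term."""
    pred = pred_attention.clamp_min(eps)
    target = target_attention.clamp_min(eps)
    return torch.sum(target * (target.log() - pred.log()))
```

In this reading, attenuating an unimportant region barely changes the true-class probability, so it receives little target mass, while salient regions receive more; penalizing the divergence between the model's attention and this target pushes attention toward the salient regions, in line with the intuition stated in the abstract.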
Pages: 726 - 735
Number of pages: 10