How is Gaze Influenced by Image Transformations? Dataset and Model

Cited by: 70
Authors
Che, Zhaohui [1 ]
Borji, Ali [2 ]
Zhai, Guangtao [1 ]
Min, Xiongkuo [1 ]
Guo, Guodong [3 ,4 ]
Le Callet, Patrick [5 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Inst Image Commun & Network Engn, Shanghai Key Lab Digital Media Proc & Transmiss, Shanghai 200240, Peoples R China
[2] MarkableAI Inc, Brooklyn, NY 11201 USA
[3] Baidu Res, Inst Deep Learning, Beijing 100193, Peoples R China
[4] Natl Engn Lab Deep Learning Technol & Applicat, Beijing, Peoples R China
[5] Univ Nantes, Lab Sci Numer Nantes, Equipe Image Percept & Interact, F-44035 Nantes, France
Funding
US National Science Foundation; China Postdoctoral Science Foundation;
Keywords
Data models; Observers; Image resolution; Visualization; Mathematical model; Semantics; Robustness; Human gaze; saliency prediction; data augmentation; generative adversarial networks; model robustness; VISUAL-ATTENTION; SALIENCY;
DOI
10.1109/TIP.2019.2945857
CLC number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Data size is the bottleneck for developing deep saliency models, because collecting eye-movement data is very time-consuming and expensive. Most current studies on human attention and saliency modeling have used high-quality stereotype stimuli. In the real world, however, captured images undergo various types of transformations. Can we use these transformations to augment existing saliency datasets? Here, we first create a novel saliency dataset including fixations of 10 observers over 1900 images degraded by 19 types of transformations. Second, by analyzing eye movements, we find that observers look at different locations over transformed versus original images. Third, we utilize the new data over transformed images, called data augmentation transformations (DATs), to train deep saliency models. We find that label-preserving DATs with negligible impact on human gaze boost saliency prediction, whereas other DATs that severely alter human gaze degrade performance. These label-preserving, valid augmentation transformations provide a way to enlarge existing saliency datasets. Finally, we introduce a novel saliency model based on generative adversarial networks (dubbed GazeGAN). A modified U-Net serves as the generator of GazeGAN, combining the classic "skip connection" with a novel "center-surround connection" (CSC) module. The proposed CSC module mitigates trivial artifacts while emphasizing semantically salient regions, and increases model nonlinearity, yielding better robustness against transformations. Extensive experiments and comparisons indicate that GazeGAN achieves state-of-the-art performance over multiple datasets. We also provide a comprehensive comparison of 22 saliency models on various transformed scenes, contributing a new robustness benchmark to the saliency community. Our code and dataset are available at: https://github.com/CZHQuality/Sal-CFS-GAN.
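The abstract's description of the CSC module can be illustrated with a minimal PyTorch sketch, assuming that "center-surround" means contrasting an encoder feature map (center) against a heavily smoothed copy of itself (surround). The class name CenterSurroundConnection, the pooling-based surround estimate, and the 1x1 fusion convolution below are illustrative assumptions, not the paper's exact architecture; consult the linked repository for the authors' implementation.

```python
# Hypothetical sketch of a "center-surround connection" (CSC) block: the surround
# signal is approximated by average pooling + upsampling, and the center-surround
# contrast is fused back into the feature map. Not the paper's exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CenterSurroundConnection(nn.Module):
    """Contrast-style feature: center minus a low-pass 'surround' estimate."""

    def __init__(self, channels: int, surround_scale: int = 4):
        super().__init__()
        self.surround_scale = surround_scale
        # 1x1 conv fuses the original features with the center-surround contrast.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Surround: spatial smoothing via average pooling, then upsampling back.
        surround = F.avg_pool2d(x, kernel_size=self.surround_scale)
        surround = F.interpolate(surround, size=x.shape[-2:],
                                 mode="bilinear", align_corners=False)
        contrast = torch.relu(x - surround)      # center-surround difference
        return self.fuse(torch.cat([x, contrast], dim=1))


if __name__ == "__main__":
    block = CenterSurroundConnection(channels=64)
    feats = torch.randn(1, 64, 32, 32)           # dummy encoder feature map
    print(block(feats).shape)                    # torch.Size([1, 64, 32, 32])
```

In a U-Net-style generator, a block like this would sit alongside the usual skip connections, passing contrast-enhanced features from encoder to decoder instead of the raw feature maps.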
Pages: 2287-2300
Number of pages: 14