A semiautomatic saliency model and its application to video compression

Cited by: 0
Authors
Lyudvichenko, Vitaliy
Erofeev, Mikhail
Gitman, Yury
Vatolin, Dmitriy
Source
2017 13TH IEEE INTERNATIONAL CONFERENCE ON INTELLIGENT COMPUTER COMMUNICATION AND PROCESSING (ICCP) | 2017
Keywords
Eye-Tracking; Saliency; Video Compression; Visual Attention; x264
DOI
Not available
Chinese Library Classification
TP18 (Theory of Artificial Intelligence)
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
This work aims to apply visual-attention modeling to attention-based video compression. In our comparison, we found that eye-tracking data collected from even a single observer outperforms existing automatic models by a significant margin. We therefore propose a semiautomatic approach: computer-vision algorithms use one observer's eye-tracking data as a good initial estimate to produce high-quality saliency maps that approach multi-observer eye tracking and are suitable for practical applications. Our simple algorithm exploits the temporal coherence of the visual-attention distribution and requires eye tracking of just one observer; its results match an average gaze map built from two observers. While preparing the saliency-model comparison, we paid special attention to the quality-measurement procedure. We observe that many modern visual-attention models can be improved by applying simple transforms such as brightness adjustment and blending with a center-prior model; the novel quality-evaluation procedure that we propose is invariant to such transforms. To demonstrate the practical use of our semiautomatic approach, we developed a saliency-aware modification of the x264 video encoder and performed subjective and objective evaluations. The modified encoder works with any attention model and is publicly available.
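The transforms the abstract mentions (brightness adjustment and blending with a center-prior model) can be sketched in a few lines. The function names, the `alpha` and `gamma` parameters, and the Gaussian center prior below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def center_prior(h, w, sigma=0.3):
    """Isotropic Gaussian centered on the frame: a common center-prior model.
    sigma is an assumed width in normalized coordinates, not a value from the paper."""
    ys = np.linspace(-1.0, 1.0, h)[:, None]
    xs = np.linspace(-1.0, 1.0, w)[None, :]
    g = np.exp(-(xs ** 2 + ys ** 2) / (2.0 * sigma ** 2))
    return g / g.max()

def enhance_saliency(sal, alpha=0.5, gamma=0.8):
    """Apply the two simple transforms named in the abstract to a model's
    saliency map: a brightness (gamma) adjustment, then blending with a
    center prior. alpha and gamma are hypothetical defaults."""
    sal = sal.astype(np.float64)
    sal = sal / (sal.max() + 1e-12)   # normalize to [0, 1]
    sal = sal ** gamma                # brightness adjustment
    blended = (1.0 - alpha) * sal + alpha * center_prior(*sal.shape)
    return blended / blended.max()    # renormalize to [0, 1]
```

A quality-evaluation procedure that is invariant to such transforms, as the abstract proposes, would score a model identically before and after applying `enhance_saliency`.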
Pages: 403-410 (8 pages)
Related Papers (items 21-30 of 50)
  • [21] Saliency guided Wavelet compression for low-bitrate Image and Video coding
    Barua, Souptik
    Mitra, Kaushik
    Veeraraghavan, Ashok
    2015 IEEE GLOBAL CONFERENCE ON SIGNAL AND INFORMATION PROCESSING (GLOBALSIP), 2015, : 1185 - 1189
  • [22] A New Hybrid Model for Video Shot Saliency Extraction
    Fang, Tao
    PROCEEDINGS OF THE 3RD INTERNATIONAL CONFERENCE ON COMPUTER SCIENCE AND SERVICE SYSTEM (CSSS), 2014, 109 : 157 - 161
  • [23] Video saliency detection by gestalt theory
    Fang, Yuming
    Zhang, Xiaoqiang
    Yuan, Feiniu
    Imamoglu, Nevrez
    Liu, Haiwen
    PATTERN RECOGNITION, 2019, 96
  • [24] Video saliency detection based on low-level saliency fusion and saliency-aware geodesic
    Li, Weisheng
    Feng, Siqin
    Guan, Hua-Ping
    Zhan, Ziwei
    Gong, Cheng
    JOURNAL OF ELECTRONIC IMAGING, 2019, 28 (01)
  • [25] The application of wavelet theory in video compression
    Wu, B
    Zhao, SH
    Li, SF
    IEEE 2005 International Symposium on Microwave, Antenna, Propagation and EMC Technologies for Wireless Communications Proceedings, Vols 1 and 2, 2005, : 1234 - 1236
  • [26] Camera-Assisted Video Saliency Prediction and Its Applications
    Sun, Xiao
    Hu, Yuxing
    Zhang, Luming
    Chen, Yanxiang
    Li, Ping
    Xie, Zhao
    Liu, Zhenguang
    IEEE TRANSACTIONS ON CYBERNETICS, 2018, 48 (09) : 2520 - 2530
  • [27] Deep Saliency Features for Video Saliency Prediction
    Azaza, Aymen
    Douik, Ali
    2018 INTERNATIONAL CONFERENCE ON ADVANCED SYSTEMS AND ELECTRICAL TECHNOLOGIES (IC_ASET), 2017, : 335 - 339
  • [28] Drosophila-Vision-Inspired Motion Perception Model and Its Application in Saliency Detection
    Chen, Zhe
    Mu, Qi
    Han, Guangjie
    Lu, Huimin
    IEEE TRANSACTIONS ON CONSUMER ELECTRONICS, 2024, 70 (01) : 819 - 830
  • [29] Saliency Guided Adaptive Residue Pre-Processing for Perceptually Based Video Compression
    Shaw, Mark Q.
    Allebach, Jan P.
    Delp, Edward J.
    2014 IEEE GLOBAL CONFERENCE ON SIGNAL AND INFORMATION PROCESSING (GLOBALSIP), 2014, : 994 - 998
  • [30] An efficient saliency prediction model for Unmanned Aerial Vehicle video
    Zhang, Kao
    Chen, Zhenzhong
    Li, Songnan
    Liu, Shan
    ISPRS JOURNAL OF PHOTOGRAMMETRY AND REMOTE SENSING, 2022, 194 : 152 - 166