Prediction of visual attention with deep CNN on artificially degraded videos for studies of attention of patients with Dementia

Cited: 0
Authors
Souad Chaabouni
Jenny Benois-Pineau
François Tison
Chokri Ben Amar
Akka Zemmari
Affiliations
[1] University of Bordeaux, LaBRI UMR 5800
[2] University of Sfax, REGIM
[3] CHU de Bordeaux-GH Pellegrin, Lab LR11ES48
Source
Multimedia Tools and Applications | 2017, Vol. 76
Keywords
Deep CNN; Saliency; Normal video; Degraded video; Dementia diseases
DOI
Not available
Abstract
Studies of the visual attention of patients with dementia, such as Parkinson’s Disease Dementia and Alzheimer’s Disease, are a promising avenue for non-invasive diagnostics. Past research showed that people suffering from dementia do not react to degradations in still images. Attempts are being made to study their visual attention relative to video content, where delays in their reactions to novelty, and to “unusual” novelty, in the visual scene are expected. Nevertheless, large-scale screening of the population is possible only if sufficiently robust automatic prediction models can be built. In medical protocols, the detection of dementia-related behaviour during visual content observation is always performed in comparison with healthy, “normal control” subjects. Hence, developing automatic prediction models for the specific visual content used in psycho-visual experiments involving patients with dementia (PwD) is a research question per se. The difficulty of such a prediction resides in the very small amount of training data available. In this paper, the reaction of healthy normal control subjects to degraded areas in videos was studied. Furthermore, in order to build an automatic prediction model of salient areas in intentionally degraded videos for PwD studies, a deep learning architecture was designed, and an optimal transfer learning strategy was deployed to train the model with a very small amount of training data. Comparisons with gaze fixation maps and with classical visual attention prediction models were performed. The results regarding the reaction of normal control subjects to degraded areas in videos are promising.
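The abstract's central technical idea is transfer learning under severe data scarcity: reuse pretrained layers as a fixed feature extractor and fine-tune only a small head on the few available samples. The following is a minimal, hypothetical sketch of that strategy using plain NumPy with a toy two-layer network; the paper's actual deep CNN architecture, data, and training procedure are not reproduced here, and all dimensions and names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained", frozen feature extractor (stands in for early CNN
# layers that are NOT updated during fine-tuning). Weights are random
# here purely for illustration.
W1 = rng.normal(size=(4, 8))

def features(x):
    # ReLU features from the frozen layer; never modified below.
    return np.maximum(x @ W1, 0.0)

# Very small labelled set (6 samples), mimicking the scarce training
# data available in PwD studies. Binary toy label: salient / not.
X = rng.normal(size=(6, 4))
y = (X[:, 0] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Trainable head: the ONLY parameters updated during fine-tuning.
w2 = np.zeros(8)

# Fine-tune the head with gradient descent on the logistic loss.
for _ in range(500):
    p = sigmoid(features(X) @ w2)
    grad = features(X).T @ (p - y) / len(y)
    w2 -= 0.5 * grad

acc = np.mean((sigmoid(features(X) @ w2) > 0.5) == (y == 1))
print(f"training accuracy of the fine-tuned head: {acc:.2f}")
```

Freezing the feature extractor drastically reduces the number of trainable parameters, which is what makes learning from a handful of samples feasible; the same principle, at CNN scale, underlies the transfer learning strategy the abstract describes.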
Pages: 22527–22546
Page count: 19
Related Papers
23 items in total
  • [1] Prediction of visual attention with deep CNN on artificially degraded videos for studies of attention of patients with Dementia
    Chaabouni, Souad
    Benois-Pineau, Jenny
    Tison, Francois
    Ben Amar, Chokri
    Zemmari, Akka
    MULTIMEDIA TOOLS AND APPLICATIONS, 2017, 76 (21) : 22527 - 22546
  • [2] Visual attention based surveillance videos compression
    Guraya, Fahad Fazal Elahi
    Medina, Victor
    Cheikh, Faouzi Alaya
    COLOR SCIENCE AND ENGINEERING SYSTEMS, TECHNOLOGIES, AND APPLICATIONS: TWENTIETH COLOR AND IMAGING CONFERENCE, 2012, : 2 - 8
  • [3] SALYPATH: A DEEP-BASED ARCHITECTURE FOR VISUAL ATTENTION PREDICTION
    Kerkouri, Mohamed A.
    Tliba, Marouane
    Chetouani, Aladine
    Harba, Rachid
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 1464 - 1468
  • [4] HOW SOUND AFFECTS VISUAL ATTENTION IN OMNIDIRECTIONAL VIDEOS
    Li, Jie
    Zhai, Guangtao
    Zhu, Yucheng
    Zhou, Jun
    Zhang, Xiao-Ping
    2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2022, : 3066 - 3070
  • [5] Visual Attention: Deep Rare Features
    Mancas, Matei
    Kong, Phutphalla
    Gosselin, Bernard
    2020 JOINT 9TH INTERNATIONAL CONFERENCE ON INFORMATICS, ELECTRONICS & VISION (ICIEV) AND 2020 4TH INTERNATIONAL CONFERENCE ON IMAGING, VISION & PATTERN RECOGNITION (ICIVPR), 2020,
  • [6] Visual Attention with Deep Neural Networks
    Canziani, Alfredo
    Culurciello, Eugenio
    2015 49TH ANNUAL CONFERENCE ON INFORMATION SCIENCES AND SYSTEMS (CISS), 2015,
  • [7] Neural networks based visual attention model for surveillance videos
    Guraya, Fahad Fazal Elahi
    Cheikh, Faouzi Alaya
    NEUROCOMPUTING, 2015, 149 : 1348 - 1359
  • [8] Using visual attention estimation on videos for automated prediction of autism spectrum disorder and symptom severity in preschool children
    de Belen, Ryan Anthony J.
    Eapen, Valsamma
    Bednarz, Tomasz
    Sowmya, Arcot
    PLOS ONE, 2024, 19 (02):
  • [9] Visual Attention Analysis and Prediction on Human Faces with Mole
    Wei, Qianqian
    Zhai, Guangtao
    Hu, Chunjia
    Min, Xiongkuo
    2016 30TH ANNIVERSARY OF VISUAL COMMUNICATION AND IMAGE PROCESSING (VCIP), 2016,
  • [10] Visual attention prediction for images with leading line structure
    Mochizuki, Issei
    Toyoura, Masahiro
    Mao, Xiaoyang
    VISUAL COMPUTER, 2018, 34 (6-8): : 1031 - 1041