Subtitle Positioning for E-learning Videos Based on Rough Gaze Estimation and Saliency Detection

Cited by: 32
Authors
Jiang, Bo [1 ]
Liu, Sijiang [1 ]
He, Liping [1 ]
Wu, Weimin [1 ]
Chen, Hongli [1 ]
Shen, Yunfei [1 ]
Affiliations
[1] Nanjing University of Posts and Telecommunications, School of Education Science and Technology, Nanjing, Jiangsu, People's Republic of China
Source
SIGGRAPH Asia 2017 Posters (SA '17), 2017
Keywords
subtitle positioning; gaze estimation; saliency detection
DOI
10.1145/3145690.3145735
CLC number
TP301 [Theory and methods]
Subject classification code
081202
Abstract
Subtitles are common in a wide variety of video categories and are especially useful when translated into the viewer's native language. Traditional subtitles are placed at the bottom of the video to avoid occluding essential content. However, frequently shifting the gaze between important video content and the subtitle area hurts the viewer's ability to focus on the video itself. Recently, some research has explored more flexible subtitle positioning strategies, but these methods are effective only under restrictions on the video content and the devices used. In this work, we propose a novel subtitle content organization and placement framework based on rough gaze estimation and saliency detection.
Pages: 2
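
The abstract only outlines the framework, so the listing below is a minimal sketch (not the authors' implementation) of the saliency-driven part of the placement idea: compute a per-frame saliency map, score a few candidate subtitle bands, and pick the one overlapping the least salient content. It assumes opencv-contrib-python (for cv2.saliency) and NumPy; the band layout, the choose_subtitle_band helper, and the frame.png input are illustrative assumptions, and the paper's rough gaze estimation step is not modeled here.

# Minimal sketch of saliency-guided subtitle band selection (illustrative only).
import cv2
import numpy as np

def choose_subtitle_band(frame, band_height_ratio=0.12):
    """Return (y0, y1) of the horizontal band with the lowest mean saliency."""
    h, w = frame.shape[:2]
    band_h = max(1, int(h * band_height_ratio))

    # Spectral-residual static saliency from opencv-contrib.
    saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, sal_map = saliency.computeSaliency(frame)
    if not ok:
        return h - band_h, h  # fall back to the traditional bottom band

    # Candidate bands near the top, middle, and bottom of the frame
    # (an illustrative choice; the paper's framework additionally uses
    # rough gaze estimation to constrain where subtitles may go).
    candidates = [0, (h - band_h) // 2, h - band_h]
    scores = [float(sal_map[y:y + band_h, :].mean()) for y in candidates]
    y0 = candidates[int(np.argmin(scores))]
    return y0, y0 + band_h

if __name__ == "__main__":
    frame = cv2.imread("frame.png")  # any extracted video frame (hypothetical file)
    if frame is not None:
        y0, y1 = choose_subtitle_band(frame)
        print(f"Place subtitle in rows {y0}..{y1}")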