An Automatic Video Reinforcing System for TV Programs using Semantic Metadata from Closed Captions

Cited by: 4
Authors
Wang, Yuanyuan [1 ]
Kitayama, Daisuke [2 ]
Kawai, Yukiko [3 ]
Sumiya, Kazutoshi [4 ]
Ishikawa, Yoshiharu [5 ]
Affiliations
[1] Yamaguchi Univ, Grad Sch Sci & Engn, Ube, Yamaguchi, Japan
[2] Kogakuin Univ, Fac Informat Studies, Tokyo, Japan
[3] Kyoto Sangyo Univ, Kyoto, Japan
[4] Kwansei Gakuin Univ, Sch Policy Studies, Sanda, Japan
[5] Nagoya Univ, Grad Sch Informat Sci, Nagoya, Aichi, Japan
Keywords
Geographical Metadata; Geographical Relationships; Media Synchronization Mechanism; Popularity Rating; Scene Detection; Topic Extraction; Topical Metadata; Video Reconstruction Mechanism
DOI
10.4018/IJMDEM.2016010101
CLC Classification
TP31 [Computer Software]
Discipline Codes
081202; 0835
Abstract
There are various kinds of TV programs, such as travel and educational programs. While watching a program, viewers often search the Web for related information. However, because the program keeps playing, they may miss important scenes while searching, which spoils their enjoyment. A further problem is that each scene of a video covers multiple topics, and viewers differ in their levels of knowledge. It is therefore important to detect the topics in videos and to supplement the videos with related information automatically. In this paper, the authors propose a novel automatic video reinforcing system with two functions: (1) a media synchronization mechanism, which presents supplementary information synchronized with the video so that viewers can effectively understand the geographic data it contains; and (2) a video reconstruction mechanism, which generates new video content tailored to viewers' interests and knowledge by adding and removing scenes, so that viewers can enjoy the generated videos without additional searching.
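The abstract describes the two mechanisms only at a high level. Purely as an illustrative sketch of the first one (not the authors' implementation), media synchronization can be pictured as keying supplementary information to closed-caption timestamps; every name below (CaptionCue, sync_supplements, the sample data) is hypothetical.

    # A minimal sketch, assuming caption cues carry timestamps and
    # pre-extracted place names; all identifiers are hypothetical and
    # not taken from the paper.
    from dataclasses import dataclass, field

    @dataclass
    class CaptionCue:
        start: float   # cue start time (seconds)
        end: float     # cue end time (seconds)
        text: str      # closed-caption text for this span
        place_names: list = field(default_factory=list)  # geographic terms in the text

    def sync_supplements(cues, lookup):
        """Yield (timestamp, note) pairs so a player can display
        supplementary information while the matching scene is on screen."""
        for cue in cues:
            for name in cue.place_names:
                info = lookup.get(name)
                if info:
                    yield cue.start, f"{name}: {info}"

    # Example: map place names mentioned in captions to short descriptions.
    cues = [CaptionCue(12.0, 18.5, "Next we visit Kinkaku-ji in Kyoto.",
                       ["Kinkaku-ji", "Kyoto"])]
    lookup = {"Kyoto": "former imperial capital of Japan"}
    for ts, note in sync_supplements(cues, lookup):
        print(f"[{ts:6.1f}s] {note}")

A real system would also need the scene detection and topic extraction steps listed among the keywords above; the sketch assumes those have already produced the per-cue place names.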
Pages: 1-21 (21 pages)
Related Papers (50 items in total)
  • [1] Automatic Street View System Synchronized with TV Program using Geographical Metadata from Closed Captions
    Wang, Yuanyuan
    Kitayama, Daisuke
    Kawai, Yukiko
    Sumiya, Kazutoshi
    PROCEEDINGS OF THE 2014 INTERNATIONAL WORKING CONFERENCE ON ADVANCED VISUAL INTERFACES, AVI 2014, 2014, : 383 - 384
  • [2] Automatic generation of a multimedia encyclopedia from TV programs by using closed captions and detecting principal video objects
    Miura, Kikuka
    Yamada, Ichiro
    Sumiyoshi, Hideki
    Yagi, Nobuyuki
    ISM 2006: EIGHTH IEEE INTERNATIONAL SYMPOSIUM ON MULTIMEDIA, PROCEEDINGS, 2006, : 873 - 880
  • [3] Summarization of video programs based on closed captions
    Agnihotri, L
    Devara, K
    McGee, T
    Dimitrova, N
    STORAGE AND RETRIEVAL FOR MEDIA DATABASES 2001, 2001, 4315 : 599 - 607
  • [4] Attentive Semantic Video Generation using Captions
    Marwah, Tanya
    Mittal, Gaurav
    Balasubramanian, Vineeth N.
    2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2017, : 1435 - 1443
  • [5] Automatic closed captions and subtitles in academic video presentations: possibilities and shortcomings
    Veroz-Gonzalez, Ma Azahara
    Bernal, Ma Pilar Castillo
    COMPLUTENSE JOURNAL OF ENGLISH STUDIES, 2024, 32
  • [6] Using Closed Captions as Supervision for Video Activity Recognition
    Gupta, Sonal
    Mooney, Raymond J.
    PROCEEDINGS OF THE TWENTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE (AAAI-10), 2010, : 1083 - 1088
  • [7] Where do those TV closed captions come from?
    Carvell, T
    FORTUNE, 1999, 139 (08) : 57 - 57
  • [8] From raw video data to semantic content in TV Formula 1 programs
    Mihajlovic, V
    Petkovic, M
    Jonker, W
    Djordjevic-Kajan, S
    6TH WORLD MULTICONFERENCE ON SYSTEMICS, CYBERNETICS AND INFORMATICS, VOL XIV, PROCEEDINGS: IMAGE, ACOUSTIC, SPEECH AND SIGNAL PROCESSING III, 2002, : 88 - 93
  • [9] Learning Video Preferences Using Visual Features and Closed Captions
    Brezeale, Darin
    Cook, Diane J.
    IEEE MULTIMEDIA, 2009, 16 (03) : 39 - 47
  • [10] Semantic modelling using TV-Anytime genre metadata
    Butkus, Andrius
    Petersen, Michael
    INTERACTIVE TV: A SHARED EXPERIENCE, PROCEEDINGS, 2007, 4471 : 226 - +