Merging Neuroimaging and Multimedia: Methods, Opportunities, and Challenges

Cited by: 7
Authors
Liu, Tianming [1 ,2 ]
Hu, Xintao [3 ]
Li, Xiaojin [3 ]
Chen, Mo [3 ]
Han, Junwei [3 ]
Guo, Lei [3 ]
Affiliations
[1] Univ Georgia, Dept Comp Sci, Athens, GA 30602 USA
[2] Univ Georgia, Bioimaging Res Ctr, Athens, GA 30602 USA
[3] Northwestern Polytech Univ, Sch Automat, Xian 710072, Peoples R China
Funding
National Institutes of Health (USA); National Science Foundation (USA); National Natural Science Foundation of China;
Keywords
Brain-computer interface; brain imaging; cognition; multimedia; perception; semantic gaps; BRAIN RESPONSES; TOP-DOWN; VISUAL-ATTENTION; WORKING-MEMORY; NATURAL IMAGES; BOTTOM-UP; FMRI; MUSIC; STATES; EEG;
DOI
10.1109/THMS.2013.2296871
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Neuroimaging and brain mapping can provide meaningful guidance to multimedia analysis, and advanced computational multimedia analysis can in turn be used to better understand the functional mechanisms of the human brain. Essentially, brain imaging and brain mapping techniques can serve as a bridge that links the digital representation of multimedia with the perception and comprehension of its content. This paper summarizes methods that integrate brain imaging with multimedia analysis and discusses the opportunities and challenges in this interdisciplinary field. In general, quantitative modeling of brain responses during multimedia comprehension has advanced content-based multimedia studies such as image and video classification and tagging. Conversely, multimedia analysis has promoted functional brain mapping through the use of naturalistic multimedia as stimuli during neuroimaging. Key challenges and opportunities in merging neuroimaging and multimedia include the quantification of brain responses, the quantification of multimedia content, and the mapping between brain responses and computational multimedia features.
Pages: 270 - 280
Number of pages: 11