Keyframe extraction from laparoscopic videos based on visual saliency detection

Cited by: 28
Authors
Loukas, Constantinos [1 ]
Varytimidis, Christos [2 ]
Rapantzikos, Konstantinos [2 ]
Kanakis, Meletios A. [3 ]
Affiliations
[1] Univ Athens, Med Sch, Lab Med Phys, Mikras Asias 75 Str, Athens 11527, Greece
[2] Natl & Tech Univ Athens, Sch Elect & Comp Engn, Athens, Greece
[3] Great Ormond St Hosp Sick Children, Cardiothorac Surg Unit, London, England
Keywords
Video analysis; Keyframe extraction; Hidden Markov multivariate autoregressive models; Visual saliency; ATTENTION DRIVEN FRAMEWORK; ENDOSCOPIC SURGERY VIDEOS; KEY-FRAMES; CLASSIFICATION; MODELS;
DOI
10.1016/j.cmpb.2018.07.004
Chinese Library Classification (CLC)
TP39 [Computer applications];
Discipline codes
081203; 0835;
Abstract
Background and objective: Laparoscopic surgery offers the potential for video recording of the operation, which is important for technique evaluation, cognitive training, patient briefing and documentation. An effective way for video content representation is to extract a limited number of keyframes with semantic information. In this paper we present a novel method for keyframe extraction from individual shots of the operational video. Methods: The laparoscopic video was first segmented into video shots using an objectness model, which was trained to capture significant changes in the endoscope field of view. Each frame of a shot was then decomposed into three saliency maps in order to model the preference of human vision to regions with higher differentiation with respect to color, motion and texture. The accumulated responses from each map provided a 3D time series of saliency variation across the shot. The time series was modeled as a multivariate autoregressive process with hidden Markov states (HMMAR model). This approach allowed the temporal segmentation of the shot into a predefined number of states. A representative keyframe was extracted from each state based on the highest state-conditional probability of the corresponding saliency vector. Results: Our method was tested on 168 video shots extracted from various laparoscopic cholecystectomy operations from the publicly available Cholec80 dataset. Four state-of-the-art methodologies were used for comparison. The evaluation was based on two assessment metrics: Color Consistency Score (CCS), which measures the color distance between the ground truth (GT) and the closest keyframe, and Temporal Consistency Score (TCS), which considers the temporal proximity between GT and extracted keyframes. About 81% of the extracted keyframes matched the color content of the GT keyframes, compared to 77% yielded by the second-best method. 
The TCS values of the proposed and the second-best method were close to 1.9 and 1.4, respectively. Conclusions: Our results demonstrated that the proposed method yields superior content and temporal consistency with respect to the ground truth. The extracted keyframes provide highly semantic information that may be used for various applications related to surgical video content representation, such as workflow analysis, video summarization and retrieval. (C) 2018 Elsevier B.V. All rights reserved.
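The selection rule described in the abstract — segment the 3D saliency time series of a shot into temporal states, then keep the frame with the highest state-conditional probability from each state — can be sketched as follows. This is a simplified, hypothetical illustration: the paper fits a hidden Markov multivariate autoregressive (HMMAR) model for the temporal segmentation, whereas here a crude uniform temporal split with one Gaussian per segment stands in for it; only the per-state maximum-probability keyframe selection mirrors the described method.

```python
import numpy as np

def extract_keyframes(saliency, n_states=4):
    """Pick one keyframe per temporal state of a shot.

    saliency : (T, 3) array -- accumulated color/motion/texture
               saliency responses per frame (the 3D time series
               described in the abstract).
    Returns one frame index per state, in temporal order.

    NOTE: simplified sketch. A uniform split into `n_states`
    contiguous segments replaces the paper's HMMAR segmentation;
    within each segment we fit a Gaussian and keep the frame with
    the highest state-conditional (log-)density.
    """
    T = saliency.shape[0]
    bounds = np.linspace(0, T, n_states + 1).astype(int)
    keyframes = []
    for s in range(n_states):
        seg = saliency[bounds[s]:bounds[s + 1]]
        mu = seg.mean(axis=0)
        cov = np.cov(seg.T) + 1e-6 * np.eye(3)  # regularize covariance
        inv = np.linalg.inv(cov)
        d = seg - mu
        # Mahalanobis term of the Gaussian log-density for each frame
        logp = -0.5 * np.einsum('ij,jk,ik->i', d, inv, d)
        keyframes.append(bounds[s] + int(np.argmax(logp)))
    return keyframes

# Toy shot: 200 frames whose saliency drifts over time
rng = np.random.default_rng(0)
sal = np.cumsum(rng.normal(size=(200, 3)), axis=0)
print(extract_keyframes(sal, n_states=4))
```

In the actual method, the state boundaries and state-conditional probabilities come from the fitted HMMAR model rather than from a fixed split, so states need not be equal-length or even contiguous.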
Pages: 13-23
Number of pages: 11