Keyframe extraction from laparoscopic videos based on visual saliency detection

Cited by: 26
Authors
Loukas, Constantinos [1 ]
Varytimidis, Christos [2 ]
Rapantzikos, Konstantinos [2 ]
Kanakis, Meletios A. [3 ]
Affiliations
[1] Univ Athens, Med Sch, Lab Med Phys, Mikras Asias 75 Str, Athens 11527, Greece
[2] Natl & Tech Univ Athens, Sch Elect & Comp Engn, Athens, Greece
[3] Great Ormond St Hosp Sick Children, Cardiothorac Surg Unit, London, England
Keywords
Video analysis; Keyframe extraction; Hidden Markov multivariate autoregressive models; Visual saliency; ATTENTION DRIVEN FRAMEWORK; ENDOSCOPIC SURGERY VIDEOS; KEY-FRAMES; CLASSIFICATION; MODELS;
DOI
10.1016/j.cmpb.2018.07.004
Chinese Library Classification (CLC)
TP39 [Applications of computers];
Discipline classification codes
081203 ; 0835 ;
Abstract
Background and objective: Laparoscopic surgery offers the potential for video recording of the operation, which is important for technique evaluation, cognitive training, patient briefing and documentation. An effective way for video content representation is to extract a limited number of keyframes with semantic information. In this paper we present a novel method for keyframe extraction from individual shots of the operational video. Methods: The laparoscopic video was first segmented into video shots using an objectness model, which was trained to capture significant changes in the endoscope field of view. Each frame of a shot was then decomposed into three saliency maps in order to model the preference of human vision to regions with higher differentiation with respect to color, motion and texture. The accumulated responses from each map provided a 3D time series of saliency variation across the shot. The time series was modeled as a multivariate autoregressive process with hidden Markov states (HMMAR model). This approach allowed the temporal segmentation of the shot into a predefined number of states. A representative keyframe was extracted from each state based on the highest state-conditional probability of the corresponding saliency vector. Results: Our method was tested on 168 video shots extracted from various laparoscopic cholecystectomy operations from the publicly available Cholec80 dataset. Four state-of-the-art methodologies were used for comparison. The evaluation was based on two assessment metrics: Color Consistency Score (CCS), which measures the color distance between the ground truth (GT) and the closest keyframe, and Temporal Consistency Score (TCS), which considers the temporal proximity between GT and extracted keyframes. About 81% of the extracted keyframes matched the color content of the GT keyframes, compared to 77% yielded by the second-best method. The TCS of the proposed and the second-best method was close to 1.9 and 1.4 respectively. Conclusions: Our results demonstrated that the proposed method yields superior performance in terms of content and temporal consistency to the ground truth. The extracted keyframes provided highly semantic information that may be used for various applications related to surgical video content representation, such as workflow analysis, video summarization and retrieval. (C) 2018 Elsevier B.V. All rights reserved.
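The abstract describes a concrete pipeline: per-frame color, motion and texture saliency responses are accumulated into a 3D time series, an HMMAR model segments the shot into states, and one keyframe per state is selected by the highest state-conditional probability of its saliency vector. The sketch below is a hypothetical illustration of those last steps, not the authors' implementation: it assumes simple per-frame proxies for the three saliency maps and uses a plain Gaussian HMM (via the hmmlearn package) as a stand-in for the HMMAR model; the function names (saliency_series, extract_keyframes) and feature choices are illustrative assumptions.

```python
# Hypothetical sketch of the keyframe-selection pipeline outlined in the
# abstract. NOT the authors' code: a Gaussian HMM replaces the HMMAR model,
# and crude per-frame statistics replace the saliency maps.
import cv2                      # OpenCV, for basic color/texture operations
import numpy as np
from hmmlearn.hmm import GaussianHMM
from scipy.stats import multivariate_normal

def saliency_series(frames):
    """Return an (n_frames, 3) array of accumulated color/motion/texture responses."""
    feats, prev_gray = [], None
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Color: spread of the chroma channels (rough proxy for color saliency).
        lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)
        color = float(lab[..., 1].std() + lab[..., 2].std())
        # Motion: mean absolute difference from the previous frame.
        motion = 0.0 if prev_gray is None else float(
            np.abs(gray.astype(np.float32) - prev_gray).mean())
        # Texture: mean magnitude of the Laplacian response.
        texture = float(np.abs(cv2.Laplacian(gray, cv2.CV_32F)).mean())
        feats.append([color, motion, texture])
        prev_gray = gray.astype(np.float32)
    return np.asarray(feats)

def extract_keyframes(frames, n_states=4):
    """Segment the shot into n_states and return one keyframe index per state."""
    X = saliency_series(frames)
    hmm = GaussianHMM(n_components=n_states, covariance_type="full",
                      n_iter=100, random_state=0).fit(X)
    states = hmm.predict(X)
    keyframes = []
    for s in range(n_states):
        idx = np.flatnonzero(states == s)
        if idx.size == 0:
            continue
        # State-conditional likelihood of each saliency vector under state s.
        lik = multivariate_normal(hmm.means_[s], hmm.covars_[s],
                                  allow_singular=True).pdf(X[idx])
        keyframes.append(int(idx[np.argmax(np.atleast_1d(lik))]))
    return sorted(keyframes)
```

Picking, within each state, the frame whose saliency vector has the highest state-conditional likelihood mirrors the selection rule described in the abstract; the paper, however, conditions on an autoregressive observation model rather than the i.i.d. Gaussian emissions assumed here.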
Pages: 13 - 23
Page count: 11
Related papers
50 records in total
  • [31] Visual saliency detection based on region descriptors and prior knowledge
    Wang, Weining
    Cai, Dong
    Xu, Xiangmin
    Liew, Alan Wee-Chung
    SIGNAL PROCESSING-IMAGE COMMUNICATION, 2014, 29 (03) : 424 - 433
  • [32] Detection of tiny defects in radiographic images based on visual saliency
    Yu, Yongwei
    Yin, Guofu
    Yin, Ying
    Du, Liuqing
    Nongye Jixie Xuebao/Transactions of the Chinese Society for Agricultural Machinery, 2015, 46 (07): 365 - 371
  • [33] A MULTI-CUES BASED APPROACH FOR VISUAL SALIENCY DETECTION
    Zhang, Qiaorong
    Wang, Yingfeng
    INTERNATIONAL JOURNAL OF INNOVATIVE COMPUTING INFORMATION AND CONTROL, 2021, 17 (04): 1435 - 1446
  • [34] ROAD SIGN DETECTION BASED ON VISUAL SALIENCY AND SHAPE ANALYSIS
    Zhang, Tao
    Lv, Jingqin
    Yang, Jie
    2013 20TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP 2013), 2013, : 3667 - 3670
  • [35] Fabric Defect Detection Based on Visual Saliency Map and SVM
    Zhang, Hao
    Hu, Jiajuan
    He, Zhiyong
    2017 2ND IEEE INTERNATIONAL CONFERENCE ON COMPUTATIONAL INTELLIGENCE AND APPLICATIONS (ICCIA), 2017, : 322 - 326
  • [36] Visual saliency detection based on homology similarity and an experimental evaluation
    Chen, Zhihui
    Wang, Hanzi
    Zhang, Liming
    Yan, Yan
    Liao, Hong-Yuan Mark
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2016, 40 : 251 - 264
  • [37] Visual saliency-based vehicle logo region detection
    Zhao, Yao
    Zhou, Fuqiang
    Wang, Xinming
    PROCEEDINGS OF THE 2015 5TH INTERNATIONAL CONFERENCE ON COMPUTER SCIENCES AND AUTOMATION ENGINEERING, 2016, 42 : 510 - 514
  • [38] Visual saliency detection with center shift
    Yang, Weibin
    Tang, Yuan Yan
    Fang, Bin
    Shang, Zhaowei
    Lin, Yuewei
    NEUROCOMPUTING, 2013, 103 : 63 - 74
  • [39] Automatic object detection by visual saliency
    Shen, Lihua
    Journal of Convergence Information Technology, 2012, 7 (12) : 247 - 255
  • [40] A Saliency Based Approach for Foreground Extraction from a Video
    Ahsan, Sk Md Masudul
    Nafew, Abu Naser Md
    Amit, Rifat Haque
    2017 3RD INTERNATIONAL CONFERENCE ON ELECTRICAL INFORMATION AND COMMUNICATION TECHNOLOGY (EICT 2017), 2017,