Segmentation of underwater object in videos

Times Cited: 0
Authors
Zhu, Yuemei [1]
Song, Yan [1]
Zhang, Xin [1]
Lv, Pengfei [1]
Li, Guangliang [1]
He, Bo [1]
Yan, Tianhong [2]
Affiliations
[1] Ocean Univ China, Sch Informat Sci & Engn, Qingdao, Shandong, Peoples R China
[2] China Jiliang Univ, Sch Mech & Elect Engn, Hangzhou, Zhejiang, Peoples R China
Source
2018 OCEANS - MTS/IEEE KOBE TECHNO-OCEANS (OTO) | 2018
Funding
National Natural Science Foundation of China;
Keywords
Segmentation; underwater video; optical flow;
DOI
Not available
CLC Classification
U6 [Waterway Transportation]; P75 [Ocean Engineering];
Discipline Codes
0814 ; 081505 ; 0824 ; 082401 ;
Abstract
Video segmentation is a necessary step for object tracking. Some existing methods extract objects from the background through an intensive search across all frames, which involves a great deal of redundant computation and is therefore inefficient; other methods segment by clustering pixels, which tends to produce over-segmentation. Inspired by recent breakthroughs in semantic segmentation, we propose to combine appearance and motion cues, a combination that plays a key role in successfully segmenting objects in videos. To implement this idea, we combine a Deep Convolutional Neural Network (DCNN) with optical flow computed from two consecutive frames. Segmenting underwater objects in videos is made difficult by different types of suspended particles, such as water droplets and dust, and by poor or excessive lighting. To address these difficulties, we apply Contrast-Limited Adaptive Histogram Equalization (CLAHE) and a simple color-restoration method to enhance details and reduce the greenish and bluish cast of underwater images. Several DCNN variants have been applied to semantic segmentation with great success. In particular, DeepLab performs well because it captures information at multiple spatial scales: its atrous convolutions enlarge the receptive field of the filters without reducing the feature-map resolution, so the network preserves both global context and positional information. We therefore combine the methods above: optical flow is estimated on frames preprocessed with CLAHE, and accurate segmentation results are obtained with the DeepLab network. Experiments demonstrate the good performance of our method.
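The CLAHE preprocessing step can be illustrated with a minimal NumPy sketch of its core operation on a single tile. This is an illustrative reimplementation, not the authors' code: the function name, `clip_limit`, and the bin count are assumptions, and full CLAHE additionally tiles the image and bilinearly interpolates the per-tile mappings.

```python
import numpy as np

def clipped_hist_equalize(tile, clip_limit=0.01, n_bins=256):
    """Contrast-limited histogram equalization of one 8-bit image tile.

    This is the per-tile core of CLAHE: clip the histogram to bound
    contrast amplification, redistribute the clipped excess uniformly,
    then apply ordinary histogram equalization via the CDF.
    """
    # Integer-aligned bins so pixel value v falls exactly in bin v.
    hist, _ = np.histogram(tile, bins=n_bins, range=(0, n_bins))
    # Clip each bin at a fraction of the tile size (the "contrast limit").
    limit = max(1, int(clip_limit * tile.size))
    excess = np.maximum(hist - limit, 0).sum()
    # Redistribute the excess uniformly (real CLAHE may iterate; the
    # integer remainder is simply dropped in this sketch).
    hist = np.minimum(hist, limit) + excess // n_bins
    # Equalize: map the cumulative histogram onto the full 0..255 range.
    cdf = np.cumsum(hist).astype(np.float64)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1) * 255.0
    return cdf[tile.astype(np.int64)].astype(np.uint8)
```

In practice one would use OpenCV's `cv2.createCLAHE`, which implements the tiled, interpolated version efficiently; the sketch only shows why clipping the histogram limits noise amplification in flat underwater regions while still stretching local contrast.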
Pages: 4