Semantic Instance Meets Salient Object: Study on Video Semantic Salient Instance Segmentation

Cited by: 10
Authors
Trung-Nghia Le [1 ,3 ]
Sugimoto, Akihiro [2 ]
Affiliations
[1] Grad Univ Adv Studies SOKENDAI, Hayama, Japan
[2] Natl Inst Informat, Tokyo, Japan
[3] Kyushu Univ, Fukuoka, Fukuoka, Japan
Source
2019 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV) | 2019
Keywords
TRACKING;
DOI
10.1109/WACV.2019.00194
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Codes
0808 ; 0809 ;
Abstract
Focusing only on semantic instances that are salient in a scene benefits robot navigation and self-driving cars more than considering all objects in the whole scene. This paper pushes the envelope on salient regions in a video by decomposing them into semantically meaningful components, namely, semantic salient instances. We provide a baseline for the new task of video semantic salient instance segmentation (VSSIS): the Semantic Instance - Salient Object (SISO) framework. SISO is simple yet efficient, leveraging the advantages of two different segmentation tasks, i.e., semantic instance segmentation and salient object segmentation, and eventually fusing their results. In SISO, we introduce a sequential fusion that examines overlapping pixels between semantic instances and salient regions to obtain non-overlapping instances one by one. We also introduce a recurrent instance propagation to refine the shapes and semantic labels of instances, and an identity tracking to maintain both the identity and the semantic label of each instance over the entire video. Experimental results demonstrate the effectiveness of our SISO baseline, which can handle occlusions in videos. In addition, to tackle the VSSIS task, we augment the DAVIS-2017 benchmark dataset by assigning semantic ground truth to salient instance labels, obtaining the SEmantic Salient Instance Video (SESIV) dataset. Our SESIV dataset consists of 84 high-quality video sequences with pixel-wise per-frame ground-truth labels.
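The sequential fusion step described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's exact procedure: the function name, the largest-overlap-first ordering, and the 0.5 overlap threshold are assumptions. The idea is to keep each semantic instance whose mask sufficiently overlaps the salient region, claiming pixels one instance at a time so the resulting instances do not overlap.

```python
import numpy as np

def sequential_fusion(instance_masks, saliency_mask, overlap_thresh=0.5):
    """Keep instances that overlap the salient region; assign salient
    pixels to one instance at a time so outputs are non-overlapping."""
    remaining = saliency_mask.astype(bool).copy()
    fused = []
    # Process instances from largest salient overlap to smallest
    # (an assumed ordering heuristic).
    order = sorted(
        range(len(instance_masks)),
        key=lambda i: np.logical_and(instance_masks[i], remaining).sum(),
        reverse=True,
    )
    for i in order:
        mask = instance_masks[i].astype(bool)
        if mask.sum() == 0:
            continue
        inter = np.logical_and(mask, remaining)
        if inter.sum() / mask.sum() >= overlap_thresh:
            fused.append(inter)   # non-overlapping salient instance mask
            remaining &= ~inter   # remove pixels already claimed
    return fused
```

For example, with a saliency mask covering the left half of a frame, an instance lying inside that region is kept (clipped to unclaimed salient pixels), while an instance entirely outside it is discarded.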
Pages: 1779-1788
Page count: 10