A fully convolutional two-stream fusion network for interactive image segmentation

Cited by: 74
Authors
Hu, Yang [1 ]
Soltoggio, Andrea [1 ]
Lock, Russell [1 ]
Carter, Steve [2 ]
Affiliations
[1] Loughborough Univ, Loughborough, Leics, England
[2] ICE Agcy, Poole, Dorset, England
Funding
Innovate UK project;
Keywords
Interactive image segmentation; Fully convolutional network; Two-stream network;
DOI
10.1016/j.neunet.2018.10.009
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104; 0812; 0835; 1405;
Abstract
In this paper, we propose a novel fully convolutional two-stream fusion network (FCTSFN) for interactive image segmentation. The proposed network includes two sub-networks: a two-stream late fusion network (TSLFN) that predicts the foreground at a reduced resolution, and a multi-scale refining network (MSRN) that refines the foreground at full resolution. The TSLFN includes two distinct deep streams followed by a fusion network. The intuition is that, since user interactions convey more direct information on foreground/background than the image itself, the two-stream structure of the TSLFN reduces the number of layers between the pure user-interaction features and the network output, allowing the user interactions to have a more direct impact on the segmentation result. The MSRN fuses features from different layers of the TSLFN at different scales, in order to exploit local-to-global information on the foreground and refine the segmentation result at full resolution. We conduct comprehensive experiments on four benchmark datasets. The results show that the proposed network achieves competitive performance compared to current state-of-the-art interactive image segmentation methods. (C) 2018 Elsevier Ltd. All rights reserved.
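The two-stream late-fusion idea described in the abstract can be sketched in plain Python. This is an illustrative toy only, not the authors' FCTSFN implementation: single-channel maps, one 3x3 kernel per stream, and a scalar-weighted fusion layer are all simplifying assumptions. The point it shows is structural: each stream convolves its own input (image vs. user-interaction map), and the streams meet only at the end, so the interaction features sit close to the output.

```python
import math
import random

def conv2d(x, w):
    """Valid 2-D convolution of a single-channel map with one kernel."""
    kh, kw = len(w), len(w[0])
    H, W = len(x), len(x[0])
    return [[sum(x[i + a][j + b] * w[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(W - kw + 1)]
            for i in range(H - kh + 1)]

def relu(fmap):
    return [[max(v, 0.0) for v in row] for row in fmap]

def two_stream_late_fusion(image, interactions, w_img, w_int, a, b):
    """Each stream processes its own input separately; the streams are
    combined only at the end (late fusion), here by a weighted sum
    followed by a sigmoid giving a per-pixel foreground probability."""
    f_img = relu(conv2d(image, w_img))          # image stream
    f_int = relu(conv2d(interactions, w_int))   # user-interaction stream
    return [[1.0 / (1.0 + math.exp(-(a * u + b * v)))
             for u, v in zip(r1, r2)]
            for r1, r2 in zip(f_img, f_int)]

# Toy usage: an 8x8 "image" and an 8x8 interaction (click-distance) map.
random.seed(0)
image = [[random.random() for _ in range(8)] for _ in range(8)]
clicks = [[random.random() for _ in range(8)] for _ in range(8)]
k = [[0.1] * 3 for _ in range(3)]               # shared toy 3x3 kernel
prob = two_stream_late_fusion(image, clicks, k, k, 1.0, 1.0)
# prob is a 6x6 map of foreground probabilities in (0, 1)
```

In the real network each stream is a deep convolutional stack and fusion is learned, but the late-fusion topology is the same: interaction features bypass most of the image-stream depth before reaching the output.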
Pages: 31-42 (12 pages)