SPFUSIONNET: SKETCH SEGMENTATION USING MULTI-MODAL DATA FUSION

Cited by: 15
|
Authors
Wang, Fei [1 ]
Lin, Shujin [4 ]
Wu, Hefeng [2 ]
Li, Hanhui [3 ]
Wang, Ruomei [1 ]
Luo, Xiaonan [3 ]
He, Xiangjian [5 ]
Affiliations
[1] Sun Yat Sen Univ, Natl Engn Res Ctr Digital Life, Guangzhou, Guangdong, Peoples R China
[2] Guangdong Univ Foreign Studies, Sch Informat Sci & Technol, Guangzhou, Guangdong, Peoples R China
[3] Guilin Univ Elect Technol, Sch Comp Sci & Informat Secur, Guilin, Peoples R China
[4] Sun Yat Sen Univ, Sch Commun & Design, Guangzhou, Guangdong, Peoples R China
[5] Univ Technol Sydney, Global Big Data Technol Ctr, Sydney, NSW, Australia
Source
2019 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME) | 2019
Funding
National Natural Science Foundation of China;
Keywords
sketch segmentation; multi-modal fusion; deep neural network;
DOI
10.1109/ICME.2019.00285
Chinese Library Classification
TP31 [Computer Software];
Discipline Codes
081202 ; 0835 ;
Abstract
The sketch segmentation problem remains largely unsolved because conventional methods are greatly challenged by the highly abstract appearances of freehand sketches and their numerous shape variations. In this work, we tackle such challenges by exploiting different modes of sketch data in a unified framework. Specifically, we propose a deep neural network, SPFusionNet, to capture the characteristics of a sketch by fusing its image and point-set modes. The image-modal component SketchNet learns hierarchical, robust features and utilizes multi-level representations to produce pixel-wise feature maps, while the point set-modal component SPointNet captures local and global contexts of the sampled point set to produce point-wise feature maps. Our framework then aggregates these feature maps with a fusion network component to generate the sketch segmentation result. Extensive experimental evaluation and comparison with peer methods on our large SketchSeg dataset verify the effectiveness of the proposed framework.
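The fusion idea in the abstract can be illustrated with a minimal sketch: pixel-wise feature maps from an image branch and point-wise features scattered onto the same grid are concatenated channel-wise, then mixed by a 1x1 convolution (a per-pixel linear layer) before per-pixel classification. All shapes, names, and the concatenation-plus-1x1-conv fusion here are illustrative assumptions, not the paper's actual SketchNet/SPointNet/fusion-network architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 8, 8           # feature-map resolution
C_IMG, C_PTS = 16, 8  # channels from the image and point-set branches
N_CLASSES = 4         # stroke-label categories

# Stand-in for the image branch's pixel-wise feature map.
img_feats = rng.standard_normal((H, W, C_IMG))

# Stand-in for the point branch: each sampled point carries (y, x)
# grid coordinates and a learned feature vector.
pts_xy = rng.integers(0, H, size=(32, 2))
pts_feats = rng.standard_normal((32, C_PTS))

# Scatter point features onto the pixel grid (mean over points per cell),
# so both modalities live on the same spatial lattice.
pt_grid = np.zeros((H, W, C_PTS))
counts = np.zeros((H, W, 1))
for (y, x), f in zip(pts_xy, pts_feats):
    pt_grid[y, x] += f
    counts[y, x] += 1
pt_grid = pt_grid / np.maximum(counts, 1)

# Fuse by channel concatenation; a 1x1 conv is a per-pixel matmul.
fused = np.concatenate([img_feats, pt_grid], axis=-1)  # (H, W, C_IMG+C_PTS)
w = rng.standard_normal((C_IMG + C_PTS, N_CLASSES))
logits = fused @ w                                     # (H, W, N_CLASSES)
labels = logits.argmax(axis=-1)                        # per-pixel segment id
print(labels.shape)
```

The per-pixel argmax stands in for the segmentation head; in a trained network the 1x1 mixing weights would be learned rather than random.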
Pages: 1654-1659
Page count: 6