Semantic Image Matting: General and Specific Semantics

Citations: 0
Authors
Yanan Sun
Chi-Keung Tang
Yu-Wing Tai
Affiliations
[1] HKUST
[2] Dartmouth College
Source
International Journal of Computer Vision | 2024, Vol. 132
Keywords
Image matting; Semantics; Classification; Class-specific matting
Abstract
Although the conventional matting formulation can separate foreground from background under fractional occupancy, which can be caused by highly transparent objects, complex foregrounds (e.g., nets or trees), and objects containing very fine details (e.g., hair), no previous work has attempted to reason about the underlying causes of matting arising from various foreground semantics in general. We show how to obtain better alpha mattes by incorporating semantic classification of matting regions into our framework. Specifically, we consider and learn 20 classes of general matting patterns, and propose to extend the conventional trimap to a semantic trimap. The proposed semantic trimap can be obtained automatically through patch structure analysis within trimap regions. Meanwhile, we learn a multi-class discriminator to regularize the alpha prediction at the semantic level, and content-sensitive weights to balance the different regularization losses. Experiments on multiple benchmarks show that our method, benefiting from such general alpha semantics, outperforms other methods and achieves state-of-the-art performance. We further explore the effectiveness of our method on specific semantics by specializing it to human matting and transparent object matting. Experimental results on specific semantics demonstrate that alpha matte semantic information can boost performance not only for general matting but also for class-specific matting. Finally, we contribute a large-scale Semantic Image Matting Dataset constructed with careful consideration of data balancing across different semantic classes. Code and dataset are available at https://github.com/nowsyn/SIM.
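The semantic-trimap idea described above can be illustrated with a minimal sketch, assuming NumPy. The `classify_patch` function below is a trivial stand-in for the paper's learned patch-structure classifier, and the patch size and trimap encoding (0 = background, 128 = unknown, 255 = foreground) are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

NUM_CLASSES = 20          # number of general matting patterns in the paper
BG, UNKNOWN, FG = 0, 128, 255  # assumed trimap encoding

def classify_patch(patch):
    """Stand-in for the learned patch-structure classifier.
    Here: a trivial heuristic on patch variance, purely illustrative."""
    return int(patch.var()) % NUM_CLASSES

def semantic_trimap(trimap, image, patch=16):
    """Extend a conventional trimap to a 'semantic trimap':
    each unknown pixel additionally carries one of NUM_CLASSES
    semantic labels; known pixels get -1."""
    h, w = trimap.shape
    labels = np.full((h, w), -1, dtype=np.int32)
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            region = trimap[y:y+patch, x:x+patch]
            if (region == UNKNOWN).any():
                cls = classify_patch(image[y:y+patch, x:x+patch])
                labels[y:y+patch, x:x+patch][region == UNKNOWN] = cls
    return labels
```

The resulting label map could then condition an alpha-prediction network alongside the original trimap, which is the role the semantic trimap plays in the framework described in the abstract.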
Pages: 710–730 (20 pages)