Multi-column point-CNN for sketch segmentation

Cited by: 0
Authors
Wang F. [1 ]
Lin S. [2 ]
Li H. [3 ]
Wu H. [2 ,4 ]
Cai T. [5 ]
Luo X. [3 ]
Wang R. [2 ]
Affiliations
[1] Department of Computer Science, Shantou University, Shantou
[2] Sun Yat-sen University, Guangzhou
[3] Guilin University of Electronic Technology, Guilin
[4] Guangdong University of Foreign Studies, Guangzhou
[5] Shenzhen Institute of Information Technology, Shenzhen
Funding
National Natural Science Foundation of China
Keywords
Deep neural network; MCPNet; Sketch segmentation
DOI
10.1016/j.neucom.2019.12.117
Abstract
Traditional sketch segmentation methods mainly rely on handcrafted features and complicated models, and their performance is far from satisfactory due to the abstract representation of sketches. The recent success of Deep Neural Networks (DNNs) in related tasks suggests that DNNs could be a practical solution for this problem, yet suitable datasets for learning and evaluating DNNs are limited. To this end, we introduce SketchSeg, a large dataset consisting of 10,000 pixel-wise labeled sketches. Besides, due to the lack of colors and textures in sketches, conventional DNNs trained on natural images are not optimal for tackling our problem. Therefore, we further propose the Multi-column Point-CNN (MCPNet), which directly takes sampled points as its input to reduce computational costs, and adopts multiple columns with different filter sizes to better capture the structures of sketches. Extensive experiments validate that the MCPNet is superior to conventional DNNs like FCN. The SketchSeg dataset is publicly available on https://drive.google.com/open?id=1OpCBvkInhxvfAHuVs-spDEppb8iXFC3C. © 2020 Elsevier B.V.
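The multi-column idea described in the abstract can be sketched minimally as follows: each column convolves the same sequence of sampled points with a different filter size, and the per-point features from all columns are concatenated before a per-point classifier. All specifics here (kernel sizes, channel widths, class count, random untrained weights) are illustrative assumptions, not the paper's actual MCPNet configuration.

```python
import numpy as np

def relu_conv1d(x, w):
    """1-D 'same' convolution over a point sequence, followed by ReLU.
    x: (N, C_in) point features; w: (k, C_in, C_out) filter bank."""
    k = w.shape[0]
    pad = k // 2
    xp = np.pad(x, ((pad, k - 1 - pad), (0, 0)))
    out = np.stack([
        np.tensordot(xp[i:i + k], w, axes=([0, 1], [0, 1]))
        for i in range(x.shape[0])
    ])
    return np.maximum(out, 0.0)

def mcpnet_forward(points, kernel_sizes=(3, 5, 7), hidden=8, n_classes=4):
    """Run one column per kernel size over the sampled points, concatenate
    the per-point features, and predict a part label for every point."""
    rng = np.random.default_rng(0)  # random weights: an untrained sketch
    columns = [
        relu_conv1d(points, rng.standard_normal((k, points.shape[1], hidden)) * 0.1)
        for k in kernel_sizes
    ]
    feats = np.concatenate(columns, axis=1)    # (N, hidden * n_columns)
    w_out = rng.standard_normal((feats.shape[1], n_classes)) * 0.1
    return (feats @ w_out).argmax(axis=1)      # per-point part labels

points = np.random.default_rng(1).standard_normal((100, 2))  # 100 sampled 2-D points
labels = mcpnet_forward(points)
```

Operating on sampled points rather than the full raster image is what keeps the cost low; the differing kernel sizes let each column see stroke context at a different scale.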
Pages: 50-59
Page count: 9