Robust surface reconstruction from highly noisy point clouds using distributed elastic networks

Author
Zhenghua Zhou
Affiliation
[1] China Jiliang University, Department of Information Sciences and Mathematics
Source
Neural Computing and Applications | 2020, Vol. 32
Keywords
Surface reconstruction; Random weights network; Elastic regularization; Sparsity; Distributed ADMM;
Abstract
In this paper, a novel distributed elastic random weights network (DERWN) is proposed to achieve robust surface reconstruction from highly noisy point clouds sampled from a real surface. The designed elastic regularization with $l_1$ and $l_2$ penalty terms makes the network more resilient to noise and helps it capture the intrinsic shape of the surface effectively. Sparsity constraints on the output weight vectors, together with threshold-based node removal, help determine an appropriate number of hidden nodes and optimize their distribution. The distributed optimization in DERWN, based on the alternating direction method of multipliers (ADMM), overcomes the memory limitation that traditional RWN learning algorithms suffer from with large-scale data. The proposed DERWN reaches the solution of the global problem by coordinately solving local subproblems. Experimental results show that the proposed DERWN algorithm can robustly reconstruct the unknown surface from highly noisy data with satisfactory accuracy and smoothness.
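The elastic regularization described above places both $l_1$ (sparsity-inducing) and $l_2$ (ridge) penalties on the output weights of a random weights network. The abstract does not give the paper's exact formulation or hyperparameters, so the following is only a minimal sketch of that general idea: a single-node fit of elastic-net-penalized output weights via proximal gradient descent, standing in for the paper's distributed ADMM solver. All function names, activation choices, and parameter values are assumptions for illustration.

```python
import numpy as np

def rwn_hidden(X, W, b):
    """Hidden-layer output of a random weights network (sigmoid activation)."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def fit_elastic_rwn(X, y, n_hidden=50, lam1=1e-4, lam2=1e-4,
                    n_iter=3000, seed=0):
    """Fit output weights beta by minimizing
        ||H beta - y||^2 + lam1 * ||beta||_1 + lam2 * ||beta||_2^2
    with proximal gradient descent (ISTA). Input weights W and biases b
    are drawn at random and kept fixed, as in random weights networks.
    Hyperparameter values are illustrative, not the paper's."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = rwn_hidden(X, W, b)
    beta = np.zeros(n_hidden)
    # Step size from a Lipschitz bound on the smooth part of the objective.
    L = 2.0 * np.linalg.norm(H, 2) ** 2 + 2.0 * lam2
    step = 1.0 / L
    for _ in range(n_iter):
        grad = 2.0 * H.T @ (H @ beta - y) + 2.0 * lam2 * beta
        z = beta - step * grad
        # Soft-thresholding: the proximal operator of the l1 penalty.
        beta = np.sign(z) * np.maximum(np.abs(z) - step * lam1, 0.0)
    return W, b, beta

def predict(X, W, b, beta):
    return rwn_hidden(X, W, b) @ beta
```

The soft-thresholding step is what drives some output weights exactly to zero; in the paper's setting, such zeroed weights correspond to the hidden nodes that threshold-based removal would prune.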
Pages: 14459–14470 (11 pages)