Quantized incremental algorithms for distributed optimization

Cited by: 281
Authors
Rabbat, MG [1 ]
Nowak, RD [1 ]
Affiliation
[1] Univ Wisconsin, Dept Elect & Comp Engn, Madison, WI 53706 USA
Funding
US National Science Foundation;
Keywords
distributed algorithms; energy-accuracy tradeoff; gradient methods; wireless sensor networks;
DOI
10.1109/JSAC.2005.843546
CLC classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Subject classification codes
0808 ; 0809 ;
Abstract
Wireless sensor networks are capable of collecting an enormous amount of data. Often, the ultimate objective is to estimate a parameter or function from these data, and such estimators are typically the solution of an optimization problem (e.g., maximum likelihood, minimum mean-squared error, or maximum a posteriori). This paper investigates a general class of distributed optimization algorithms for "in-network" data processing, aimed at reducing the amount of energy and bandwidth used for communication. Our intuition tells us that processing the data in-network should, in general, require less energy than transmitting all of the data to a fusion center. In this paper, we address the questions: When, in fact, does in-network processing use less energy, and how much energy is saved? The proposed distributed algorithms are based on incremental optimization methods. A parameter estimate is circulated through the network, and along the way each node makes a small gradient descent-like adjustment to the estimate based only on its local data. Applying results from the theory of incremental subgradient optimization, we find that the distributed algorithms converge to an approximate solution for a broad class of problems. We extend these results to the case where the optimization variable is quantized before being transmitted to the next node and find that quantization does not affect the rate of convergence. Bounds on the number of incremental steps required for a certain level of accuracy provide insight into the tradeoff between estimation performance and communication overhead. Our main conclusion is that as the number of sensors in the network grows, in-network processing will always use less energy than a centralized algorithm, while maintaining a desired level of accuracy.
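The scheme described in the abstract can be sketched in a few lines: an estimate is passed from node to node, each node applies a gradient step using only its local cost, and the estimate is uniformly quantized before being handed to the next node. The minimal example below is an illustration, not the authors' implementation; the quadratic local costs, step size, and quantization resolution are all assumptions chosen for clarity.

```python
def quantize(x, delta):
    # Uniform quantizer with resolution delta (assumed midpoint rounding),
    # modeling the finite-precision value transmitted between nodes.
    return round(x / delta) * delta

def quantized_incremental_gradient(local_data, step, delta, passes):
    # Hypothetical setup: node i holds a scalar measurement y_i and the
    # local cost f_i(x) = (x - y_i)^2 / 2, so the global minimizer of
    # sum_i f_i is the mean of the y_i.
    x = 0.0
    for _ in range(passes):
        for y in local_data:            # estimate circulates through the network
            grad = x - y                # gradient of the local cost at node i
            x = quantize(x - step * grad, delta)  # quantize before forwarding
    return x

data = [1.0, 2.0, 3.0, 6.0]             # one measurement per sensor; mean = 3.0
est = quantized_incremental_gradient(data, step=0.1, delta=0.01, passes=200)
```

With a constant step size and a fixed quantization grid, the estimate settles into a small neighborhood of the true minimizer rather than converging exactly, which mirrors the paper's "approximate solution" guarantee for incremental subgradient methods.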
Pages: 798-808
Page count: 11