DINGO: Distributed Newton-Type Method for Gradient-Norm Optimization

Cited by: 0
Authors
Crane, Rixon [1 ]
Roosta, Fred [1 ]
Affiliations
[1] Univ Queensland, Brisbane, Qld, Australia
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019) | 2019 / Vol. 32
Funding
Australian Research Council;
Keywords
STOCHASTIC ALGORITHMS;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
For optimization of a large sum of functions in a distributed computing environment, we present a novel communication-efficient Newton-type algorithm that enjoys a variety of advantages over similar existing methods. Our algorithm, DINGO, is derived by optimizing the norm of the gradient as a surrogate function. DINGO does not impose any specific form on the underlying functions, and its application range extends far beyond convexity and smoothness. The underlying sub-problems of DINGO are simple linear least-squares, for which a plethora of efficient algorithms exist. DINGO involves a few hyper-parameters that are easy to tune, and we theoretically show that a strict reduction in the surrogate objective is guaranteed regardless of the selected hyper-parameters.
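The following minimal sketch illustrates the gradient-norm-surrogate idea described in the abstract; the symbols ($f_i$, $H_i$, $p_i$) and the plain averaging step are generic assumptions for illustration, not DINGO's exact case-based update rule. With $f(w) = \frac{1}{m}\sum_{i=1}^{m} f_i(w)$ split across $m$ workers, the surrogate objective is

\[
\phi(w) \;=\; \tfrac{1}{2}\,\bigl\|\nabla f(w)\bigr\|^{2},
\]

and a Newton-type direction can be formed from simple per-worker linear least-squares sub-problems such as

\[
p_i \;\in\; \operatorname*{arg\,min}_{p}\;\bigl\|H_i\,p + \nabla f(w)\bigr\|^{2},
\qquad H_i \;=\; \nabla^{2} f_i(w),
\]

with the driver combining the local directions, e.g. $p = \frac{1}{m}\sum_{i=1}^{m} p_i$, and then choosing a step size by line search on $\phi$. Because each sub-problem is a linear least-squares problem, it can be solved with any standard direct or iterative least-squares solver, which is the source of the flexibility noted in the abstract.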
Pages: 11