19 references in total
[1] Tseng P., Approximation accuracy, gradient methods, and error bound for structured convex optimization, Mathematical Programming, 125, 2, pp. 263-295, (2010)
[2] Nemirovski A., Juditsky A., Lan G., Shapiro A., Robust stochastic approximation approach to stochastic programming, SIAM Journal on Optimization, 19, 4, pp. 1574-1609, (2009)
[3] Shalev-Shwartz S., Tewari A., Stochastic methods for L1 regularized loss minimization, Proc. of the 26th Annual Int'l Conf. on Machine Learning, pp. 929-936, (2009)
[4] Johnson R., Zhang T., Accelerating stochastic gradient descent using predictive variance reduction, Advances in Neural Information Processing Systems, 26, pp. 315-323, (2013)
[5] Shalev-Shwartz S., Zhang T., Stochastic dual coordinate ascent methods for regularized loss minimization, (2012)
[6] Le Roux N., Schmidt M., Bach F., A stochastic gradient method with an exponential convergence rate for strongly convex optimization with finite training sets, (2012)
[7] Xiao L., Dual averaging methods for regularized stochastic learning and online optimization, Advances in Neural Information Processing Systems, pp. 2116-2124, (2009)
[8] Xiao L., Zhang T., A proximal stochastic gradient method with progressive variance reduction, (2014)
[9] Duchi J., Shalev-Shwartz S., Singer Y., Tewari A., Composite objective mirror descent, Proc. of the 23rd Annual Conf. on Learning Theory, pp. 116-128, (2010)
[10] Duchi J., Shalev-Shwartz S., Singer Y., Efficient projections onto the L1-ball for learning in high dimensions, Proc. of the 25th Int'l Conf. on Machine Learning, pp. 272-279, (2008)