Scaling Multinomial Logistic Regression via Hybrid Parallelism

Cited by: 3
Authors
Raman, Parameswaran [1 ]
Srinivasan, Sriram [1 ]
Matsushima, Shin [2 ]
Zhang, Xinhua [3 ]
Yun, Hyokun [4 ]
Vishwanathan, S. V. N. [4 ]
Affiliations
[1] Univ Calif Santa Cruz, Santa Cruz, CA 95064 USA
[2] Univ Tokyo, Tokyo, Japan
[3] Univ Illinois, Chicago, IL USA
[4] Amazon, Seattle, WA USA
Source
KDD'19: PROCEEDINGS OF THE 25TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING | 2019
Funding
U.S. National Science Foundation;
Keywords
Multinomial Logistic Regression; Stochastic Optimization; Large Scale Machine Learning;
DOI
10.1145/3292500.3330837
CLC Classification
TP [Automation Technology, Computer Technology];
Subject Classification
0812;
Abstract
We study the problem of scaling Multinomial Logistic Regression (MLR) to datasets with a very large number of data points and a very large number of classes. At a scale where neither the data nor the parameters fit on a single machine, we argue that simultaneous data and model parallelism (Hybrid Parallelism) is inevitable. The key challenge to achieving this form of parallelism in MLR is the log-partition function, which must be computed across all K classes for every data point, making model parallelism non-trivial. To overcome this problem, we propose a reformulation of the original objective that exploits double-separability, an attractive property that naturally leads to hybrid parallelism. Our algorithm (DS-MLR) is asynchronous and completely decentralized, requiring minimal communication across workers while keeping both data and parameter workloads partitioned. Unlike standard data-parallel approaches, DS-MLR avoids bulk synchronization by maintaining local normalization terms on each worker and accumulating them incrementally over a token-ring topology. We demonstrate the versatility of DS-MLR under various data- and model-parallel scenarios through an empirical study on real-world datasets. In particular, to demonstrate scaling via hybrid parallelism, we created a new benchmark dataset (Reddit-Full) by pre-processing 1.7 billion Reddit user comments spanning the period 2007-2015. We used DS-MLR to solve an extreme multi-class classification(1) problem of classifying 211 million data points into their corresponding subreddits. Reddit-Full is a massive dataset, with the data occupying 228 GB and the 44 billion parameters occupying 358 GB. To the best of our knowledge, no other existing method can handle MLR in this setting.
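The reformulation sketched in the abstract can be made concrete. As a minimal illustration (our reconstruction, not text from the paper), the standard L2-regularized MLR objective reads:

\min_{W}\ \frac{\lambda}{2}\sum_{k=1}^{K}\lVert w_k\rVert^{2} + \frac{1}{N}\sum_{i=1}^{N}\Big[-w_{y_i}^{\top}x_i + \log\sum_{k=1}^{K}\exp\big(w_k^{\top}x_i\big)\Big]

The log-partition term couples all K classes. Assuming the reformulation uses the variational identity \log z = \min_{a>0}\,(az - \log a - 1) with one auxiliary variable a_i per data point (our assumption about the exact form), the objective becomes a double sum in which each term touches only one data point i and one class weight w_k, i.e., it is doubly separable:

\min_{W,\,a>0}\ \sum_{i=1}^{N}\sum_{k=1}^{K}\Big[\frac{\lambda}{2N}\lVert w_k\rVert^{2} - \frac{\delta_{k,y_i}}{N}\,w_k^{\top}x_i + \frac{a_i}{N}\exp\big(w_k^{\top}x_i\big) - \frac{\log a_i + 1}{NK}\Big]

The token-ring accumulation of local normalization terms can likewise be pictured with a toy, single-process Python simulation. Worker and ring_pass are hypothetical names, each worker here holds two of the classes for a single data point, and the real system runs this asynchronously across machines:

import math

class Worker:
    """Holds the class weights for one shard of the K classes."""
    def __init__(self, scores):
        # scores holds w_k . x_i for the classes k stored on this worker,
        # evaluated at one data point x_i (precomputed here for brevity).
        self.scores = scores
        self.local_sum = sum(math.exp(s) for s in self.scores)

def ring_pass(workers):
    """One trip of the token around the ring: each worker adds its local
    partial sum to the running total it received from its neighbor."""
    token = 0.0
    for w in workers:  # the token visits each worker exactly once
        token += w.local_sum
    return token       # full partition function Z_i = sum_k exp(w_k . x_i)

# Toy setup: 8 classes split across 4 workers, 2 classes each.
all_scores = [0.5, -1.2, 2.0, 0.3, -0.7, 1.1, 0.0, -2.4]
workers = [Worker(all_scores[j:j + 2]) for j in range(0, 8, 2)]

Z_ring = ring_pass(workers)
Z_direct = sum(math.exp(s) for s in all_scores)
assert abs(Z_ring - Z_direct) < 1e-12
print(f"log-partition via ring accumulation: {math.log(Z_ring):.6f}")

No worker ever needs the class weights held by another worker; only the scalar token circulates, which is why bulk synchronization can be avoided.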
Pages: 1460-1470
Number of pages: 11