Parallel approaches to machine learning - A comprehensive survey

Cited by: 51
Authors
Upadhyaya, Sujatha R. [1 ]
Institutions
[1] Infosys Technol, Bangalore, Karnataka, India
Keywords
Distributed and parallel machine learning; GPU; MapReduce
DOI
10.1016/j.jpdc.2012.11.001
CLC Classification
TP301 [Theory, Methods]
Discipline Code
081202
Abstract
The literature has long seen efforts to use parallel algorithms and parallel architectures to improve performance, and machine learning is no exception; considerable effort has gone into this area over the past fifteen years. This report brings together and consolidates such attempts. It tracks developments since the inception of the idea in 1995, identifies distinct phases during the period 1995-2011, and marks important achievements. Where performance enhancement is concerned, GPU platforms have carved out a special niche: their strength comes from the ability to speed up computations dramatically through parallel architectures and programming methods. While computationally intensive workloads such as image processing and gaming clearly stand to gain from parallel architectures, studies suggest that general-purpose tasks such as machine learning, graph traversal, and finite state machines are also among the parallel applications of the future. MapReduce is another important technique that evolved during this period and, as the literature shows, it has proved an important aid in delivering the performance of machine learning algorithms on GPUs. The report presents this path of developments in summary form. (C) 2012 Elsevier Inc. All rights reserved.
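The MapReduce pattern the abstract mentions fits machine learning well because many learning algorithms reduce to summing a statistic over independent data partitions. A minimal illustrative sketch (not code from the survey; the function names and the mean-computation example are assumptions for illustration) of that map-then-reduce shape:

```python
from functools import reduce

def mapper(partition):
    """Map step: emit a partial result (sum, count) for one data partition.

    In a real system each partition would be processed on a separate
    worker (or GPU); here the shape of the computation is what matters.
    """
    return (sum(partition), len(partition))

def reducer(a, b):
    """Reduce step: combine two partial (sum, count) results."""
    return (a[0] + b[0], a[1] + b[1])

def distributed_mean(partitions):
    """Compute a global mean from per-partition partial sums."""
    total, count = reduce(reducer, map(mapper, partitions))
    return total / count

# Example: two partitions that could live on different workers.
partitions = [[1.0, 2.0], [3.0, 4.0, 5.0]]
print(distributed_mean(partitions))  # 3.0
```

Because the reduce operation is associative and commutative, partial results can be combined in any order, which is what lets frameworks parallelize the same computation across many workers without changing the result.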
Pages: 284-292 (9 pages)