Using GPUs for machine learning algorithms

Cited by: 146
Authors
Steinkraus, D [1 ]
Buck, I [1 ]
Simard, PY [1 ]
Affiliation
[1] Microsoft Res, Redmond, WA 98056 USA
Source
EIGHTH INTERNATIONAL CONFERENCE ON DOCUMENT ANALYSIS AND RECOGNITION, VOLS 1 AND 2, PROCEEDINGS | 2005
DOI
10.1109/ICDAR.2005.251
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Using dedicated hardware to do machine learning typically ends up in disaster because of cost, obsolescence, and poor software. The popularization of Graphic Processing Units (GPUs), which are now available on every PC, provides an attractive alternative. We propose a generic 2-layer fully connected neural network GPU implementation which yields over 3X speedup for both training and testing with respect to a 3GHz P4 CPU.
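As a rough illustration of the workload the abstract describes (not the authors' GPU implementation), the core of a 2-layer fully connected network reduces to dense matrix products plus pointwise nonlinearities, which is exactly the data-parallel arithmetic that maps well to GPU hardware. The following NumPy sketch models that computation on the CPU; the layer sizes and the sigmoid activation are illustrative assumptions, not details taken from the paper.

import numpy as np

# Minimal sketch of the forward pass of a 2-layer fully connected
# network, the computation the paper accelerates on a GPU. Layer
# sizes and the sigmoid nonlinearity are illustrative assumptions.
rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 784, 256, 10
W1 = 0.01 * rng.standard_normal((n_hidden, n_in))
b1 = np.zeros(n_hidden)
W2 = 0.01 * rng.standard_normal((n_out, n_hidden))
b2 = np.zeros(n_out)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    # Each layer is a dense matrix-vector product plus bias followed
    # by a pointwise nonlinearity: data-parallel arithmetic that GPUs
    # execute efficiently.
    hidden = sigmoid(W1 @ x + b1)
    return sigmoid(W2 @ hidden + b2)

x = rng.standard_normal(n_in)  # one dummy input vector
print(forward(x).shape)        # -> (10,)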
Pages: 1115-1120
Number of pages: 6