Applying Fast Matrix Multiplication to Neural Networks

Cited: 5
Authors
Khaled, Ahmed [1]
Atiya, Amir F. [1]
Abdel-Gawad, Ahmed H. [1]
Affiliations
[1] Cairo University, Giza, Egypt
Source
PROCEEDINGS OF THE 35TH ANNUAL ACM SYMPOSIUM ON APPLIED COMPUTING (SAC'20) | 2020
Keywords
Strassen's algorithm; Winograd's algorithm; GPU matrix multiplication; fast matrix multiplication; neural networks
DOI
10.1145/3341105.3373852
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Recent advances in deep neural networks have enabled impressive performance in computer vision, natural language processing, and other fields, yet these networks remain computationally very intensive to train and use. We consider the use of Winograd's algorithm for fast matrix multiplication in feedforward neural networks and find that speedups of 10-30% are possible for the fully connected layers of large networks.
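The record contains no code, but the idea behind Winograd's 1968 inner-product algorithm (the "Winograd's algorithm" named in the keywords) is compact enough to sketch. The following is a minimal NumPy illustration, not the authors' implementation: the layer and batch sizes are invented for the example, and a plain Python loop like this will not itself outrun an optimized BLAS matmul; it only shows where the multiplication savings come from.

```python
import numpy as np

def winograd_matmul(A, B):
    """Multiply A (m x n) by B (n x p) with Winograd's 1968 inner-product
    scheme.  Pair terms are precomputed once per row of A and once per
    column of B, roughly halving the multiplications in the main sum at
    the cost of extra additions."""
    m, n = A.shape
    _, p = B.shape
    half = n // 2

    # xi[i]  = sum_j A[i, 2j] * A[i, 2j+1]   -- one term per row of A
    xi = np.einsum('ij,ij->i', A[:, 0:2 * half:2], A[:, 1:2 * half:2])
    # eta[k] = sum_j B[2j, k] * B[2j+1, k]   -- one term per column of B
    eta = np.einsum('jk,jk->k', B[0:2 * half:2, :], B[1:2 * half:2, :])

    # Identity: (a0 + b1)(a1 + b0) - a0*a1 - b0*b1 = a0*b0 + a1*b1,
    # applied pairwise along the inner dimension.
    C = -(xi[:, None] + eta[None, :])
    for j in range(half):
        C += (A[:, 2 * j, None] + B[2 * j + 1, None, :]) * (
            A[:, 2 * j + 1, None] + B[2 * j, None, :]
        )
    if n % 2:  # odd inner dimension: the unpaired last term is added directly
        C += np.outer(A[:, -1], B[-1, :])
    return C

# Quick check against the ordinary product, shaped like a fully connected
# layer W @ X (512 units, 784 inputs, batch of 64 -- sizes are assumptions,
# not taken from the paper).
rng = np.random.default_rng(0)
W = rng.standard_normal((512, 784))
X = rng.standard_normal((784, 64))
assert np.allclose(winograd_matmul(W, X), W @ X)
```

Because the xi terms depend only on the weight matrix, they can be computed once and reused across every input batch in a fully connected layer, which is presumably what makes such layers a good fit for the 10-30% speedups the abstract reports.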
Pages: 1034-1037
Number of pages: 4