A Survey of Machine Learning for Computer Architecture and Systems

Cited by: 27
Authors
Wu, Nan [1 ]
Xie, Yuan [1 ]
Affiliations
[1] Univ Calif Santa Barbara, Dept Elect & Comp Engn, Santa Barbara, CA 93106 USA
Keywords
Machine learning for computer architecture; machine learning for systems; neural networks; design space; performance; prediction; model; classification; accurate; cloud
DOI
10.1145/3494523
Chinese Library Classification (CLC)
TP301 [Theory and Methods]
Discipline classification code
081202
Abstract
Computer architecture and systems have long been optimized for the efficient execution of machine learning (ML) models. It is now time to reconsider the relationship between ML and systems and to let ML transform the way computer architecture and systems are designed. This carries a twofold meaning: improving designers' productivity and completing the virtuous cycle. In this article, we present a comprehensive review of work that applies ML to computer architecture and system design. First, we give a high-level taxonomy based on the typical role that ML techniques play in architecture/system design, i.e., either for fast predictive modeling or as the design methodology. Then, we summarize the common problems in computer architecture/system design that can be solved by ML techniques and the ML techniques typically employed to resolve each of them. In addition to emphasizing computer architecture in the narrow sense, we adopt the view that data centers can be recognized as warehouse-scale computers; we briefly discuss adjacent computer systems, such as code generation and compilers, and we also consider how ML techniques can aid and transform design automation. We further provide a vision of future opportunities and potential directions, and we envision that applying ML to computer architecture and systems will thrive in the community.
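To make the taxonomy's first role concrete, the following minimal sketch shows ML used as a fast predictive model: a regression model is trained on a small number of "simulated" microarchitecture design points and then ranks many unsimulated configurations, standing in for slow cycle-accurate simulation. The design-space parameters, the synthetic performance function, and the random-forest model are illustrative assumptions, not techniques prescribed by the survey.

# Illustrative sketch (assumptions, not from the survey): ML as a fast
# predictive model that substitutes for expensive cycle-accurate simulation
# during design-space exploration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical design-space parameters: cores, L2 size (KB), issue width, clock (GHz).
def sample_configs(n):
    return np.column_stack([
        rng.integers(1, 17, n),                  # core count
        rng.choice([256, 512, 1024, 2048], n),   # L2 size in KB
        rng.integers(1, 9, n),                   # issue width
        rng.uniform(1.0, 4.0, n),                # clock frequency in GHz
    ])

# Stand-in for a simulator: a synthetic, noisy "performance" score.
def fake_simulate(x):
    cores, l2, width, ghz = x[:, 0], x[:, 1], x[:, 2], x[:, 3]
    return ghz * np.log2(cores + 1) * np.sqrt(width) + 0.002 * l2 + rng.normal(0, 0.5, len(x))

# Train the surrogate on a small budget of "simulated" design points.
X_train = sample_configs(200)
y_train = fake_simulate(X_train)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

# Cheaply rank a large pool of unsimulated configurations with the surrogate.
X_candidates = sample_configs(10_000)
scores = model.predict(X_candidates)
top5 = X_candidates[np.argsort(scores)[-5:]]
print("Top-5 predicted configurations (cores, L2 KB, width, GHz):")
print(top5)

In the survey's framing, such a surrogate covers the fast-predictive-modeling role; the second role, ML as the design methodology, would instead have a learned agent (e.g., a reinforcement-learning policy) propose the configurations itself.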
Pages: 39