Machine Learned Machines: Adaptive Co-optimization of Caches, Cores, and On-chip Network

Cited by: 0
Authors
Jain, Rahul [1]
Panda, Preeti Ranjan [1]
Subramoney, Sreenivas [2]
Affiliations
[1] Indian Inst Technol Delhi, Dept Comp Sci & Engn, New Delhi, India
[2] Intel Technol India Pvt Ltd, Microarchitecture Res Lab, Bangalore, Karnataka, India
Source
PROCEEDINGS OF THE 2016 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION (DATE) | 2016
Keywords
(none listed)
DOI
Not available
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Modern multicore architectures require runtime optimization techniques to address mismatches between the dynamic resource requirements of different processes and the resources actually allocated at runtime. Choosing among multiple optimizations at runtime is complex because their effects are non-additive, which makes the adaptiveness of machine learning techniques valuable. We present a novel method, Machine Learned Machines (MLM), which uses online Reinforcement Learning (RL) to perform dynamic partitioning of the last-level cache (LLC) together with dynamic voltage and frequency scaling (DVFS) of the core and uncore (interconnection network and LLC). We show that this co-optimization yields a much lower energy-delay product (EDP) than any of the techniques applied individually, with average improvements of 19.6% in EDP and 2.6% in execution time over the baseline.
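To illustrate the flavor of the approach described in the abstract, the following is a minimal, hypothetical sketch of an online RL loop that jointly selects a core DVFS level, an uncore DVFS level, and an LLC way allocation to minimize EDP. The tabular Q-learning agent, the discrete knob values, and the toy delay/power model are all illustrative assumptions, not the paper's actual implementation.

```python
import random
from collections import defaultdict

# Illustrative DVFS levels and LLC way allocations (assumptions, not from the paper).
CORE_FREQS = [1.0, 1.5, 2.0]   # core frequency in GHz
UNCORE_FREQS = [0.8, 1.2]      # uncore (interconnect + LLC) frequency in GHz
LLC_WAYS = [4, 8, 12]          # LLC ways granted to the monitored core

# Joint action space: the agent co-optimizes all three knobs at once,
# capturing their non-additive interaction through the combined reward.
ACTIONS = [(c, u, w) for c in CORE_FREQS for u in UNCORE_FREQS for w in LLC_WAYS]

def simulate_interval(core_f, uncore_f, ways):
    """Toy model of one execution interval: returns (delay, energy)."""
    delay = 1.0 / core_f + 0.5 / uncore_f + 2.0 / ways
    power = 0.5 * core_f ** 2 + 0.3 * uncore_f ** 2 + 0.05 * ways
    return delay, power * delay

def edp(core_f, uncore_f, ways):
    """Energy-delay product of one interval under the toy model."""
    delay, energy = simulate_interval(core_f, uncore_f, ways)
    return energy * delay

def q_learn(episodes=20000, eps=0.1, alpha=0.2, seed=0):
    """Stateless (bandit-style) epsilon-greedy Q-learning over joint actions."""
    rng = random.Random(seed)
    q = defaultdict(float)
    for _ in range(episodes):
        if rng.random() < eps:
            action = rng.choice(ACTIONS)               # explore
        else:
            action = max(ACTIONS, key=lambda a: q[a])  # exploit
        reward = -edp(*action)                       # lower EDP -> higher reward
        q[action] += alpha * (reward - q[action])    # incremental Q-update
    return max(ACTIONS, key=lambda a: q[a])

best = q_learn()
```

In the paper's setting the reward would be derived from hardware measurements over each execution interval and the state would capture workload phase behavior; here a single-state bandit loop stands in for that machinery.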
Pages: 253-256
Page count: 4