FastTuner: Transferable Physical Design Parameter Optimization using Fast Reinforcement Learning

Cited by: 1
Authors
Hsiao, Hao-Hsiang [1 ]
Lu, Yi-Chen [1 ]
Vanna-Iampikul, Pruek [1 ]
Lim, Sung Kyu [1 ]
Affiliations
[1] Georgia Inst Technol, Atlanta, GA 30332 USA
Source
PROCEEDINGS OF THE 2024 INTERNATIONAL SYMPOSIUM ON PHYSICAL DESIGN, ISPD 2024 | 2024
Funding
U.S. National Science Foundation (NSF)
Keywords
Physical Design; Reinforcement Learning
DOI
10.1145/3626184.3633328
Chinese Library Classification
TP3 (Computing Technology; Computer Technology)
Discipline Code
0812
Abstract
Current state-of-the-art Design Space Exploration (DSE) methods in Physical Design (PD), including Bayesian Optimization (BO) and Ant Colony Optimization (ACO), mainly rely on black-box rather than parametric (e.g., neural network) approaches to improve end-of-flow Power, Performance, and Area (PPA) metrics; these often fail to generalize across unseen designs because netlist features are not properly leveraged. To overcome this issue, in this paper we develop a Reinforcement Learning (RL) agent that leverages Graph Neural Networks (GNNs) and Transformers to perform "fast" DSE on unseen designs by sequentially encoding netlist features across different PD stages. In particular, an attention-based encoder-decoder framework is devised for "conditional" parameter tuning, and a PPA estimator is introduced to predict end-of-flow PPA metrics for RL reward estimation. Extensive studies across 7 industrial designs under the TSMC 28nm technology node demonstrate that the proposed framework, FastTuner, significantly outperforms existing state-of-the-art DSE techniques in both optimization quality and runtime, with improvements of up to 79.38% in Total Negative Slack (TNS), 12.22% in total power, and a 50x speedup in runtime.
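
To make the architecture described in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of the conditional-tuning loop it outlines: a netlist encoder (standing in for the paper's GNN), an attention-based decoder that selects PD parameters one at a time so each choice is conditioned on the design embedding and earlier choices, and a learned PPA estimator used as the RL reward. All class and function names (NetlistEncoder, ConditionalDecoder, PPAEstimator, reinforce_step) are illustrative assumptions, not the authors' code, and REINFORCE is used here only as the simplest policy-gradient stand-in.

import torch
import torch.nn as nn

class NetlistEncoder(nn.Module):
    # Toy stand-in for the paper's GNN encoder: a mean-pooled MLP over
    # node features (a real version would use graph message passing).
    def __init__(self, node_dim, hid_dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(node_dim, hid_dim), nn.ReLU(),
                                 nn.Linear(hid_dim, hid_dim))

    def forward(self, node_feats):               # (num_nodes, node_dim)
        return self.mlp(node_feats).mean(dim=0)  # (hid_dim,) design embedding

class ConditionalDecoder(nn.Module):
    # Attention-based decoder that picks one PD parameter at a time,
    # conditioning each choice on the design embedding and prior choices.
    def __init__(self, hid_dim, num_choices_per_param):
        super().__init__()
        self.attn = nn.MultiheadAttention(hid_dim, num_heads=4, batch_first=True)
        self.heads = nn.ModuleList(nn.Linear(hid_dim, n)
                                   for n in num_choices_per_param)
        self.choice_embed = nn.Embedding(max(num_choices_per_param), hid_dim)

    def forward(self, design_emb):
        ctx = design_emb.view(1, 1, -1)          # running decoding context
        actions, log_probs = [], []
        for head in self.heads:                  # one step per PD parameter
            out, _ = self.attn(ctx, ctx, ctx)    # self-attend over the context
            logits = head(out[0, -1])
            dist = torch.distributions.Categorical(logits=logits)
            a = dist.sample()
            actions.append(a)
            log_probs.append(dist.log_prob(a))
            # Feed the chosen value back in so later parameters are
            # conditioned on earlier ones ("conditional" tuning).
            ctx = torch.cat([ctx, self.choice_embed(a).view(1, 1, -1)], dim=1)
        return torch.stack(actions), torch.stack(log_probs).sum()

class PPAEstimator(nn.Module):
    # Placeholder surrogate for end-of-flow PPA cost, so rewards do not
    # require running the full physical-design flow.
    def __init__(self, hid_dim, num_params):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(hid_dim + num_params, hid_dim),
                                 nn.ReLU(), nn.Linear(hid_dim, 1))

    def forward(self, design_emb, actions):
        x = torch.cat([design_emb, actions.float()], dim=-1)
        return self.net(x).squeeze(-1)           # scalar predicted PPA cost

def reinforce_step(encoder, decoder, estimator, node_feats, optimizer):
    # One REINFORCE update: reward = -(estimated PPA cost), so the policy
    # is pushed toward parameter settings with better predicted PPA.
    design_emb = encoder(node_feats)
    actions, log_prob = decoder(design_emb)
    with torch.no_grad():
        reward = -estimator(design_emb, actions)
    loss = -reward * log_prob
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return actions, reward

if __name__ == "__main__":
    torch.manual_seed(0)
    choices = [4, 3, 5]                          # e.g. 3 tool knobs to tune
    enc = NetlistEncoder(node_dim=16, hid_dim=64)
    dec = ConditionalDecoder(hid_dim=64, num_choices_per_param=choices)
    est = PPAEstimator(hid_dim=64, num_params=len(choices))
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
    node_feats = torch.randn(100, 16)            # fake netlist node features
    actions, reward = reinforce_step(enc, dec, est, node_feats, opt)
    print("parameter indices:", actions.tolist(), "reward:", reward.item())

In the paper's setting, the PPA estimator would be trained on end-of-flow outcomes from prior designs so that reward evaluation avoids running the full PD flow; it is left untrained here purely to keep the sketch self-contained and runnable.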
Pages: 93-101 (9 pages)