FastTuner: Transferable Physical Design Parameter Optimization using Fast Reinforcement Learning

Cited by: 1
Authors:
Hsiao, Hao-Hsiang [1 ]
Lu, Yi-Chen [1 ]
Vanna-Iampikul, Pruek [1 ]
Lim, Sung Kyu [1 ]
Affiliations:
[1] Georgia Inst Technol, Atlanta, GA 30332 USA
Source:
PROCEEDINGS OF THE 2024 INTERNATIONAL SYMPOSIUM ON PHYSICAL DESIGN, ISPD 2024 | 2024
Funding:
U.S. National Science Foundation
Keywords:
Physical Design; Reinforcement Learning;
DOI
10.1145/3626184.3633328
Chinese Library Classification (CLC):
TP3 (computing technology, computer technology)
Discipline code:
0812
Abstract:
Current state-of-the-art Design Space Exploration (DSE) methods in Physical Design (PD), including Bayesian Optimization (BO) and Ant Colony Optimization (ACO), mainly rely on black-box rather than parametric (e.g., neural network) approaches to improve end-of-flow Power, Performance, and Area (PPA) metrics, and they often fail to generalize to unseen designs because netlist features are not properly leveraged. To overcome this issue, in this paper we develop a Reinforcement Learning (RL) agent that leverages Graph Neural Networks (GNNs) and Transformers to perform "fast" DSE on unseen designs by sequentially encoding netlist features across different PD stages. In particular, an attention-based encoder-decoder framework is devised for "conditional" parameter tuning, and a PPA estimator is introduced to predict end-of-flow PPA metrics for RL reward estimation. Extensive studies across 7 industrial designs under the TSMC 28nm technology node demonstrate that the proposed framework, FastTuner, significantly outperforms existing state-of-the-art DSE techniques in both optimization quality and runtime, with improvements of up to 79.38% in Total Negative Slack (TNS), 12.22% in total power, and 50x in runtime.
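The abstract's sequential, "conditional" tuning loop can be illustrated with a toy sketch: encode the netlist once, then pick each stage's parameter conditioned on the parameters already chosen, scoring the final setting with a learned PPA estimator used as the RL reward. Everything below (stage names, candidate values, and the stand-in encoder, decoder, and estimator) is hypothetical for illustration only, not FastTuner's actual model.

```python
import math
import random

PD_STAGES = ["placement", "cts", "routing"]   # illustrative PD flow stages
PARAM_CHOICES = [0.5, 0.7, 0.9]               # illustrative candidate values

def encode_netlist(features):
    # Stand-in for the GNN encoder: collapse netlist features to one scalar.
    return sum(features) / len(features)

def decoder_logits(context, chosen):
    # Stand-in for the attention-based decoder: score each candidate value
    # conditioned on the encoded netlist and the parameters chosen so far.
    return [context - abs(v - 0.7) - 0.1 * len(chosen) for v in PARAM_CHOICES]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def ppa_estimator(params):
    # Stand-in for the learned PPA estimator used as the RL reward:
    # pretend end-of-flow PPA is best when every parameter is near 0.7.
    return -sum((p - 0.7) ** 2 for p in params)

def tune(features, rng):
    context = encode_netlist(features)
    chosen = []
    for _stage in PD_STAGES:                  # one decision per PD stage
        probs = softmax(decoder_logits(context, chosen))
        chosen.append(rng.choices(PARAM_CHOICES, weights=probs)[0])
    return chosen, ppa_estimator(chosen)

rng = random.Random(0)
params, reward = tune([0.2, 0.8, 0.5], rng)
print(params, reward)
```

In the paper's setting, the decoder's policy would be trained with the estimator-derived reward; here a single sampled rollout stands in for that loop.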
Pages: 93-101 (9 pages)