FastTuner: Transferable Physical Design Parameter Optimization using Fast Reinforcement Learning

Cited by: 1
Authors
Hsiao, Hao-Hsiang [1 ]
Lu, Yi-Chen [1 ]
Vanna-Iampikul, Pruek [1 ]
Lim, Sung Kyu [1 ]
Affiliation
[1] Georgia Inst Technol, Atlanta, GA 30332 USA
Source
PROCEEDINGS OF THE 2024 INTERNATIONAL SYMPOSIUM ON PHYSICAL DESIGN, ISPD 2024, 2024
Funding
U.S. National Science Foundation
Keywords
Physical Design; Reinforcement Learning;
DOI
10.1145/3626184.3633328
Chinese Library Classification (CLC)
TP3 [Computing Technology; Computer Technology]
Discipline Code
0812
Abstract
Current state-of-the-art Design Space Exploration (DSE) methods in Physical Design (PD), including Bayesian Optimization (BO) and Ant Colony Optimization (ACO), mainly rely on black-box rather than parametric (e.g., neural network) approaches to improve end-of-flow Power, Performance, and Area (PPA) metrics, and they often fail to generalize across unseen designs because netlist features are not properly leveraged. To overcome this issue, in this paper, we develop a Reinforcement Learning (RL) agent that leverages Graph Neural Networks (GNNs) and Transformers to perform "fast" DSE on unseen designs by sequentially encoding netlist features across different PD stages. In particular, an attention-based encoder-decoder framework is devised for "conditional" parameter tuning, and a PPA estimator is introduced to predict end-of-flow PPA metrics for RL reward estimation. Extensive studies across 7 industrial designs under the TSMC 28nm technology node demonstrate that the proposed framework, FastTuner, significantly outperforms existing state-of-the-art DSE techniques in both optimization quality and runtime, where we observe improvements of up to 79.38% in Total Negative Slack (TNS), 12.22% in total power, and 50x in runtime.
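The "conditional" tuning idea in the abstract, choosing each PD parameter sequentially, conditioned on the parameters already fixed, with an end-of-flow PPA surrogate supplying the RL reward, can be sketched minimally as follows. This is an illustration only: the stage names, parameter ranges, toy PPA surrogate, and the tabular epsilon-greedy agent below are all invented stand-ins, not the paper's GNN/Transformer encoder-decoder or its learned PPA estimator.

```python
import random

# Hypothetical stages of a PD flow, each with candidate parameter values.
STAGES = {
    "placement_density": [0.5, 0.6, 0.7],
    "clock_uncertainty": [0.05, 0.10, 0.15],
    "route_layers":      [6, 7, 8],
}

def ppa_estimator(config):
    """Toy surrogate reward (higher is better); stands in for a learned PPA model."""
    return -(abs(config["placement_density"] - 0.6)
             + abs(config["clock_uncertainty"] - 0.10)
             + abs(config["route_layers"] - 7) * 0.01)

def run_episode(q, eps=0.1):
    """Pick one value per stage, each choice conditioned on the prefix chosen so far."""
    config, trajectory = {}, []
    for stage, choices in STAGES.items():
        key = (stage, tuple(sorted(config.items())))      # condition on partial config
        if random.random() < eps or key not in q:
            value = random.choice(choices)                # explore
        else:
            value = max(choices, key=lambda v: q[key].get(v, 0.0))  # exploit
        config[stage] = value
        trajectory.append((key, value))
    return config, trajectory

def train(episodes=500, lr=0.5):
    """Monte-Carlo value updates from the end-of-flow surrogate reward."""
    q = {}
    for _ in range(episodes):
        config, trajectory = run_episode(q)
        reward = ppa_estimator(config)                    # reward only at flow end
        for key, value in trajectory:
            table = q.setdefault(key, {})
            table[value] = table.get(value, 0.0) + lr * (reward - table.get(value, 0.0))
    return q

random.seed(0)
q = train()
best, _ = run_episode(q, eps=0.0)                         # greedy final configuration
```

The surrogate reward is what makes the loop "fast": each episode costs one estimator call rather than a full place-and-route run, mirroring the role the paper assigns to its PPA estimator.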
Pages: 93-101
Page count: 9