FastTuner: Transferable Physical Design Parameter Optimization using Fast Reinforcement Learning

Cited by: 1
Authors
Hsiao, Hao-Hsiang [1 ]
Lu, Yi-Chen [1 ]
Vanna-Iampikul, Pruek [1 ]
Lim, Sung Kyu [1 ]
Affiliations
[1] Georgia Institute of Technology, Atlanta, GA 30332, USA
Source
PROCEEDINGS OF THE 2024 INTERNATIONAL SYMPOSIUM ON PHYSICAL DESIGN, ISPD 2024 | 2024
Funding
U.S. National Science Foundation
Keywords
Physical Design; Reinforcement Learning;
DOI
10.1145/3626184.3633328
Chinese Library Classification
TP3 (Computing Technology, Computer Technology)
Discipline Code
0812
Abstract
Current state-of-the-art Design Space Exploration (DSE) methods in Physical Design (PD), including Bayesian Optimization (BO) and Ant Colony Optimization (ACO), mainly rely on black-box rather than parametric (e.g., neural network) approaches to improve end-of-flow Power, Performance, and Area (PPA) metrics, and often fail to generalize across unseen designs because netlist features are not properly leveraged. To overcome this issue, in this paper, we develop a Reinforcement Learning (RL) agent that leverages Graph Neural Networks (GNNs) and Transformers to perform "fast" DSE on unseen designs by sequentially encoding netlist features across different PD stages. In particular, an attention-based encoder-decoder framework is devised for "conditional" parameter tuning, and a PPA estimator is introduced to predict end-of-flow PPA metrics for RL reward estimation. Extensive studies across 7 industrial designs under the TSMC 28nm technology node demonstrate that the proposed framework, FastTuner, significantly outperforms existing state-of-the-art DSE techniques in both optimization quality and runtime, where we observe improvements of up to 79.38% in Total Negative Slack (TNS), 12.22% in total power, and 50x in runtime.
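To make the dataflow described in the abstract concrete, the following is a minimal, illustrative PyTorch sketch of such a pipeline: a GNN-style netlist encoder, an attention-based decoder that selects tool parameters sequentially ("conditional" tuning), and a stand-in PPA estimator whose prediction serves as the RL reward. All module names, dimensions, the toy netlist, and the REINFORCE-style update are assumptions made for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class NetlistEncoder(nn.Module):
    """One round of mean-aggregation message passing, then mean pooling
    into a single design embedding (a stand-in for the paper's GNN)."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)
        self.msg = nn.Linear(hid_dim, hid_dim)

    def forward(self, x, adj):                       # x: [N, in_dim], adj: [N, N]
        h = torch.relu(self.lin(x))
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        h = torch.relu(h + self.msg(adj @ h) / deg)  # aggregate neighbor messages
        return h.mean(dim=0, keepdim=True)           # design embedding: [1, hid_dim]

class ConditionalDecoder(nn.Module):
    """Picks each tool parameter in sequence, attending over the design
    embedding plus the embeddings of parameters already chosen."""
    def __init__(self, hid_dim, n_choices, n_params):
        super().__init__()
        self.step_queries = nn.Parameter(torch.randn(n_params, hid_dim))
        self.attn = nn.MultiheadAttention(hid_dim, num_heads=4, batch_first=True)
        self.head = nn.Linear(hid_dim, n_choices)
        self.choice_emb = nn.Embedding(n_choices, hid_dim)

    def forward(self, design_emb):
        ctx = design_emb.unsqueeze(0)                # growing context: [1, T, hid_dim]
        actions, logps = [], []
        for q in self.step_queries:                  # one decode step per parameter
            out, _ = self.attn(q.view(1, 1, -1), ctx, ctx)
            dist = torch.distributions.Categorical(logits=self.head(out).squeeze())
            a = dist.sample()
            actions.append(a)
            logps.append(dist.log_prob(a))
            ctx = torch.cat([ctx, self.choice_emb(a).view(1, 1, -1)], dim=1)
        return torch.stack(actions), torch.stack(logps).sum()

# Toy rollout with a REINFORCE-style update. The "PPA estimator" here is an
# untrained MLP placeholder standing in for the paper's trained predictor.
enc = NetlistEncoder(in_dim=8, hid_dim=64)
dec = ConditionalDecoder(hid_dim=64, n_choices=5, n_params=6)
ppa_est = nn.Sequential(nn.Linear(64 + 6, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

x = torch.randn(100, 8)                              # random 100-cell "netlist"
adj = (torch.rand(100, 100) < 0.05).float()          # random connectivity
emb = enc(x, adj)
actions, logp = dec(emb)
reward = ppa_est(torch.cat([emb.squeeze(0), actions.float()])).detach().squeeze()
loss = -reward * logp                                # REINFORCE: raise predicted PPA
opt.zero_grad(); loss.backward(); opt.step()
print("parameters:", actions.tolist(), "predicted PPA reward:", reward.item())
```

Because the decoder conditions every choice on the design embedding, the same trained weights can in principle be applied to a new netlist's embedding, which is the transferability argument the abstract makes.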
Pages: 93 - 101 (9 pages)
Related Papers (50 in total)
  • [1] Parameter optimization design of MFAC based on Reinforcement Learning
    Liu, Shida
    Jia, Xiongbo
    Ji, Honghai
    Fan, Lingling
    2023 IEEE 12TH DATA DRIVEN CONTROL AND LEARNING SYSTEMS CONFERENCE, DDCLS, 2023, : 1036 - 1043
  • [2] Information Optimization and Transferable State Abstractions in Deep Reinforcement Learning
    Gomez, Diego
    Quijano, Nicanor
    Giraldo, Luis Felipe
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (04) : 4782 - 4793
  • [3] Fast reinforcement learning for simple physical robots
    Hartono, P.
    Kakita, S.
    MEMETIC COMPUTING, 2009, 1 (4) : 305 - 313
  • [4] Simultaneous Process Design and Control Optimization using Reinforcement Learning
    Sachio, Steven
    Chanona, Antonio E. del-Rio
    Petsagkourakis, Panagiotis
    IFAC PAPERSONLINE, 2021, 54 (03): : 510 - 515
  • [5] Design optimization of heat exchanger using deep reinforcement learning
    Lee, Geunhyeong
    Joo, Younghwan
    Lee, Sung-Uk
    Kim, Taejoon
    Yu, Yonggyun
    Kim, Hyun-Gil
    INTERNATIONAL COMMUNICATIONS IN HEAT AND MASS TRANSFER, 2024, 159
  • [6] Design and optimization of a thermoacoustic heat engine using reinforcement learning
    Mumith, Jurriath-Azmathi
    Karayiannis, Tassos
    Makatsoris, Charalampos
    INTERNATIONAL JOURNAL OF LOW-CARBON TECHNOLOGIES, 2016, 11 (03) : 431 - 439
  • [7] Optimization of a physical internet based supply chain using reinforcement learning
    Puskas, Eszter
    Budai, Adam
    Bohacs, Gabor
    EUROPEAN TRANSPORT RESEARCH REVIEW, 2020, 12 (01)
  • [8] DESIGN OF FOURBAR LINKAGES USING A REINFORCEMENT LEARNING OPTIMIZATION METHOD
    Gallego, Juan A.
    Munoz, Juan M.
    Viquerat, Jonathan
    Aguirre, Milton E.
    PROCEEDINGS OF ASME 2022 INTERNATIONAL DESIGN ENGINEERING TECHNICAL CONFERENCES AND COMPUTERS AND INFORMATION IN ENGINEERING CONFERENCE, IDETC-CIE2022, VOL 7, 2022,
  • [9] Parameter Design Optimization for DC-DC Power Converters with Deep Reinforcement Learning
    Tian, Fanghao
    Cobaleda, Diego Bernal
    Wouters, Hans
    Martinez, Wilmar
    2022 IEEE ENERGY CONVERSION CONGRESS AND EXPOSITION (ECCE), 2022,