Deep reinforcement learning and Bayesian optimization based OpAmp design across the CMOS process space

Cited by: 0
Authors
Papageorgiou, Eleni [1 ]
Buzo, Andi [2 ]
Pelz, Georg [2 ]
Noulis, Thomas [1 ,3 ]
Affiliations
[1] Aristotle Univ Thessaloniki, Dept Phys, Thessaloniki 54124, Greece
[2] Infineon Technol AG, Munich, Germany
[3] Ctr Interdisciplinary Res & Innovat CIRI AUTH, Thessaloniki, Greece
Keywords
Analog design; Reinforcement learning; Bayesian optimization; Automation; OpAmp design;
DOI
10.1016/j.aeue.2025.155697
CLC classification
TM [Electrical engineering]; TN [Electronics and communication technology];
Subject classification
0808; 0809;
Abstract
In this work, we propose a Deep Reinforcement Learning (DRL)-based method for the multi-objective optimization of circuit parameters. The approach leverages a custom Reinforcement Learning environment, enhanced with memoization techniques to minimize simulation iterations and improve efficiency, alongside Bayesian Optimization to narrow the design space. This method generates multiple solutions that not only meet specific performance targets but also surpass them, allowing designers to select the most suitable option based on performance trade-offs. The approach is validated on a two-stage operational amplifier topology, implemented across three different process nodes: 22 nm, 65 nm, and 180 nm. The resulting solutions create a visualization of the design space, offering intuitive and reliable insights into key performance metrics and design trends derived from the agent's exploration. By integrating this DRL-based approach into the analog circuit design workflow, the time-to-market is significantly reduced, while the method enhances the capabilities of design experts by automating parameter selection and optimization.
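The abstract's memoization idea, caching simulator results so the agent never pays twice for revisiting the same design point, can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: `simulate`, the parameter names (`width_nm`, `bias_ua`), and the 60 dB gain target are all invented stand-ins for a real SPICE call and real specs.

```python
def simulate(width_nm: int, bias_ua: int) -> dict:
    """Hypothetical stand-in for a SPICE simulation: maps design
    parameters to performance metrics with a made-up linear model."""
    gain = 0.1 * width_nm - 0.01 * bias_ua
    return {"gain_db": gain, "power_uw": 0.5 * bias_ua}

class MemoizedSimEnv:
    """Toy RL-style environment that caches simulator calls, so a
    repeated visit to a design point costs no extra simulation."""

    def __init__(self):
        self.cache = {}      # design point -> metrics
        self.sim_calls = 0   # counts actual (expensive) simulations

    def step(self, action):
        key = tuple(action)  # design parameters as a hashable cache key
        if key not in self.cache:
            self.sim_calls += 1
            self.cache[key] = simulate(*action)
        metrics = self.cache[key]
        # Illustrative reward: gain margin over a 60 dB target,
        # penalized by power consumption.
        reward = (metrics["gain_db"] - 60.0) - 0.01 * metrics["power_uw"]
        return metrics, reward

env = MemoizedSimEnv()
env.step((700, 50))
env.step((700, 50))   # cache hit: no new simulation
env.step((800, 40))
print(env.sim_calls)  # → 2
```

In the paper's setting the cached call would be a full circuit simulation, which is where skipping repeated evaluations actually saves wall-clock time; Bayesian optimization would then bound the range of `action` values the agent is allowed to explore.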
Pages: 9