Deep reinforcement learning and Bayesian optimization based OpAmp design across the CMOS process space

Cited by: 4
Authors
Papageorgiou, Eleni [1]
Buzo, Andi [2]
Pelz, Georg [2]
Noulis, Thomas [1,3]
Affiliations
[1] Aristotle Univ Thessaloniki, Dept Phys, Thessaloniki 54124, Greece
[2] Infineon Technol AG, Munich, Germany
[3] Ctr Interdisciplinary Res & Innovat CIRI AUTH, Thessaloniki, Greece
Keywords
Analog design; Reinforcement learning; Bayesian optimization; Automation; OpAmp design
DOI
10.1016/j.aeue.2025.155697
Chinese Library Classification
TM (Electrical Technology); TN (Electronic Technology, Communication Technology)
Discipline Codes
0808; 0809
Abstract
In this work, we propose a Deep Reinforcement Learning (DRL)-based method for the multi-objective optimization of circuit parameters. The approach leverages a custom Reinforcement Learning environment, enhanced with memoization to avoid redundant simulations and improve efficiency, alongside Bayesian Optimization to narrow the design space. The method generates multiple solutions that not only meet but also surpass specific performance targets, allowing designers to select the most suitable option based on performance trade-offs. The approach is validated on a two-stage operational amplifier topology implemented across three process nodes: 22 nm, 65 nm, and 180 nm. The resulting solutions form a visualization of the design space, offering intuitive and reliable insights into key performance metrics and design trends derived from the agent's exploration. Integrating this DRL-based approach into the analog circuit design workflow significantly reduces time-to-market and augments design experts' capabilities by automating parameter selection and optimization.
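The memoization mechanism mentioned above lends itself to a compact illustration. Below is a minimal sketch, assuming a Python environment; `MemoizedSimulator` and `dummy_spice` are hypothetical names introduced here for illustration and do not reflect the authors' actual environment or simulator interface, and the Bayesian Optimization stage that bounds the explored parameter ranges is omitted.

```python
# A minimal sketch of the memoization idea: identical parameter vectors
# are simulated only once, so repeated visits by the RL agent are served
# from a cache instead of re-running the circuit simulator.
# MemoizedSimulator and dummy_spice are illustrative assumptions.

class MemoizedSimulator:
    def __init__(self, simulate_fn):
        self.simulate_fn = simulate_fn  # expensive call, e.g. a SPICE run
        self.cache = {}                 # parameter tuple -> measured metrics
        self.calls = 0
        self.hits = 0

    def evaluate(self, params):
        # Round to fixed precision so nearly identical float vectors
        # share one cache entry (a hashable, tolerant key).
        key = tuple(round(p, 6) for p in params)
        self.calls += 1
        if key in self.cache:
            self.hits += 1
        else:
            self.cache[key] = self.simulate_fn(params)
        return self.cache[key]

def dummy_spice(params):
    # Stand-in for a real simulator returning OpAmp performance metrics.
    w1, w2 = params
    return {"gain_db": 20.0 + 0.5 * w1, "gbw_mhz": 10.0 + 0.2 * w2}

sim = MemoizedSimulator(dummy_spice)
for params in [(1.0, 2.0), (1.0, 2.0), (3.0, 4.0)]:
    metrics = sim.evaluate(params)
print(f"{sim.calls} evaluations, {sim.hits} served from cache")  # 3, 1
```

In a training loop, the cache hit rate grows as the agent revisits promising regions of the design space, which is where the reduction in simulation iterations described in the abstract would come from.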
Pages: 9