Deep reinforcement learning and Bayesian optimization based OpAmp design across the CMOS process space

Cited by: 0
Authors
Papageorgiou, Eleni [1]
Buzo, Andi [2]
Pelz, Georg [2]
Noulis, Thomas [1,3]
Affiliations
[1] Aristotle Univ Thessaloniki, Dept Phys, Thessaloniki 54124, Greece
[2] Infineon Technol AG, Munich, Germany
[3] Ctr Interdisciplinary Res & Innovat CIRI AUTH, Thessaloniki, Greece
Keywords
Analog design; Reinforcement learning; Bayesian optimization; Automation; OpAmp design;
DOI
10.1016/j.aeue.2025.155697
CLC classification: TM (Electrical engineering); TN (Electronic and communication technology)
Discipline codes: 0808; 0809
Abstract
In this work, we propose a Deep Reinforcement Learning (DRL)-based method for the multi-objective optimization of circuit parameters. The approach leverages a custom Reinforcement Learning environment, enhanced with memoization techniques to minimize simulation iterations and improve efficiency, alongside Bayesian Optimization to narrow the design space. This method generates multiple solutions that not only meet specific performance targets but also surpass them, allowing designers to select the most suitable option based on performance trade-offs. The approach is validated on a two-stage operational amplifier topology, implemented across three different process nodes: 22 nm, 65 nm, and 180 nm. The resulting solutions create a visualization of the design space, offering intuitive and reliable insights into key performance metrics and design trends derived from the agent's exploration. By integrating this DRL-based approach into the analog circuit design workflow, the time-to-market is significantly reduced, while the method enhances the capabilities of design experts by automating parameter selection and optimization.
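The abstract's memoization idea, caching simulator results so the RL agent never re-simulates a previously visited sizing point, can be sketched as a toy environment. All names here are illustrative, and the analytic "simulator" is a stand-in for SPICE, not the authors' implementation:

```python
# Toy stand-in for a SPICE simulation of a two-stage OpAmp:
# maps a (W_um, L_um, Ibias_uA) sizing vector to (gain_dB, bandwidth_MHz).
# Purely illustrative analytic model, NOT a real simulator.
def simulate(w_um, l_um, ibias_ua):
    gain_db = 20.0 * (w_um / l_um) ** 0.5
    bw_mhz = 100.0 * ibias_ua / (w_um * l_um)
    return gain_db, bw_mhz


class MemoizedSimEnv:
    """RL-style environment that caches simulator calls on a quantized
    parameter grid, in the spirit of the memoization described in the
    abstract (class and method names are hypothetical)."""

    def __init__(self, quantum=0.01):
        self.quantum = quantum  # grid step used to quantize parameters
        self.cache = {}         # quantized params -> simulation result
        self.sim_calls = 0      # counts actual (expensive) simulations

    def _key(self, params):
        # Quantize each parameter so nearby points share a cache entry.
        return tuple(round(p / self.quantum) for p in params)

    def evaluate(self, params):
        key = self._key(params)
        if key not in self.cache:
            self.sim_calls += 1
            self.cache[key] = simulate(*params)
        return self.cache[key]

    def reward(self, params, gain_target=40.0, bw_target=5.0):
        # Zero reward once both targets are met; negative shortfall otherwise,
        # so solutions that meet or exceed every target score equally well.
        gain_db, bw_mhz = self.evaluate(params)
        return min(gain_db - gain_target, 0.0) + min(bw_mhz - bw_target, 0.0)
```

Repeated evaluations of the same (or nearly the same) sizing vector then cost a dictionary lookup instead of a simulation run, which is where the reduction in simulation iterations comes from.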
Pages: 9
Related papers (50 total)
[31] Fan, Lin; Su, Huai; Wang, Wei; Zio, Enrico; Zhang, Li; Yang, Zhaoming; Peng, Shiliang; Yu, Weichao; Zuo, Lili; Zhang, Jinjun. A systematic method for the optimization of gas supply reliability in natural gas pipeline network based on Bayesian networks and deep reinforcement learning. RELIABILITY ENGINEERING & SYSTEM SAFETY, 2022, 225.
[32] Erharter, Georg H.; Hansen, Tom F.; Liu, Zhongqiang; Marcher, Thomas. Reinforcement learning based process optimization and strategy development in conventional tunneling. AUTOMATION IN CONSTRUCTION, 2021, 127.
[33] Nikita, Saxena; Tiwari, Anamika; Sonawat, Deepak; Kodamana, Hariprasad; Rathore, Anurag S. Reinforcement learning based optimization of process chromatography for continuous processing of biopharmaceuticals. CHEMICAL ENGINEERING SCIENCE, 2021, 230.
[34] Miao, Weiyang; Xie, Zhen; Tan, Chuan Seng; Rotaru, Mihai D. Deep Reinforcement Learning-Based Power Distribution Network Design Optimization for Multi-Chiplet System. PROCEEDINGS OF THE IEEE 74TH ELECTRONIC COMPONENTS AND TECHNOLOGY CONFERENCE, ECTC 2024, 2024: 1716-1723.
[35] Uhlmann, Yannick; Essich, Michael; Bramlage, Lennart; Scheible, Jürgen; Curio, Cristobal. Deep Reinforcement Learning for Analog Circuit Sizing with an Electrical Design Space and Sparse Rewards. MLCAD '22: PROCEEDINGS OF THE 2022 ACM/IEEE 4TH WORKSHOP ON MACHINE LEARNING FOR CAD (MLCAD), 2022: 21-26.
[36] Tec, Mauricio; Duan, Yunshan; Muller, Peter. A Comparative Tutorial of Bayesian Sequential Design and Reinforcement Learning. AMERICAN STATISTICIAN, 2023, 77(02): 223-233.
[37] Nishimura, Takuto; Sota, Ryosuke; Horiuchi, Tadashi. Bayesian Optimization of Hyper-Parameters and Reward Function in Deep Reinforcement Learning: Application to Behavior Learning of Mobile Robot. INTERNATIONAL JOURNAL OF INNOVATIVE COMPUTING, INFORMATION AND CONTROL, 2025, 21(02): 469-480.
[38] Eoh, Gyuho; Park, Tae-Hyoung. Automatic Curriculum Design for Object Transportation Based on Deep Reinforcement Learning. IEEE ACCESS, 2021, 9: 137281-137294.
[39] Xu, Xiaohan; Huang, Xudong; Bi, Dianfang; Zhou, Ming. An Intellectual Aerodynamic Design Method for Compressors Based on Deep Reinforcement Learning. AEROSPACE, 2023, 10(02).
[40] Zhang, Kunyuan; Liu, Delian; Yang, Shuo. Design Method of Infrared Stealth Film Based on Deep Reinforcement Learning. PHOTONICS, 2025, 12(01).