Automated design and optimization of distributed filter circuits using reinforcement learning

Cited: 0
Authors
Gao, Peng [1 ]
Yu, Tao [1 ]
Wang, Fei [2 ]
Yuan, Ru-Yue
Affiliations
[1] Qufu Normal Univ, Sch Cyber Sci & Engn, Qufu 273165, Shandong, Peoples R China
[2] Harbin Inst Technol Shenzhen, Sch Integrated Circuits, Shenzhen 518055, Guangdong, Peoples R China
Funding
China Postdoctoral Science Foundation;
Keywords
electronic design automation; circuit design; filter circuit; reinforcement learning; NEURAL-NETWORKS;
DOI
10.1093/jcde/qwae066
CLC classification
TP39 [Computer Applications];
Discipline code
081203 ; 0835 ;
Abstract
Designing distributed filter circuits (DFCs) is complex and time-consuming, involving setting and optimizing multiple hyperparameters. Traditional optimization methods, such as using the commercial finite element solver High-Frequency Structure Simulator to enumerate all parameter combinations with fixed steps and then simulate each combination, are not only time-consuming and labor-intensive but also rely heavily on the expertise and experience of electronics engineers, making it difficult to adapt to rapidly changing design requirements. Additionally, these commercial tools struggle with precise adjustments when parameters are sensitive to numerical changes, resulting in limited optimization effectiveness. This study proposes a novel end-to-end automated method for DFC design. The proposed method harnesses reinforcement learning (RL) algorithms, eliminating the dependence on the design experience of engineers. Thus, it significantly reduces the subjectivity and constraints associated with circuit design. The experimental findings demonstrate clear improvements in design efficiency and quality when comparing the proposed method with traditional engineer-driven methods. Furthermore, the proposed method achieves superior performance when designing complex or rapidly evolving DFCs, highlighting the substantial potential of RL in circuit design automation. In particular, compared with the existing DFC automation design method CircuitGNN, our method achieves an average performance improvement of 8.72%. Additionally, the execution efficiency of our method is 2000 times higher than CircuitGNN on the CPU and 241 times higher on the GPU.
Graphical Abstract
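The abstract contrasts fixed-step parameter enumeration against RL-driven search over circuit hyperparameters. As an illustrative sketch only (not the paper's actual algorithm, reward design, or electromagnetic simulator), the following toy loop shows the general idea of an agent iteratively nudging a design parameter to minimize a surrogate simulation error; every name here (`filter_response_error`, `optimize`, the `cutoff` parameter) is hypothetical.

```python
import random

def filter_response_error(params, target_cutoff=5.0):
    # Toy stand-in for a full-wave EM simulation: squared error between
    # the candidate parameter (a proxy for DFC geometry) and the target.
    return (params["cutoff"] - target_cutoff) ** 2

def optimize(episodes=500, step=0.25, seed=0):
    # Greedy agent with random exploratory actions: each action nudges
    # the parameter up or down; a move is kept only if it reduces the
    # simulation error (i.e., improves the reward).
    rng = random.Random(seed)
    params = {"cutoff": 1.0}
    best = dict(params)
    for _ in range(episodes):
        action = rng.choice([-step, step])
        candidate = {"cutoff": params["cutoff"] + action}
        if filter_response_error(candidate) < filter_response_error(params):
            params = candidate
        if filter_response_error(params) < filter_response_error(best):
            best = dict(params)
    return best

best = optimize()
print(best)
```

Unlike fixed-step enumeration, which must simulate every grid point, a learning-based search concentrates simulation calls on promising regions of the parameter space, which is where the reported speedups over exhaustive sweeps come from.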
Pages: 60-76
Number of pages: 17
Related papers
50 records
  • [31] Automated calibration of somatosensory stimulation using reinforcement learning
    Borda, Luigi
    Gozzi, Noemi
    Preatoni, Greta
    Valle, Giacomo
    Raspopovic, Stanisa
    JOURNAL OF NEUROENGINEERING AND REHABILITATION, 2023, 20 (01)
  • [32] Sequential Banner Design Optimization with Deep Reinforcement Learning
    Kondo, Yusuke
    Wang, Xueting
    Seshime, Hiroyuki
    Yamasaki, Toshihiko
    23RD IEEE INTERNATIONAL SYMPOSIUM ON MULTIMEDIA (ISM 2021), 2021, : 253 - 256
  • [33] Automating Reinforcement Learning Architecture Design for Code Optimization
    Wang, Huanting
    Tang, Zhanyong
    Zhang, Cheng
    Zhao, Jiaqi
    Cummins, Chris
    Leather, Hugh
    Wang, Zheng
    CC'22: PROCEEDINGS OF THE 31ST ACM SIGPLAN INTERNATIONAL CONFERENCE ON COMPILER CONSTRUCTION, 2022, : 129 - 143
  • [34] Parameter optimization design of MFAC based on Reinforcement Learning
    Liu, Shida
    Jia, Xiongbo
    Ji, Honghai
    Fan, Lingling
    2023 IEEE 12TH DATA DRIVEN CONTROL AND LEARNING SYSTEMS CONFERENCE, DDCLS, 2023, : 1036 - 1043
  • [36] Reinforcement Learning for guiding optimization processes in optical design
    Fu, Cailing
    Stollenwerk, Jochen
    Holly, Carlo
    APPLICATIONS OF MACHINE LEARNING 2022, 2022, 12227
  • [37] Resource Management in Distributed SDN Using Reinforcement Learning
    Ma, Liang
    Zhang, Ziyao
    Ko, Bongjun
    Srivatsa, Mudhakar
    Leung, Kin K.
    GROUND/AIR MULTISENSOR INTEROPERABILITY, INTEGRATION, AND NETWORKING FOR PERSISTENT ISR IX, 2018, 10635
  • [38] Optimization of Obstacle Avoidance Using Reinforcement Learning
    Kominami, Keishi
    Takubo, Tomohito
    Ohara, Kenichi
    Mae, Yasushi
    Arai, Tatsuo
    2012 IEEE/SICE INTERNATIONAL SYMPOSIUM ON SYSTEM INTEGRATION (SII), 2012, : 67 - 72
  • [39] Deep reinforcement learning for engineering design through topology optimization of elementally discretized design domains
    Brown, Nathan K.
    Garland, Anthony P.
    Fadel, Georges M.
    Li, Gang
    MATERIALS & DESIGN, 2022, 218
  • [40] Robot control optimization using reinforcement learning
    Song, KT
    Sun, WY
    JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS, 1998, 21 (03) : 221 - 238