GateRL: Automated Circuit Design Framework of CMOS Logic Gates Using Reinforcement Learning

Cited: 4
Authors
Nam, Hyoungsik [1]
Kim, Young-In [1]
Bae, Jina [1]
Lee, Junhee [1]
Affiliations
[1] Kyung Hee Univ, Dept Informat Display, Seoul 02447, South Korea
Funding
National Research Foundation of Singapore
Keywords
automated circuit design; CMOS logic gate; reinforcement learning; action masking
DOI
10.3390/electronics10091032
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
This paper proposes GateRL, an automated circuit design framework for CMOS logic gates based on reinforcement learning. Because the connections among circuit elements are subject to constraints, an action masking scheme is employed; it also shrinks the action space, which improves the learning speed. GateRL consists of an agent that selects actions and an environment that provides the state, the mask, and the reward. The state and reward are derived from a connection matrix that describes the current circuit configuration, and the mask is obtained from a masking matrix built from the constraints and the current connection matrix. Actions are produced by a deep Q-network of four fully connected layers in the agent. In particular, separate replay buffers are maintained for success transitions and failure transitions to expedite training. The proposed network is trained with two inputs, one output, two NMOS transistors, and two PMOS transistors to design all target logic gates: buffer, inverter, AND, OR, NAND, and NOR. Consequently, GateRL produces a one-transistor buffer, a two-transistor inverter, a two-transistor AND, a two-transistor OR, a three-transistor NAND, and a three-transistor NOR. The operation of these resulting circuits is verified by SPICE simulation.
Pages: 14
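As a reading aid, below is a minimal sketch (Python with PyTorch, not taken from the paper) of the two mechanisms the abstract highlights: a four-layer fully connected deep Q-network whose invalid actions are suppressed by a mask, and separate replay buffers for success and failure transitions. The state and action dimensions, the hidden size, and all names (QNetwork, DualReplayBuffer, masked_greedy_action) are illustrative assumptions, not the authors' implementation.

    # Minimal sketch (not the authors' code) of masked DQN action selection
    # and dual replay buffers, as described in the abstract.
    import random
    from collections import deque

    import torch
    import torch.nn as nn

    STATE_DIM = 64    # assumed size of the flattened connection matrix
    ACTION_DIM = 64   # assumed: one action per candidate connection

    class QNetwork(nn.Module):
        """Deep Q-network with four fully connected layers."""
        def __init__(self, state_dim: int, action_dim: int, hidden: int = 128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, action_dim),
            )

        def forward(self, state: torch.Tensor) -> torch.Tensor:
            return self.net(state)

    def masked_greedy_action(q_net: QNetwork, state: torch.Tensor,
                             mask: torch.Tensor) -> int:
        """Pick the highest-Q action among those the mask allows (mask=1 means valid)."""
        q_values = q_net(state)
        q_values = q_values.masked_fill(mask == 0, float("-inf"))
        return int(torch.argmax(q_values).item())

    class DualReplayBuffer:
        """Keeps success and failure transitions in separate buffers and samples
        a mini-batch from both, so rare success transitions are not drowned out."""
        def __init__(self, capacity: int = 10_000):
            self.success = deque(maxlen=capacity)
            self.failure = deque(maxlen=capacity)

        def push(self, transition, succeeded: bool):
            (self.success if succeeded else self.failure).append(transition)

        def sample(self, batch_size: int):
            half = batch_size // 2
            batch = random.sample(self.success, min(half, len(self.success)))
            batch += random.sample(self.failure,
                                   min(batch_size - len(batch), len(self.failure)))
            return batch

    if __name__ == "__main__":
        q_net = QNetwork(STATE_DIM, ACTION_DIM)
        state = torch.zeros(STATE_DIM)     # e.g., an empty connection matrix
        mask = torch.ones(ACTION_DIM)      # 1 = connection allowed, 0 = forbidden
        print("chosen action:", masked_greedy_action(q_net, state, mask))

Setting forbidden actions to negative infinity before the argmax is one common way to realize action masking; whether GateRL masks the Q-values or the action sampling directly is not specified in the abstract.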
Related Papers (50 total)
[41] Highway Exiting Planner for Automated Vehicles Using Reinforcement Learning [J]. Cao, Zhong; Yang, Diange; Xu, Shaobing; Peng, Huei; Li, Boqi; Feng, Shuo; Zhao, Ding. IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2021, 22(02): 990-1000.
[42] A Hybrid Framework for Functional Verification using Reinforcement Learning and Deep Learning [J]. Singh, Karunveer; Gupta, Rishabh; Gupta, Vikram; Fayyazi, Arash; Pedram, Massoud; Nazarian, Shahin. GLSVLSI '19 - PROCEEDINGS OF THE 2019 ON GREAT LAKES SYMPOSIUM ON VLSI, 2019: 367-370.
[43] Automated Evaluation of Metrics for Immersive Test Using Reinforcement Learning [J]. Estrada, Roberto G. L.; de Oliveira, Anderson V. C.; Ortiz Diaz, Agustin Alejandro; da Costa, Jeferson B.; Domingos, Emerson S. HCI INTERNATIONAL 2024-LATE BREAKING POSTERS, HCII 2024, PT I, 2025, 2319: 188-197.
[44] Development of Automated Negotiation Models for Suppliers Using Reinforcement Learning [J]. Lee, Ga Hyun; Song, Byunghun; Jung, Jieun; Jeon, Hyun Woo. ADVANCES IN PRODUCTION MANAGEMENT SYSTEMS-PRODUCTION MANAGEMENT SYSTEMS FOR VOLATILE, UNCERTAIN, COMPLEX, AND AMBIGUOUS ENVIRONMENTS, APMS 2024, PT V, 2024, 732: 367-380.
[45] Incremental reinforcement learning for multi-objective analog circuit design acceleration [J]. Abuelnasr, Ahmed; Ragab, Ahmed; Amer, Mostafa; Gosselin, Benoit; Savaria, Yvon. ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 129.
[46] Fuzzy logic control of dynamic quadrature booster using reinforcement learning [J]. Li, BH; Wu, QH; Wang, PY; Zhou, XX. POWERCON '98: 1998 INTERNATIONAL CONFERENCE ON POWER SYSTEM TECHNOLOGY - PROCEEDINGS, VOLS 1 AND 2, 1998: 843-849.
[47] A Simulation of Ant Formation and Foraging using Fuzzy Logic and Reinforcement Learning [J]. Afshar, S.; Mahjoob, M. J. 2008 IEEE CONFERENCE ON CYBERNETICS AND INTELLIGENT SYSTEMS, VOLS 1 AND 2, 2008: 1086-1091.
[48] Area-Driven FPGA Logic Synthesis Using Reinforcement Learning [J]. Zhou, Guanglei; Anderson, Jason H. 2023 28TH ASIA AND SOUTH PACIFIC DESIGN AUTOMATION CONFERENCE, ASP-DAC, 2023: 159-165.
[49] ARCS: Adaptive Reinforcement Learning Framework for Automated Cybersecurity Incident Response Strategy Optimization [J]. Ren, Shaochen; Jin, Jianian; Niu, Guanchong; Liu, Yang. APPLIED SCIENCES-BASEL, 2025, 15(02).
[50] An Online Evolving Framework for Advancing Reinforcement-Learning based Automated Vehicle Control [J]. Han, Teawon; Nageshrao, Subramanya; Filev, Dimitar P.; Ozguner, Umit. IFAC PAPERSONLINE, 2020, 53(02): 8118-8123.