GateRL: Automated Circuit Design Framework of CMOS Logic Gates Using Reinforcement Learning

Cited by: 4
Authors
Nam, Hyoungsik [1 ]
Kim, Young-In [1 ]
Bae, Jina [1 ]
Lee, Junhee [1 ]
Affiliations
[1] Kyung Hee Univ, Dept Informat Display, Seoul 02447, South Korea
Funding
National Research Foundation of Singapore;
Keywords
automated circuit design; CMOS logic gate; reinforcement learning; action masking; DEEP; SCHEME;
DOI
10.3390/electronics10091032
CLC classification
TP [Automation Technology, Computer Technology];
Discipline code
0812;
Abstract
This paper proposes GateRL, an automated circuit design framework for CMOS logic gates based on reinforcement learning. Because the connections between circuit elements are subject to constraints, an action masking scheme is employed; it also shrinks the action space, which speeds up learning. GateRL consists of an agent that selects actions and an environment that supplies the state, mask, and reward. The state and reward are generated from a connection matrix describing the current circuit configuration, and the mask is derived from a masking matrix built from the constraints and the current connection matrix. Actions are produced by a deep Q-network of four fully connected layers in the agent. In particular, separate replay buffers are maintained for success transitions and failure transitions to expedite training. The proposed network is trained with two inputs, one output, two NMOS transistors, and two PMOS transistors to design all target logic gates: buffer, inverter, AND, OR, NAND, and NOR. GateRL produces a one-transistor buffer, a two-transistor inverter, a two-transistor AND, a two-transistor OR, a three-transistor NAND, and a three-transistor NOR, and the operation of each resulting circuit is verified by SPICE simulation.
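The two mechanisms the abstract highlights, masking invalid actions before the greedy argmax and keeping success and failure transitions in separate replay buffers, can be sketched as follows. This is a minimal illustration under assumed interfaces, not the paper's implementation; all names and sizes are hypothetical.

```python
import random
from collections import deque

import numpy as np


class MaskedDQNAgent:
    """Illustrative sketch: action masking + dual replay buffers.

    Hypothetical simplification of the GateRL scheme; the Q-network
    itself is abstracted away as a plain Q-value vector.
    """

    def __init__(self, n_actions, buffer_size=10000):
        self.n_actions = n_actions
        # Separate buffers so rare success transitions are not drowned
        # out by the far more frequent failures during early training.
        self.success_buffer = deque(maxlen=buffer_size)
        self.failure_buffer = deque(maxlen=buffer_size)

    def select_action(self, q_values, mask):
        """Greedy action among those the mask allows.

        q_values: shape (n_actions,) output of the Q-network.
        mask: boolean array, True where the connection is permitted.
        """
        masked_q = np.where(mask, q_values, -np.inf)
        return int(np.argmax(masked_q))

    def store(self, transition, success):
        """Route a (s, a, r, s') tuple to the matching buffer."""
        (self.success_buffer if success else self.failure_buffer).append(transition)

    def sample_batch(self, batch_size):
        """Draw roughly half the batch from each buffer when possible."""
        half = batch_size // 2
        batch = random.sample(self.failure_buffer,
                              min(half, len(self.failure_buffer)))
        batch += random.sample(self.success_buffer,
                               min(batch_size - len(batch),
                                   len(self.success_buffer)))
        return batch
```

Masking the Q-values with `-inf` (rather than penalizing invalid actions via reward) guarantees the agent never emits a connection that violates the circuit constraints, while the dual-buffer sampling keeps successful episodes over-represented in each training batch.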
Pages: 14
Related Papers
50 records in total
[22]   Automated calibration of somatosensory stimulation using reinforcement learning [J].
Borda, Luigi ;
Gozzi, Noemi ;
Preatoni, Greta ;
Valle, Giacomo ;
Raspopovic, Stanisa .
JOURNAL OF NEUROENGINEERING AND REHABILITATION, 2023, 20 (01)
[23]   Automated Vulnerability Exploitation Using Deep Reinforcement Learning [J].
Almajali, Anas ;
Al-Abed, Loiy ;
Yousef, Khalil M. Ahmad ;
Mohd, Bassam J. ;
Samamah, Zaid ;
Abu Shhadeh, Anas .
APPLIED SCIENCES-BASEL, 2024, 14 (20)
[24]   Modelling the Process of Learning Analytics Using a Reinforcement Learning Framework [J].
Choi, Samuel P. M. ;
Lam, Franklin S. S. .
INNOVATIONS IN OPEN AND FLEXIBLE EDUCATION, 2018, :243-251
[25]   Automated construction scheduling using deep reinforcement learning with valid action sampling [J].
Yao, Yuan ;
Tam, Vivian W. Y. ;
Wang, Jun ;
Le, Khoa N. ;
Butera, Anthony .
AUTOMATION IN CONSTRUCTION, 2024, 166
[26]   Fuzzy-logic-based reinforcement learning of admittance control for automated robotic manufacturing [J].
Prabhu, SM ;
Garg, DP .
ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 1998, 11 (01) :7-23
[27]   RLOP: A Framework Design for Offset Prefetching Combined with Reinforcement Learning [J].
Huang, Yan ;
Wang, Zhanyang .
PROCEEDINGS OF THE 13TH INTERNATIONAL CONFERENCE ON COMPUTER ENGINEERING AND NETWORKS, VOL III, CENET 2023, 2024, 1127 :90-99
[28]   Multi-Modal Legged Locomotion Framework With Automated Residual Reinforcement Learning [J].
Yu, Chen ;
Rosendo, Andre .
IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, 7 (04) :10312-10319
[29]   On the Challenges of Quantum Circuit Encoding Using Deep and Reinforcement Learning [J].
Selig, Patrick ;
Murphy, Niall ;
Redmond, David ;
Caton, Simon .
IEEE ACCESS, 2025, 13 :75216-75230
[30]   Circuit Driving of RC Scale Cars using Reinforcement Learning [J].
Kwon, Minhyeok ;
Eun, Yongsoon .
2022 22ND INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION AND SYSTEMS (ICCAS 2022), 2022, :217-221