Exploring Logic Optimizations with Reinforcement Learning and Graph Convolutional Network

Cited by: 42
|
Authors
Zhu, Keren [1 ]
Liu, Mingjie [1 ]
Chen, Hao [1 ]
Zhao, Zheng [2 ]
Pan, David Z. [1 ]
Affiliations
[1] UT Austin, ECE Dept, Austin, TX 78712 USA
[2] Synopsys, Mountain View, CA USA
Source
PROCEEDINGS OF THE 2020 ACM/IEEE 2ND WORKSHOP ON MACHINE LEARNING FOR CAD (MLCAD '20) | 2020
Keywords
logic synthesis; reinforcement learning; graph neural network;
DOI
10.1145/3380446.3430622
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Logic synthesis for combinational circuits seeks the minimum equivalent representation of a Boolean logic function. A widely adopted logic synthesis paradigm represents the Boolean logic with standardized logic networks, such as and-inverter graphs (AIGs), and iteratively performs logic minimization operations over the graph. Although research on different logic representations and operations has been fruitful, the sequence in which the operations are applied is often determined by heuristics. We propose a Markov decision process (MDP) formulation of the logic synthesis problem and a reinforcement learning (RL) algorithm incorporating a graph convolutional network to explore the solution search space. Experimental results show that the proposed method outperforms well-known logic synthesis heuristics with the same sequence length and action space.
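The abstract describes the approach only at a high level. Below is a minimal, illustrative sketch of how such an MDP could be set up: the state is a graph-level embedding of the current AIG computed by a small graph convolutional policy network, actions are logic-minimization operations, and a REINFORCE-style update trains the policy over fixed-length synthesis sequences. The action names, the random stand-in for AIG features, the placeholder reward, and the network sizes are all assumptions for illustration and are not taken from the paper.

```python
# Illustrative sketch (not the authors' code) of an RL + GCN loop for logic synthesis.
# State: node features and normalized adjacency of the current AIG (random stand-in here).
# Action: one logic-minimization operation from an assumed action set.
# Reward: placeholder; in practice it would reflect the AIG node-count reduction.
import torch
import torch.nn as nn

ACTIONS = ["balance", "rewrite", "refactor", "rewrite -z", "refactor -z"]  # assumed action space

class SimpleGCNPolicy(nn.Module):
    """Two graph-convolution layers, mean-pool readout, and an action head."""
    def __init__(self, in_dim=4, hid_dim=32, n_actions=len(ACTIONS)):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, hid_dim)
        self.head = nn.Linear(hid_dim, n_actions)

    def forward(self, x, adj):
        # x: (N, in_dim) node features; adj: (N, N) row-normalized adjacency
        h = torch.relu(self.lin1(adj @ x))   # first graph convolution
        h = torch.relu(self.lin2(adj @ h))   # second graph convolution
        g = h.mean(dim=0)                    # graph-level readout
        return torch.log_softmax(self.head(g), dim=-1)

def random_aig_state(n_nodes=50, in_dim=4):
    """Stand-in for featurizing an AIG: random node features, normalized adjacency."""
    x = torch.rand(n_nodes, in_dim)
    adj = (torch.rand(n_nodes, n_nodes) < 0.05).float() + torch.eye(n_nodes)  # self-loops
    return x, adj / adj.sum(dim=1, keepdim=True)  # row-normalize

def placeholder_reward(action_idx):
    """Placeholder; a real reward would measure the node-count reduction after the operation."""
    return torch.rand(1).item() - 0.4

policy = SimpleGCNPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
for episode in range(3):
    log_probs, rewards = [], []
    for step in range(10):                    # fixed sequence length (MDP horizon)
        x, adj = random_aig_state()
        logp = policy(x, adj)
        a = torch.multinomial(logp.exp(), 1).item()
        log_probs.append(logp[a])
        rewards.append(placeholder_reward(a))
    ret = sum(rewards)
    loss = -torch.stack(log_probs).sum() * ret  # REINFORCE with episodic return
    opt.zero_grad(); loss.backward(); opt.step()
    print(f"episode {episode}: return = {ret:.3f}")
```

In a real setting the random featurizer and placeholder reward would be replaced by features extracted from the actual AIG and by the area or node-count improvement achieved after applying the chosen operation.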
Pages: 145 - 150
Number of pages: 6
Related papers
50 in total
  • [1] Reinforcement Learning based Recommendation with Graph Convolutional Q-network
    Lei, Yu
    Pei, Hongbin
    Yan, Hanqi
    Li, Wenjie
    PROCEEDINGS OF THE 43RD INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL (SIGIR '20), 2020, : 1757 - 1760
  • [2] GoodFloorplan: Graph Convolutional Network and Reinforcement Learning-Based Floorplanning
    Xu, Qi
    Geng, Hao
    Chen, Song
    Yuan, Bo
    Zhuo, Cheng
    Kang, Yi
    Wen, Xiaoqing
    IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2022, 41 (10) : 3492 - 3502
  • [3] GPM: A graph convolutional network based reinforcement learning framework for portfolio management
    Shi, Si
    Li, Jianjun
    Li, Guohui
    Pan, Peng
    Chen, Qi
    Sun, Qing
    NEUROCOMPUTING, 2022, 498 : 14 - 27
  • [4] Robust graph learning with graph convolutional network
    Wan, Yingying
    Yuan, Changan
    Zhan, Mengmeng
    Chen, Long
    INFORMATION PROCESSING & MANAGEMENT, 2022, 59 (03)
  • [5] Exploring Network Optimizations for Large-Scale Graph Analytics
    Que, Xinyu
    Checconi, Fabio
    Petrini, Fabrizio
    Liu, Xing
    Buono, Daniele
    PROCEEDINGS OF SC15: THE INTERNATIONAL CONFERENCE FOR HIGH PERFORMANCE COMPUTING, NETWORKING, STORAGE AND ANALYSIS, 2015,
  • [6] Temporal graph convolutional network for multi-agent reinforcement learning of action detection
    Wang, Liangliang
    Liu, Jiayao
    Wang, Ke
    Ge, Lianzheng
    Liang, Peidong
    APPLIED SOFT COMPUTING, 2024, 163
  • [7] Production Scheduling based on Deep Reinforcement Learning using Graph Convolutional Neural Network
    Seito, Takanari
    Munakata, Satoshi
    ICAART: PROCEEDINGS OF THE 12TH INTERNATIONAL CONFERENCE ON AGENTS AND ARTIFICIAL INTELLIGENCE, VOL 2, 2020, : 766 - 772
  • [8] Automatic Virtual Network Embedding: A Deep Reinforcement Learning Approach With Graph Convolutional Networks
    Yan, Zhongxia
    Ge, Jingguo
    Wu, Yulei
    Li, Liangxiong
    Li, Tong
    IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, 2020, 38 (06) : 1040 - 1057
  • [9] Graph Convolutional Reinforcement Learning for Collaborative Queuing Agents
    Fawaz, Hassan
    Lesca, Julien
    Quang, Pham Tran Anh
    Leguay, Jeremie
    Zeghlache, Djamal
    Medagliani, Paolo
    IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, 2023, 20 (02): : 1363 - 1377
  • [10] Dynamic graph convolutional network for long-term traffic flow prediction with reinforcement learning
    Peng, Hao
    Du, Bowen
    Liu, Mingsheng
    Liu, Mingzhe
    Ji, Shumei
    Wang, Senzhang
    Zhang, Xu
    He, Lifang
    INFORMATION SCIENCES, 2021, 578 : 401 - 416