SimLL: Similarity-Based Logic Locking Against Machine Learning Attacks

Cited by: 1
Authors
Chowdhury, Subhajit Dutta [1 ]
Yang, Kaixin [1 ]
Nuzzo, Pierluigi [1 ]
Affiliations
[1] Univ Southern Calif, Ming Hsieh Dept Elect & Comp Engn, Los Angeles, CA 90007 USA
Source
2023 60TH ACM/IEEE DESIGN AUTOMATION CONFERENCE, DAC | 2023
Keywords
Topological similarity; graph neural networks; machine learning; link prediction; hardware security
DOI
10.1109/DAC56929.2023.10247822
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Logic locking is a promising technique for protecting integrated circuit designs while outsourcing their fabrication. Recently, graph neural network (GNN)-based link prediction attacks have been developed that can successfully break all the multiplexer-based locking techniques previously expected to be learning-resilient. We present SimLL, a novel similarity-based locking technique that locks a design with multiplexers and is robust against existing structure-exploiting, oracle-less, learning-based attacks. To confuse the machine learning (ML) models, SimLL inserts key-controlled multiplexers between logic gates or wires that exhibit high topological and functional similarity. Empirical results show that SimLL degrades the accuracy of existing ML-based attacks to approximately 50%, leaving them with a negligible advantage over random guessing.
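For illustration only (this is not the paper's implementation), the following minimal Python sketch conveys the core idea the abstract describes: pick pairs of gates whose netlist neighborhoods look alike and route their outputs through a key-controlled multiplexer, so that a structure-based link predictor sees two equally plausible connections. It assumes a networkx graph as the netlist; the Jaccard neighborhood overlap used here is only a stand-in for SimLL's topological and functional similarity metrics, and all names (pick_similar_pairs, insert_key_mux, etc.) are hypothetical.

import itertools
import networkx as nx

def jaccard_similarity(g, a, b):
    # Topological-similarity proxy: overlap of the two gates' neighborhoods.
    na, nb = set(g.neighbors(a)), set(g.neighbors(b))
    union = na | nb
    return len(na & nb) / len(union) if union else 0.0

def pick_similar_pairs(g, threshold):
    # Candidate gate pairs whose neighborhoods look alike; a key-controlled
    # MUX will later choose between their outputs.
    return [(a, b) for a, b in itertools.combinations(g.nodes, 2)
            if jaccard_similarity(g, a, b) >= threshold]

def insert_key_mux(netlist, true_wire, decoy_wire, key_name):
    # Redirect the fan-out of `true_wire` through a MUX that selects between
    # the true wire and a topologically similar decoy, driven by one key bit.
    mux = f"mux_{true_wire}_{decoy_wire}"
    netlist.add_node(mux, type="MUX", key=key_name)
    for sink in list(netlist.successors(true_wire)):
        netlist.remove_edge(true_wire, sink)
        netlist.add_edge(mux, sink)
    netlist.add_edge(true_wire, mux)
    netlist.add_edge(decoy_wire, mux)

# Toy netlist: gates as nodes, wires as directed edges.
netlist = nx.DiGraph([("g1", "g3"), ("g2", "g3"), ("g1", "g4"),
                      ("g2", "g4"), ("g3", "g5"), ("g4", "g5")])
pairs = pick_similar_pairs(netlist.to_undirected(), threshold=0.9)
for i, (a, b) in enumerate(pairs):
    insert_key_mux(netlist, a, b, key_name=f"k{i}")
print(sorted(n for n, d in netlist.nodes(data=True) if d.get("type") == "MUX"))

With a similarity threshold near 1.0, only near-interchangeable gates become decoys, which is what makes the two MUX inputs hard for a GNN link predictor to tell apart.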
Pages: 6
Related Papers
50 records in total
  • [21] Addressing Adversarial Attacks Against Security Systems Based on Machine Learning
    Apruzzese, Giovanni
    Colajanni, Michele
    Ferretti, Luca
    Marchetti, Mirco
    2019 11TH INTERNATIONAL CONFERENCE ON CYBER CONFLICT (CYCON): SILENT BATTLE, 2019, : 383 - 400
  • [22] Similarity-Based Methods and Machine Learning Approaches for Target Prediction in Early Drug Discovery: Performance and Scope
    Mathai, Neann
    Kirchmair, Johannes
    INTERNATIONAL JOURNAL OF MOLECULAR SCIENCES, 2020, 21 (10)
  • [23] Conventional and machine learning approaches as countermeasures against hardware trojan attacks
    Liakos, Konstantinos G.
    Georgakilas, Georgios K.
    Moustakidis, Serafeim
    Sklavos, Nicolas
    Plessas, Fotis C.
    MICROPROCESSORS AND MICROSYSTEMS, 2020, 79
  • [24] CT PUF: Configurable Tristate PUF against Machine Learning Attacks
    Wu, Qiang
    Zhang, Jiliang
    2020 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS), 2020,
  • [25] Effectiveness of machine learning based android malware detectors against adversarial attacks
    Jyothish, A.
    Mathew, Ashik
    Vinod, P.
    CLUSTER COMPUTING-THE JOURNAL OF NETWORKS SOFTWARE TOOLS AND APPLICATIONS, 2024, 27 (03): 2549 - 2569
  • [26] Set-Based Obfuscation for Strong PUFs Against Machine Learning Attacks
    Zhang, Jiliang
    Shen, Chaoqun
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I-REGULAR PAPERS, 2021, 68 (01) : 288 - 300
  • [27] Data poisoning attacks against machine learning algorithms
    Yerlikaya, Fahri Anil
    Bahtiyar, Serif
    EXPERT SYSTEMS WITH APPLICATIONS, 2022, 208
  • [28] Oracle-based Logic Locking Attacks: Protect the Oracle Not Only the Netlist
    Kalligeros, Emmanouil
    Karousos, Nikolaos
    Karybali, Irene G.
    PROCEEDINGS OF THE 2020 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION (DATE 2020), 2020, : 939 - 944
  • [29] Is image-based CAPTCHA secure against attacks based on machine learning? An experimental study
    Alqahtani, Fatmah H.
    Alsulaiman, Fawaz A.
    COMPUTERS & SECURITY, 2020, 88
  • [30] NES-TL: Network Embedding Similarity-Based Transfer Learning
    Fu, Chenbo
    Zheng, Yongli
    Liu, Yi
    Xuan, Qi
    Chen, Guanrong
    IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING, 2020, 7 (03): 1607 - 1618