Structured pruning of neural networks for constraints learning

Cited by: 1
Authors
Cacciola, Matteo [1 ]
Frangioni, Antonio [2 ]
Lodi, Andrea [3 ,4 ]
Affiliations
[1] Polytech Montreal, CERC, Montreal, PQ, Canada
[2] Univ Pisa, Pisa, Italy
[3] Cornell Tech, New York, NY 10044 USA
[4] Technion IIT, New York, NY 10011 USA
Funding
Natural Sciences and Engineering Research Council of Canada (NSERC)
Keywords
Artificial neural networks; Mixed integer programming; Model compression; Pruning; Analytics
DOI
10.1016/j.orl.2024.107194
Chinese Library Classification
C93 [Management]; O22 [Operations Research]
Discipline Codes
070105; 12; 1201; 1202; 120202
Abstract
In recent years, the integration of Machine Learning (ML) models with Operations Research (OR) tools has gained popularity in applications such as cancer treatment, algorithmic configuration, and chemical process optimization. This integration often uses Mixed Integer Programming (MIP) formulations to represent the chosen ML model, which is often an Artificial Neural Network (ANN) given the widespread use of ANNs. However, ANNs frequently contain a large number of parameters, resulting in MIP formulations that are impractical to solve. In this paper we showcase the effectiveness of ANN pruning when applied to models prior to their integration into MIPs. We discuss why pruning is more suitable in this context than other ML compression techniques, and we highlight the potential of appropriate pruning strategies via experiments on MIPs used to construct adversarial examples for ANNs. Our results demonstrate that pruning offers remarkable reductions in solution times without hindering the quality of the final decision, enabling the resolution of previously unsolvable instances.
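A rough sketch of the pipeline the abstract describes (prune the network first, then embed it into a MIP), using PyTorch's structured pruning utilities on a toy ReLU network; the architecture, the 50% pruning ratio, and the layer sizes are illustrative assumptions, not the paper's experimental setup. The relevant mechanism is that each hidden ReLU neuron contributes one binary variable to a standard big-M MIP encoding, so removing whole neurons shrinks the MIP directly, whereas unstructured (weight-level) sparsity leaves the variable count unchanged.

import torch.nn as nn
import torch.nn.utils.prune as prune

# A small fully connected ReLU network of the kind typically embedded in MIPs
# (sizes are arbitrary, chosen only for illustration).
net = nn.Sequential(
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)

# Structured L2 pruning: zero out 50% of the rows (output neurons) of each
# hidden layer. Pruning whole neurons, rather than scattered weights, is what
# matters here, because a big-M formulation spends one binary variable per
# ReLU neuron.
for module in net:
    if isinstance(module, nn.Linear) and module.out_features > 10:
        prune.ln_structured(module, name="weight", amount=0.5, n=2, dim=0)
        prune.remove(module, "weight")  # bake the zeroed rows into the weights

# Count surviving hidden neurons (rows that are not entirely zero); this is
# the number of ReLU binary variables the downstream MIP would need.
n_binaries = sum(
    int((m.weight.abs().sum(dim=1) > 0).sum())
    for m in net
    if isinstance(m, nn.Linear) and m.out_features > 10
)
print(f"ReLU binary variables after pruning: {n_binaries}")  # 128 instead of 256

In a full pipeline the zeroed rows, together with the matching input columns of the next layer, would then be deleted outright, so the big-M formulation of the pruned network carries proportionally fewer binary variables and constraints; this is the mechanism behind the solution-time reductions the abstract reports.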
Pages: 7
Related Papers
50 records in total
  • [21] Li, Ziqi; Song, Zichen. Structured Pruning Strategy Based on Interpretable Machine Learning. 2024 5th International Conference on Computer Engineering and Application (ICCEA 2024), 2024: 801-804.
  • [22] Shahhosseini, Sina; Albaqsami, Ahmad; Jasemi, Masoomeh; Bagherzadeh, Nader. Partition Pruning: Parallelization-Aware Pruning for Dense Neural Networks. 2020 28th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP 2020), 2020: 307-311.
  • [23] Poyatos, Javier; Molina, Daniel; Martinez-Seras, Aitor; Del Ser, Javier; Herrera, Francisco. Multiobjective evolutionary pruning of Deep Neural Networks with Transfer Learning for improving their performance and robustness. Applied Soft Computing, 2023, 147.
  • [24] Hirsch, Lior; Katz, Gilad. Multi-objective pruning of dense neural networks using deep reinforcement learning. Information Sciences, 2022, 610: 381-400.
  • [25] Averbeck, Bruno B. Pruning recurrent neural networks replicates adolescent changes in working memory and reinforcement learning. Proceedings of the National Academy of Sciences of the United States of America, 2022, 119 (22).
  • [26] Chang, Jingfei; Lu, Yang; Xue, Ping; Xu, Yiqun; Wei, Zhen. Iterative clustering pruning for convolutional neural networks. Knowledge-Based Systems, 2023, 265.
  • [27] Ko, Vinnie; Oehmcke, Stefan; Gieseke, Fabian. Magnitude and Uncertainty Pruning Criterion for Neural Networks. 2019 IEEE International Conference on Big Data (Big Data), 2019: 2317-2326.
  • [28] Jeczmionek, Ernest; Kowalski, Piotr A. Flattening Layer Pruning in Convolutional Neural Networks. Symmetry-Basel, 2021, 13 (07).
  • [29] Aires Jonker, Richard Adolph; Poudel, Roshan; Fajarda, Olga; Oliveira, Jose Luis; Lopes, Rui Pedro; Matos, Sergio. DyPrune: Dynamic Pruning Rates for Neural Networks. Progress in Artificial Intelligence, EPIA 2023, Part I, 2023, 14115: 146-157.
  • [30] Shi, Yong; Tang, Anda; Niu, Lingfeng; Zhou, Ruizhi. Sparse optimization guided pruning for neural networks. Neurocomputing, 2024, 574.