Structured pruning of neural networks for constraints learning

Cited by: 1
Authors
Cacciola, Matteo [1 ]
Frangioni, Antonio [2 ]
Lodi, Andrea [3 ,4 ]
Affiliations
[1] Polytech Montreal, CERC, Montreal, PQ, Canada
[2] Univ Pisa, Pisa, Italy
[3] Cornell Tech, New York, NY 10044 USA
[4] Technion IIT, New York, NY 10011 USA
Funding
Natural Sciences and Engineering Research Council of Canada (NSERC)
Keywords
Artificial neural networks; Mixed integer programming; Model compression; Pruning; Analytics
DOI
10.1016/j.orl.2024.107194
Chinese Library Classification
C93 [Management]; O22 [Operations Research]
Subject Classification Codes
070105; 12; 1201; 1202; 120202
Abstract
In recent years, the integration of Machine Learning (ML) models with Operations Research (OR) tools has gained popularity in applications such as cancer treatment, algorithm configuration, and chemical process optimization. This integration often uses Mixed Integer Programming (MIP) formulations to represent the chosen ML model, which is often an Artificial Neural Network (ANN) due to their widespread use. However, ANNs frequently contain a large number of parameters, resulting in MIP formulations that are impractical to solve. In this paper we showcase the effectiveness of ANN pruning when applied to models prior to their integration into MIPs. We discuss why pruning is better suited to this context than other ML compression techniques, and we highlight the potential of appropriate pruning strategies via experiments on MIPs used to construct adversarial examples for ANNs. Our results demonstrate that pruning yields remarkable reductions in solution times without degrading the quality of the final decision, enabling the resolution of previously unsolvable instances.
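To make the pipeline the abstract describes concrete, below is a minimal sketch, not the authors' implementation: it structurally prunes a toy ReLU network by removing whole hidden neurons (here by smallest incoming-weight norm, one common criterion), then embeds the pruned network in a MIP via the standard big-M formulation of ReLU units to search for an adversarial example. The network sizes, weights, the perturbation budget eps, and the big-M constant are all illustrative assumptions; PuLP with its bundled CBC solver stands in for whatever MIP solver one prefers.

```python
# Minimal illustrative sketch (not the paper's code) of the pipeline in the
# abstract: (1) structured pruning of a toy ReLU network, then (2) a big-M
# MIP encoding of the pruned network to search for an adversarial example.
# The network, the row-norm pruning criterion, eps and M are assumptions.
# Requires: numpy and PuLP (which bundles the CBC solver).
import numpy as np
import pulp

rng = np.random.default_rng(0)

# Toy network: 4 inputs -> 8 hidden ReLU units -> 1 linear output.
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
w2, b2 = rng.normal(size=8), float(rng.normal())

# (1) Structured pruning: keep the 4 hidden neurons whose incoming-weight
# rows have the largest L2 norm. Each removed neuron eliminates one binary
# variable and one ReLU disjunction from the MIP built below.
keep = np.argsort(-np.linalg.norm(W1, axis=1))[:4]
W1p, b1p, w2p = W1[keep], b1[keep], w2[keep]

# (2) Big-M MIP: maximize the output shift over an eps-ball around x0.
x0 = rng.uniform(0.2, 0.8, size=4)   # reference input
eps, M = 0.1, 100.0                  # budget and (assumed valid) big-M bound

prob = pulp.LpProblem("adversarial_example", pulp.LpMaximize)
x = [pulp.LpVariable(f"x{i}", float(x0[i] - eps), float(x0[i] + eps))
     for i in range(4)]
h = [pulp.LpVariable(f"h{j}", lowBound=0) for j in range(len(keep))]
z = [pulp.LpVariable(f"z{j}", cat="Binary") for j in range(len(keep))]

for j in range(len(keep)):
    pre = pulp.lpSum(float(W1p[j, i]) * x[i] for i in range(4)) + float(b1p[j])
    prob += h[j] >= pre                   # h_j >= pre-activation
    prob += h[j] <= pre + M * (1 - z[j])  # h_j = pre-activation when z_j = 1
    prob += h[j] <= M * z[j]              # h_j = 0 when z_j = 0

# Objective: deviation of the pruned network's output from its value at x0.
y_ref = float(w2p @ np.maximum(W1p @ x0 + b1p, 0) + b2)
prob += pulp.lpSum(float(w2p[j]) * h[j] for j in range(len(keep))) + b2 - y_ref

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("adversarial input:", [v.value() for v in x])
print("output shift     :", pulp.value(prob.objective))
```

The effect of step (1) is visible directly in step (2): every pruned neuron removes one binary variable and one ReLU disjunction from the formulation, which is the mechanism behind the solution-time reductions the abstract reports.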
Pages: 7
Related Papers
50 items in total
  • [31] Pruning Artificial Neural Networks Using Neural Complexity Measures
    Jorgensen, Thomas D.
    Haynes, Barry P.
    Norlund, Charlotte C. F.
    INTERNATIONAL JOURNAL OF NEURAL SYSTEMS, 2008, 18 (05) : 389 - 403
  • [32] Acceleration of Deep Convolutional Neural Networks Using Adaptive Filter Pruning
    Singh, Pravendra
    Verma, Vinay Kumar
    Rai, Piyush
    Namboodiri, Vinay P.
    IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, 2020, 14 (04) : 838 - 847
  • [33] Flattening Layer Pruning in Convolutional Neural Networks
    Jeczmionek, Ernest
    Kowalski, Piotr A.
    SYMMETRY-BASEL, 2021, 13 (07)
  • [34] Iterative clustering pruning for convolutional neural networks
    Chang, Jingfei
    Lu, Yang
    Xue, Ping
    Xu, Yiqun
    Wei, Zhen
    KNOWLEDGE-BASED SYSTEMS, 2023, 265
  • [35] Magnitude and Uncertainty Pruning Criterion for Neural Networks
    Ko, Vinnie
    Oehmcke, Stefan
    Gieseke, Fabian
    2019 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2019, : 2317 - 2326
  • [36] SFP: Similarity-based filter pruning for deep neural networks
    Li, Guoqing
    Li, Rengang
    Li, Tuo
    Shen, Chaoyao
    Zou, Xiaofeng
    Wang, Jiuyang
    Wang, Changhong
    Li, Nanjun
    INFORMATION SCIENCES, 2025, 689
  • [37] Learning Optimized Structure of Neural Networks by Hidden Node Pruning With L1 Regularization
    Xie, Xuetao
    Zhang, Huaqing
    Wang, Junze
    Chang, Qin
    Wang, Jian
    Pal, Nikhil R.
    IEEE TRANSACTIONS ON CYBERNETICS, 2020, 50 (03) : 1333 - 1346
  • [38] Flexible Group-Level Pruning of Deep Neural Networks for On-Device Machine Learning
    Lee, Kwangbae
    Kim, Hoseung
    Lee, Hayun
    Shin, Dongkun
    PROCEEDINGS OF THE 2020 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION (DATE 2020), 2020, : 79 - 84
  • [39] Sensitivity-Informed Provable Pruning of Neural Networks
    Baykal, Cenk
    Liebenwein, Lucas
    Gilitschenski, Igor
    Feldman, Dan
    Rus, Daniela
    SIAM JOURNAL ON MATHEMATICS OF DATA SCIENCE, 2022, 4 (01): : 26 - 45
  • [40] Pruning feature maps for efficient convolutional neural networks
    Guo, Xiao-ting
    Xie, Xin-shu
    Lang, Xun
    OPTIK, 2023, 281