Structured pruning of neural networks for constraints learning

Cited by: 1
Authors
Cacciola, Matteo [1 ]
Frangioni, Antonio [2 ]
Lodi, Andrea [3 ,4 ]
Affiliations
[1] Polytechnique Montréal, CERC, Montreal, QC, Canada
[2] University of Pisa, Pisa, Italy
[3] Cornell Tech, New York, NY 10044, USA
[4] Technion - Israel Institute of Technology, New York, NY 10011, USA
Funding
Natural Sciences and Engineering Research Council of Canada (NSERC)
Keywords
Artificial neural networks; Mixed integer programming; Model compression; Pruning; Analytics
DOI
10.1016/j.orl.2024.107194
Chinese Library Classification (CLC)
C93 [Management]; O22 [Operations Research]
Subject Classification Codes
070105 ; 12 ; 1201 ; 1202 ; 120202 ;
Abstract
In recent years, the integration of Machine Learning (ML) models with Operations Research (OR) tools has gained popularity in applications such as cancer treatment, algorithm configuration, and chemical process optimization. This integration often uses Mixed Integer Programming (MIP) formulations to represent the chosen ML model, which is frequently an Artificial Neural Network (ANN), given how widely ANNs are used. However, ANNs often contain a large number of parameters, resulting in MIP formulations that are impractical to solve. In this paper we showcase the effectiveness of ANN pruning when applied to models prior to their integration into MIPs. We discuss why pruning is better suited to this context than other ML compression techniques, and we highlight the potential of appropriate pruning strategies via experiments on MIPs used to construct adversarial examples for ANNs. Our results demonstrate that pruning offers remarkable reductions in solution times without hindering the quality of the final decision, enabling the solution of previously unsolvable instances.
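To make the abstract's idea concrete, the snippet below is a minimal sketch (not the authors' code) of structured neuron pruning with PyTorch's built-in pruning utilities; the toy architecture and the 50% pruning ratio are arbitrary assumptions. The connection to the MIP side is that each ReLU neuron typically requires one binary variable and a pair of big-M constraints in the standard encoding, so removing whole neurons directly shrinks the MIP.

```python
# Minimal sketch (assumed setup, not the authors' code): structured pruning
# of a small feed-forward network before encoding it as a MIP.
import torch.nn as nn
import torch.nn.utils.prune as prune

# A toy MNIST-sized classifier; the architecture is an arbitrary assumption.
net = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

# Zero out 50% of the neurons in each hidden layer, ranked by the L2 norm
# of their outgoing weight rows (dim=0 prunes whole output units).
for layer in (net[0], net[2]):
    prune.ln_structured(layer, name="weight", amount=0.5, n=2, dim=0)
    prune.remove(layer, "weight")  # bake the zeros into the weight tensor

# A neuron whose weight row is entirely zero outputs only the constant
# ReLU(bias), so it needs no binary variable and no big-M constraints in
# the MIP encoding of the network.
kept = (net[0].weight.abs().sum(dim=1) > 0).sum().item()
print(f"hidden layer 1: {kept}/128 neurons left to encode in the MIP")
```

The structured (rather than unstructured) flavor of pruning matters here: zeroing whole neurons shrinks the MIP formulation itself, whereas scattered zero weights only sparsify its constraint matrix.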
Pages: 7
Related Papers
50 records in total
  • [41] Lee, Min Kyu; Lee, Seunghyun; Lee, Sang Hyuk; Song, Byung Cheol. Channel Pruning via Gradient of Mutual Information for Lightweight Convolutional Neural Networks. 2020 IEEE International Conference on Image Processing (ICIP), 2020: 1751-1755.
  • [42] Kamma, Koji; Wada, Toshikazu. Reconstruction Error Aware Pruning for Accelerating Neural Networks. Advances in Visual Computing (ISVC 2019), Part I, 2020, 11844: 59-72.
  • [43] Xu, Yuhui; Li, Yuxi; Zhang, Shuai; Wen, Wei; Wang, Botao; Dai, Wenrui; Qi, Yingyong; Chen, Yiran; Lin, Weiyao; Xiong, Hongkai. Trained Rank Pruning for Efficient Deep Neural Networks. Fifth Workshop on Energy Efficient Machine Learning and Cognitive Computing - NeurIPS Edition (EMC2-NIPS 2019), 2019: 14-17.
  • [44] O'Keeffe, Simon; Villing, Rudi. Evaluating Extended Pruning on Object Detection Neural Networks. 2018 29th Irish Signals and Systems Conference (ISSC), 2018.
  • [45] Duggal, Rahul; Xiao, Cao; Vuduc, Richard; Chau, Duen Horng; Sun, Jimeng. CUP: Cluster Pruning for Compressing Deep Neural Networks. 2021 IEEE International Conference on Big Data (Big Data), 2021: 5102-5106.
  • [46] Zhang, Yuan; Yuan, Yuan; Wang, Qi. ACP: Adaptive Channel Pruning for Efficient Neural Networks. 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022: 4488-4492.
  • [47] Palakonda, Vikas; Tursunboev, Jamshid; Kang, Jae-Mo; Moon, Sunghwan. Metaheuristics for Pruning Convolutional Neural Networks: A Comparative Study. Expert Systems with Applications, 2025, 268.
  • [48] Entezari, Rahim; Saukh, Olga. Class-dependent Pruning of Deep Neural Networks. 2020 IEEE Second Workshop on Machine Learning on Edge in Sensor Systems (SenSys-ML 2020), 2020: 13-18.
  • [49] Liu, Yixin; Guo, Yong; Guo, Jiaxin; Jiang, Luoqian; Chen, Jian. Conditional Automated Channel Pruning for Deep Neural Networks. IEEE Signal Processing Letters, 2021, 28: 1275-1279.
  • [50] Wang, Li; Huang, Wei; Zhang, Miao; Pan, Shirui; Chang, Xiaojun; Su, Steven Weidong. Pruning Graph Neural Networks by Evaluating Edge Properties. Knowledge-Based Systems, 2022, 256.