Structured pruning of neural networks for constraints learning

Cited by: 1
Authors
Cacciola, Matteo [1 ]
Frangioni, Antonio [2 ]
Lodi, Andrea [3 ,4 ]
Affiliations
[1] Polytech Montreal, CERC, Montreal, PQ, Canada
[2] Univ Pisa, Pisa, Italy
[3] Cornell Tech, New York, NY 10044 USA
[4] Technion IIT, New York, NY 10011 USA
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
Artificial neural networks; Mixed integer programming; Model compression; Pruning; ANALYTICS;
DOI
10.1016/j.orl.2024.107194
Chinese Library Classification (CLC)
C93 [Management]; O22 [Operations Research];
Discipline classification codes
070105 ; 12 ; 1201 ; 1202 ; 120202 ;
Abstract
In recent years, the integration of Machine Learning (ML) models with Operations Research (OR) tools has gained popularity in applications such as cancer treatment, algorithm configuration, and chemical process optimization. This integration typically uses Mixed Integer Programming (MIP) formulations to represent the chosen ML model, which is often an Artificial Neural Network (ANN) due to their widespread use. However, ANNs frequently contain a large number of parameters, resulting in MIP formulations that are impractical to solve. In this paper we showcase the effectiveness of ANN pruning when applied to models prior to their integration into MIPs. We discuss why pruning is more suitable in this context than other ML compression techniques, and we highlight the potential of appropriate pruning strategies via experiments on MIPs used to construct adversarial examples for ANNs. Our results demonstrate that pruning offers remarkable reductions in solution times without hindering the quality of the final decision, enabling the resolution of previously unsolvable instances.
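The core idea is that every hidden ReLU neuron of the embedded ANN typically contributes one binary variable (and a handful of big-M constraints) to the MIP, so structured pruning that removes whole neurons shrinks the formulation directly. The sketch below is purely illustrative and not the authors' code: the network sizes, the random weights, the magnitude-based pruning criterion, and the per-neuron constraint count are assumptions made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny fully connected ReLU network 10 -> 64 -> 32 -> 2; random weights stand
# in for a trained model in this illustration.
sizes = [10, 64, 32, 2]
weights = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [rng.standard_normal(m) for m in sizes[1:]]

def mip_size(weights):
    """Rough size of a standard big-M MIP encoding of a ReLU network:
    one binary variable and about four linear constraints per hidden unit
    (an assumption used here only to count formulation size)."""
    hidden_units = sum(W.shape[0] for W in weights[:-1])
    return hidden_units, 4 * hidden_units

def prune_structured(weights, biases, keep_ratio=0.5):
    """Magnitude-based structured pruning: in each hidden layer keep the
    neurons (rows) with the largest L2 norm, drop the rest, and remove the
    corresponding input columns of the following layer."""
    W, b = [w.copy() for w in weights], [v.copy() for v in biases]
    for l in range(len(W) - 1):          # never prune the output layer
        norms = np.linalg.norm(W[l], axis=1)
        k = max(1, int(keep_ratio * len(norms)))
        keep = np.sort(np.argsort(norms)[-k:])
        W[l], b[l] = W[l][keep], b[l][keep]
        W[l + 1] = W[l + 1][:, keep]     # drop the matching inputs downstream
    return W, b

print("before pruning: %d binaries, ~%d big-M constraints" % mip_size(weights))
pruned_W, pruned_b = prune_structured(weights, biases, keep_ratio=0.5)
print("after pruning:  %d binaries, ~%d big-M constraints" % mip_size(pruned_W))
```

Under these assumptions, halving the neurons in each hidden layer halves the binary variables the adversarial-example MIP would need, which mirrors the mechanism behind the solution-time reductions the abstract reports.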
Pages: 7
Related papers
50 records in total
  • [31] Optimal pruning in neural networks
    Barbato, DML
    Kinouchi, O
    PHYSICAL REVIEW E, 2000, 62 (06): : 8387 - 8394
  • [32] ON THE ROLE OF STRUCTURED PRUNING FOR NEURAL NETWORK COMPRESSION
    Bragagnolo, Andrea
    Tartaglione, Enzo
    Fiandrotti, Attilio
    Grangetto, Marco
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 3527 - 3531
  • [33] LEARNING IN NEURAL NETWORKS WITH PARTIALLY STRUCTURED SYNAPTIC TRANSITIONS
    BATTAGLIA, FP
    FUSI, S
    NETWORK-COMPUTATION IN NEURAL SYSTEMS, 1995, 6 (02) : 261 - 270
  • [34] Learning Structured Inference Neural Networks with Label Relations
    Hu, Hexiang
    Zhou, Guang-Tong
    Deng, Zhiwei
    Liao, Zicheng
    Mori, Greg
    2016 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2016, : 2960 - 2968
  • [35] Learning Structured Weight Uncertainty in Bayesian Neural Networks
    Sun, Shengyang
    Chen, Changyou
    Carin, Lawrence
    ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 54, 2017, 54 : 1283 - 1292
  • [37] Efficient and sparse neural networks by pruning weights in a multiobjective learning approach
    Reiners, Malena
    Klamroth, Kathrin
    Heldmann, Fabian
    Stiglmayr, Michael
    COMPUTERS & OPERATIONS RESEARCH, 2022, 141
  • [38] QLP: Deep Q-Learning for Pruning Deep Neural Networks
    Camci, Efe
    Gupta, Manas
    Wu, Min
    Lin, Jie
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2022, 32 (10) : 6488 - 6501
  • [39] Filter Pruning for Efficient Transfer Learning in Deep Convolutional Neural Networks
    Reinhold, Caique
    Roisenberg, Mauro
    ARTIFICIAL INTELLIGENCE AND SOFT COMPUTING, PT I, 2019, 11508 : 191 - 202
  • [40] Learning Filter Pruning Criteria for Deep Convolutional Neural Networks Acceleration
    He, Yang
    Ding, Yuhang
    Liu, Ping
    Zhu, Linchao
    Zhang, Hanwang
    Yang, Yi
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, : 2006 - 2015