PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning

Cited by: 699
Authors
Mallya, Arun [1 ]
Lazebnik, Svetlana [1 ]
Affiliation
[1] University of Illinois, Urbana, IL 61801, USA
Source
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) | 2018
Funding
U.S. National Science Foundation
DOI
10.1109/CVPR.2018.00810
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
This paper presents a method for adding multiple tasks to a single deep neural network while avoiding catastrophic forgetting. Inspired by network pruning techniques, we exploit redundancies in large deep networks to free up parameters that can then be employed to learn new tasks. By performing iterative pruning and network re-training, we are able to sequentially "pack" multiple tasks into a single network while ensuring minimal drop in performance and minimal storage overhead. Unlike prior work that uses proxy losses to maintain accuracy on older tasks, we always optimize for the task at hand. We perform extensive experiments on a variety of network architectures and large-scale datasets, and observe much better robustness against catastrophic forgetting than prior work. In particular, we are able to add three fine-grained classification tasks to a single ImageNet-trained VGG-16 network and achieve accuracies close to those of separately trained networks for each task.
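The sketch below illustrates the iterative prune-and-retrain loop the abstract describes, on a toy PyTorch weight tensor: train a task on the still-free weights, release the lowest-magnitude fraction of them by pruning, re-train the survivors, then freeze them and hand the freed weights to the next task. The helper names, hyperparameters, and toy regression objective (train_task, PRUNE_FRAC, the 64x64 layer) are illustrative assumptions, not the authors' released code.

```python
import torch

torch.manual_seed(0)
PRUNE_FRAC = 0.5  # fraction of still-free weights released after each task (assumed value)

# One shared weight tensor standing in for a layer of the packed network.
w = torch.randn(64, 64, requires_grad=True)
free = torch.ones_like(w, dtype=torch.bool)  # weights not yet claimed by any task
task_masks = []                              # per-task ownership masks

def train_task(weight, trainable, steps=100, lr=0.1):
    """Toy regression loop; the update is masked so weights owned by
    earlier tasks stay frozen while the current task trains."""
    x, y = torch.randn(256, 64), torch.randn(256, 64)
    for _ in range(steps):
        loss = ((x @ weight - y) ** 2).mean()
        loss.backward()
        with torch.no_grad():
            weight -= lr * weight.grad * trainable
            weight.grad.zero_()

for task in range(3):
    # 1. Train the current task using only the still-free weights.
    train_task(w, free.float())
    # 2. Prune: release the lowest-magnitude fraction of the free weights.
    vals = w.detach().abs()[free]
    k = max(1, int(PRUNE_FRAC * vals.numel()))
    thresh = vals.kthvalue(k).values
    kept = free & (w.detach().abs() > thresh)
    with torch.no_grad():
        w[free & ~kept] = 0.0  # freed weights are zeroed for future tasks
    # 3. Re-train the surviving weights to recover the current task's accuracy.
    train_task(w, kept.float())
    # 4. Freeze this task's weights; what remains free goes to the next task.
    task_masks.append(kept)
    free &= ~kept
    print(f"task {task}: owns {int(kept.sum())} weights, {int(free.sum())} still free")

# Inference for task t would activate only weights owned by tasks 0..t:
#   active = torch.stack(task_masks[: t + 1]).any(dim=0)
```

Because each task's weights are frozen after pruning, later tasks cannot overwrite them, which is how the method avoids catastrophic forgetting; the only per-task storage overhead is the binary ownership mask.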
Pages: 7765-7773
Page count: 9