Dataset Condensation with Distribution Matching

Cited by: 85
Authors
Zhao, Bo [1 ]
Bilen, Hakan [1 ]
Affiliation
[1] Univ Edinburgh, Sch Informat, Edinburgh, Midlothian, Scotland
Source
2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV) | 2023
Funding
UK Engineering and Physical Sciences Research Council (EPSRC)
Keywords
NEURAL-NETWORKS;
DOI
10.1109/WACV56688.2023.00645
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
The computational cost of training state-of-the-art deep models in many learning problems is rapidly increasing due to more sophisticated models and larger datasets. A recent promising direction for reducing training cost is dataset condensation, which aims to replace the original large training set with a significantly smaller learned synthetic set while preserving the original information. While training deep models on the small set of condensed images can be extremely fast, their synthesis remains computationally expensive due to the complex bi-level optimization and second-order derivative computation. In this work, we propose a simple yet effective method that synthesizes condensed images by matching feature distributions of the synthetic and original training images in many sampled embedding spaces. Our method significantly reduces the synthesis cost while achieving comparable or better performance. Thanks to its efficiency, we apply our method to more realistic and larger datasets with sophisticated neural architectures and obtain a significant performance boost. We also show promising practical benefits of our method in continual learning and neural architecture search.
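The core objective described in the abstract, matching feature distributions of synthetic and real images under many randomly sampled embedding networks, can be sketched as a simple first-order loop. The sketch below is illustrative only: the small ConvNet embedder, the per-class real-image iterators, and all sizes and hyperparameters are assumptions for illustration, not the authors' exact architecture or training configuration.

# Minimal sketch, assuming a PyTorch setup: match per-class mean feature
# embeddings of real and synthetic images under a freshly sampled,
# randomly initialized (untrained) embedding network at each step.
import torch
import torch.nn as nn

def make_embedder(channels=3, width=64, im_size=32, out_dim=128):
    # Hypothetical small ConvNet used only as a random feature extractor.
    return nn.Sequential(
        nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(), nn.AvgPool2d(2),
        nn.Conv2d(width, width, 3, padding=1), nn.ReLU(), nn.AvgPool2d(2),
        nn.Flatten(),
        nn.Linear(width * (im_size // 4) ** 2, out_dim),
    )

def distribution_matching_step(syn_images, syn_labels, real_iters_per_class,
                               optimizer, num_classes, device="cpu"):
    # One outer step: sample a fresh random embedder, then minimize the squared
    # distance between class-wise mean embeddings of real and synthetic images.
    embedder = make_embedder().to(device)
    for p in embedder.parameters():
        p.requires_grad_(False)  # embeddings come from a frozen random network

    loss = torch.zeros((), device=device)
    for c in range(num_classes):
        # real_iters_per_class[c] is assumed to yield batches of class-c images.
        real_batch = next(real_iters_per_class[c]).to(device)
        syn_batch = syn_images[syn_labels == c]      # learnable synthetic images
        real_mean = embedder(real_batch).mean(dim=0)
        syn_mean = embedder(syn_batch).mean(dim=0)
        loss = loss + ((real_mean - syn_mean) ** 2).sum()  # distribution-matching loss

    optimizer.zero_grad()
    loss.backward()      # gradients flow only into syn_images
    optimizer.step()
    return loss.item()

In such a sketch, syn_images would be a leaf tensor with requires_grad=True (for example initialized from noise or real samples) and optimized directly, e.g. torch.optim.SGD([syn_images], lr=...), with a new random embedder drawn at every iteration. Because no inner network training is unrolled, this avoids the bi-level optimization and second-order derivatives mentioned in the abstract.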
Pages: 6503-6512
Number of pages: 10