Convolutional Analysis Operator Learning: Acceleration and Convergence

Cited by: 35
Authors
Chun, Il Yong [1 ,2 ]
Fessler, Jeffrey A. [1 ]
Affiliations
[1] Univ Michigan, Dept Elect Engn & Comp Sci, Ann Arbor, MI 48109 USA
[2] Univ Hawaii Manoa, Dept Elect Engn, Honolulu, HI 96822 USA
Keywords
Convolution; Training; Kernel; Convolutional codes; Computed tomography; Convergence; Image reconstruction; Convolutional regularizer learning; convolutional dictionary learning; convolutional neural networks; unsupervised machine learning algorithms; nonconvex-nonsmooth optimization; block coordinate descent; inverse problems; X-ray computed tomography; COORDINATE DESCENT METHOD; IMAGE-RECONSTRUCTION; SPARSE; OPTIMIZATION; ALGORITHM; DICTIONARIES;
DOI
10.1109/TIP.2019.2937734
CLC number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Convolutional operator learning is gaining attention in many signal processing and computer vision applications. Learning kernels has mostly relied on so-called patch-domain approaches that extract and store many overlapping patches across training signals. Due to memory demands, patch-domain methods have limitations when learning kernels from large datasets - particularly with multi-layered structures, e.g., convolutional neural networks - or when applying the learned kernels to high-dimensional signal recovery problems. The so-called convolution approach does not store many overlapping patches, and thus overcomes these memory problems, particularly with careful algorithmic designs; it has been studied within the "synthesis" signal model, e.g., convolutional dictionary learning. This paper proposes a new convolutional analysis operator learning (CAOL) framework that learns an analysis sparsifying regularizer from the convolution perspective, and develops a new convergent Block Proximal Extrapolated Gradient method using a Majorizer (BPEG-M) to solve the corresponding block multi-nonconvex problems. To learn diverse filters within the CAOL framework, this paper introduces an orthogonality constraint that enforces a tight-frame filter condition, and a regularizer that promotes diversity between filters. Numerical experiments show that, with sharp majorizers, BPEG-M significantly accelerates the CAOL convergence rate compared to the state-of-the-art block proximal gradient (BPG) method. Numerical experiments for sparse-view computed tomography show that a convolutional sparsifying regularizer learned via CAOL significantly improves reconstruction quality compared to a conventional edge-preserving regularizer. Using more and wider kernels in a learned regularizer better preserves edges in reconstructed images.
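The alternation the abstract describes can be illustrated with a small toy sketch: a hard-thresholding sparse-code step, then a majorized gradient step on the filters followed by projection onto the tight-frame constraint set {D : D Dᵀ = (1/R) I}. This is a hypothetical 1-D simplification, not the paper's code; all variable names (`Xw`, `proj_tight_frame`, `alpha`) are illustrative, and the filtering is written as a sliding-window matrix product rather than an explicit convolution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative, not the paper's implementation): learn K filters
# of length R from a single 1-D training signal x.
R, K, alpha = 8, 8, 0.1
x = rng.standard_normal(512)

# Sliding windows turn filtering into a matrix product: row n of Xw @ D.T
# holds all K filter responses at position n.
Xw = np.lib.stride_tricks.sliding_window_view(x, R)     # shape (N-R+1, R)
# A sharp (here exact) majorizer: the Lipschitz constant of the filter
# gradient is the largest eigenvalue of Xw^T Xw.
L = np.linalg.eigvalsh(Xw.T @ Xw).max()

def proj_tight_frame(D):
    """Project D onto {D : D D^T = (1/R) I} via its SVD (polar factor)."""
    U, _, Vt = np.linalg.svd(D, full_matrices=False)
    return (U @ Vt) / np.sqrt(R)

D = proj_tight_frame(rng.standard_normal((K, R)))       # one filter per row

for _ in range(50):
    # Sparse-code update: hard thresholding is the exact proximal map
    # of the l0 penalty (keep z = y iff 0.5*y^2 > alpha).
    Z = Xw @ D.T
    Z[Z**2 < 2 * alpha] = 0.0
    # Filter update: one majorized gradient step on 0.5*||Xw D^T - Z||^2,
    # then projection back onto the tight-frame (orthogonality) set.
    grad = (Xw.T @ (Xw @ D.T - Z)).T
    D = proj_tight_frame(D - grad / L)

# The learned filters satisfy the tight-frame condition to machine precision.
print(np.allclose(D @ D.T, np.eye(K) / R))
```

The projection step is where the orthogonality constraint from the abstract enters: replacing it with a mere norm constraint would allow near-duplicate filters, whereas the SVD-based polar projection forces the rows of D to form a tight frame and hence stay mutually diverse.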
Pages: 2108-2122 (15 pages)
Related papers
50 total; items [21]-[30] shown
[21]   Approximation bounds for convolutional neural networks in operator learning [J].
Franco, Nicola Rares ;
Fresca, Stefania ;
Manzoni, Andrea ;
Zunino, Paolo .
NEURAL NETWORKS, 2023, 161 :129-141
[22]   Convergence Analysis of Mean Shift [J].
Yamasaki, Ryoya ;
Tanaka, Toshiyuki .
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (10) :6688-6698
[23]   Image Denoising for Low-Dose CT via Convolutional Dictionary Learning and Neural Network [J].
Yan, Rongbiao ;
Liu, Yi ;
Liu, Yuhang ;
Wang, Lei ;
Zhao, Rongge ;
Bai, Yunjiao ;
Gui, Zhiguo .
IEEE TRANSACTIONS ON COMPUTATIONAL IMAGING, 2023, 9 :83-93
[24]   Deep convolutional dictionary learning network for sparse view CT reconstruction with a group sparse prior [J].
Kang, Yanqin ;
Liu, Jin ;
Wu, Fan ;
Wang, Kun ;
Qiang, Jun ;
Hu, Dianlin ;
Zhang, Yikun .
COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE, 2024, 244
[25]   Representation Learning via Cauchy Convolutional Sparse Coding [J].
Mayo, Perla ;
Karakus, Oktay ;
Holmes, Robin ;
Achim, Alin .
IEEE ACCESS, 2021, 9 (09) :100447-100459
[26]   ONLINE CONVOLUTIONAL DICTIONARY LEARNING [J].
Liu, Jialin ;
Garcia-Cardona, Cristina ;
Wohlberg, Brendt ;
Yin, Wotao .
2017 24TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2017, :1707-1711
[27]   Deep Feature Learning for Medical Image Analysis with Convolutional Autoencoder Neural Network [J].
Chen, Min ;
Shi, Xiaobo ;
Zhang, Yin ;
Wu, Di ;
Guizani, Mohsen .
IEEE TRANSACTIONS ON BIG DATA, 2021, 7 (04) :750-758
[28]   A Comprehensive Survey on Training Acceleration for Large Machine Learning Models in IoT [J].
Wang, Haozhao ;
Qu, Zhihao ;
Zhou, Qihua ;
Zhang, Haobo ;
Luo, Boyuan ;
Xu, Wenchao ;
Guo, Song ;
Li, Ruixuan .
IEEE INTERNET OF THINGS JOURNAL, 2022, 9 (02) :939-963
[29]   OPTIMAL CONVERGENCE RATES FOR NESTEROV ACCELERATION [J].
Aujol, Jean-Francois ;
Dossal, Charles ;
Rondepierre, Aude .
SIAM JOURNAL ON OPTIMIZATION, 2019, 29 (04) :3131-3153
[30]   Acceleration of Deep Convolutional Neural Networks Using Adaptive Filter Pruning [J].
Singh, Pravendra ;
Verma, Vinay Kumar ;
Rai, Piyush ;
Namboodiri, Vinay P. .
IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, 2020, 14 (04) :838-847