Markov Chain Monte Carlo Inference of Parametric Dictionaries for Sparse Bayesian Approximations

Cited: 6
Authors
Chaspari, Theodora [1 ]
Tsiartas, Andreas [2 ]
Tsilifis, Panagiotis [3 ]
Narayanan, Shrikanth S. [1 ]
Affiliations
[1] Univ So Calif, Ming Hsieh Dept Elect Engn, Signal Anal & Interpretat Lab, Los Angeles, CA 90089 USA
[2] SRI Int, 333 Ravenswood Ave, Menlo Pk, CA 94025 USA
[3] Univ So Calif, Dept Math, Dana & David Dornsife Coll Letters Arts & Sci, Los Angeles, CA 90089 USA
Funding
National Science Foundation (USA);
Keywords
Dictionary learning; parametric dictionaries; Bayesian inference; Markov chain Monte Carlo; sparse representation; uniform ergodicity; SAMPLING METHODS; SIGNAL; CONVERGENCE; HASTINGS; REPRESENTATIONS; TRANSFORM; WALLENIUS; FRAMEWORK; RATES;
DOI
10.1109/TSP.2016.2539143
CLC (Chinese Library Classification)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline Code
0808; 0809;
Abstract
Parametric dictionaries can increase the ability of sparse representations to meaningfully capture and interpret the underlying signal information, such as that encountered in biomedical problems. Given a mapping function from the atom parameter space to the actual atoms, we propose a sparse Bayesian framework for learning the atom parameters, owing to its ability to provide full posterior estimates, account for uncertainty, and generalize to unseen data. Inference is performed with Markov chain Monte Carlo, which uses block sampling to generate the variables of the Bayesian problem. Since the parameterization of dictionary atoms results in posteriors that cannot be computed analytically, we use a Metropolis-Hastings-within-Gibbs framework, in which variables with closed-form posteriors are generated with the Gibbs sampler, while the remaining ones are generated with Metropolis-Hastings from appropriate candidate-generating densities. We further show that the corresponding Markov chain is uniformly ergodic, ensuring its convergence to a stationary distribution independently of the initial state. Results on synthetic data and real biomedical signals indicate that our approach offers advantages in terms of signal reconstruction compared to previously proposed Steepest Descent and Equiangular Tight Frame methods. This paper demonstrates the ability of Bayesian learning to generate parametric dictionaries that can reliably represent the exemplar data, and provides a foundation toward inferring the entire variable set of the sparse approximation problem for signal denoising, adaptation, and other applications.
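The Metropolis-Hastings-within-Gibbs scheme described in the abstract can be illustrated with a minimal sketch. This is a toy two-variable model, not the paper's dictionary-learning posterior: the conditional of `y` given `x` is a closed-form Gaussian (sampled directly, a Gibbs step), while the conditional of `x` is non-standard and is updated with a Gaussian random-walk Metropolis-Hastings step. All names and the target density are illustrative assumptions.

```python
import math
import random

def log_cond_x(x, y):
    """Unnormalized log conditional p(x | y) ∝ exp(-x^4/4 - (y - x)^2/2).

    The quartic term makes this conditional non-conjugate, so it has no
    closed-form sampler and is handled with Metropolis-Hastings.
    """
    return -x**4 / 4.0 - (y - x)**2 / 2.0

def mh_within_gibbs(n_iter=5000, seed=0, step=0.8):
    """Alternate a Gibbs update for y with an MH update for x."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    samples = []
    for _ in range(n_iter):
        # Gibbs step: p(y | x) is exactly Normal(x, 1), so sample it directly.
        y = rng.gauss(x, 1.0)
        # MH step for x: propose from a Gaussian random walk, then
        # accept/reject with the usual log acceptance ratio.
        x_prop = x + rng.gauss(0.0, step)
        log_alpha = log_cond_x(x_prop, y) - log_cond_x(x, y)
        if math.log(rng.random()) < log_alpha:
            x = x_prop
        samples.append((x, y))
    return samples
```

Because the symmetric random-walk proposal cancels in the acceptance ratio, only the target conditional appears in `log_alpha`; by symmetry of the toy target, the post-burn-in sample mean of `x` should be near zero.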
Pages: 3077 - 3092
Page count: 16
References
83 in total
  • [1] Aharon M., Elad M., Bruckstein A., "K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation," IEEE Transactions on Signal Processing, 2006, 54(11): 4311-4322.
  • [2] Aharon M., Elad M., "Sparse and Redundant Modeling of Image Content Using an Image-Signature-Dictionary," SIAM Journal on Imaging Sciences, 2008, 1(3): 228-247.
  • [3] [Anonymous], 2005, TPAMI.
  • [4] [Anonymous], 1993, SIGN SYST COMP 1993.
  • [5] [Anonymous], 2008, P ADV NEURAL INFORM.
  • [6] [Anonymous], 1995, Federal Reserve Bank of Minneapolis, DOI 10.21034/SR.148.
  • [7] Ataee M., 2010, Proceedings of ICASSP, P1987.
  • [8] Barthelemy Q., Gouy-Pailler C., Isaac Y., Souloumiac A., Larue A., Mars J. I., "Multivariate temporal dictionary learning for EEG," Journal of Neuroscience Methods, 2013, 215(1): 19-28.
  • [9] Bedard M., 2008, International Journal of Statistical Sciences, V9, P33.
  • [10] Bishop C. M., "Model-based machine learning," Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 2013, 371(1984).
    [J]. PHILOSOPHICAL TRANSACTIONS OF THE ROYAL SOCIETY A-MATHEMATICAL PHYSICAL AND ENGINEERING SCIENCES, 2013, 371 (1984):