Pruning and Quantizing Neural Belief Propagation Decoders

Cited by: 36
Authors
Buchberger, Andreas [1 ]
Häger, Christian [1 ]
Pfister, Henry D. [2 ]
Schmalen, Laurent [3 ]
Graell i Amat, Alexandre [1 ]
Affiliations
[1] Chalmers Univ Technol, Dept Elect Engn, SE-41296 Gothenburg, Sweden
[2] Duke Univ, Dept Elect & Comp Engn, Durham, NC 27708 USA
[3] Karlsruhe Inst Technol KIT, Commun Engn Lab, D-76131 Karlsruhe, Germany
Funding
Swedish Research Council;
Keywords
Iterative decoding; Maximum likelihood decoding; Belief propagation; Quantization (signal); Task analysis; Optimization; Neural networks; deep learning; min-sum decoding; neural decoders; pruning; quantization; CODES;
DOI
10.1109/JSAC.2020.3041392
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic & Communication Technology];
Discipline Code
0808; 0809;
Abstract
We consider near maximum-likelihood (ML) decoding of short linear block codes. In particular, we propose a novel decoding approach based on neural belief propagation (NBP) decoding, recently introduced by Nachmani et al., in which we allow a different parity-check matrix in each iteration of the algorithm. The key idea is to consider NBP decoding over an overcomplete parity-check matrix and use the weights of NBP as a measure of the importance of the check nodes (CNs) to decoding. The unimportant CNs are then pruned. In contrast to NBP, which performs decoding on a given fixed parity-check matrix, the proposed pruning-based neural belief propagation (PB-NBP) typically results in a different parity-check matrix in each iteration. For a given complexity in terms of CN evaluations, we show that PB-NBP yields significant performance improvements with respect to NBP. We apply the proposed decoder to the decoding of a Reed-Muller code, a short low-density parity-check (LDPC) code, and a polar code. PB-NBP outperforms NBP decoding over an overcomplete parity-check matrix by 0.27-0.31 dB while reducing the number of required CN evaluations by up to 97%. For the LDPC code, PB-NBP outperforms conventional belief propagation with the same number of CN evaluations by 0.52 dB. We further extend the pruning concept to offset min-sum decoding and introduce a pruning-based neural offset min-sum (PB-NOMS) decoder, for which we jointly optimize the offsets and the quantization of the messages and offsets. We demonstrate performance within 0.5 dB of ML decoding with 5-bit quantization for the Reed-Muller code.
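As a rough illustration of the two ingredients described in the abstract, the offset min-sum check-node update and pruning of check nodes by the magnitude of their learned weights, here is a minimal Python sketch. The function names and the concrete importance criterion are illustrative assumptions, not the authors' implementation:

```python
def offset_min_sum_cn_update(incoming, offset):
    """Offset min-sum check-node (CN) update: the outgoing message on each
    edge combines the signs of all *other* incoming messages with the
    smallest magnitude among them, reduced by the offset and clipped at 0."""
    out = []
    for i in range(len(incoming)):
        others = incoming[:i] + incoming[i + 1:]
        sign = 1.0
        for msg in others:
            sign *= 1.0 if msg >= 0 else -1.0
        mag = max(min(abs(msg) for msg in others) - offset, 0.0)
        out.append(sign * mag)
    return out

def prune_check_nodes(cn_weights, keep_ratio):
    """Hypothetical pruning criterion mirroring the paper's idea: rank CNs
    by the magnitude of their learned weights (treated as an importance
    measure) and keep only the top fraction, returning their indices."""
    k = max(1, round(keep_ratio * len(cn_weights)))
    order = sorted(range(len(cn_weights)), key=lambda i: -abs(cn_weights[i]))
    return sorted(order[:k])
```

In the paper the offsets and weights are learned jointly and quantized; this sketch only shows the message update and the pruning-by-weight idea on plain floats.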
Pages: 1957-1966
Number of pages: 10
Related References
26 records in total
[1] Abadi M., 2016, Proceedings of OSDI'16: 12th USENIX Symposium on Operating Systems Design and Implementation, p. 265
[2] [Anonymous], 2015, SHORT BLOCK LENGTH L
[3] [Anonymous], 2013, CoRR abs/1308.3432
[4] [Anonymous], 2010, ICML
[5] Bardet M., 2016, IEEE Int. Symp. Inform. Theory, p. 230, DOI 10.1109/ISIT.2016.7541295
[6] Bossert M., Hergert F., "Hard-decision and soft-decision decoding beyond the half minimum distance - an algorithm for linear codes," IEEE Transactions on Information Theory, 1986, 32(5):709-714
[7] Buchberger A., 2020, source code
[8] Chen J.H., Fossorier M.P.C., "Density evolution for two improved BP-based decoding algorithms of LDPC codes," IEEE Communications Letters, 2002, 6(5):208-210
[9] Fossorier M.P.C., Mihaljevic M., Imai H., "Reduced complexity iterative decoding of low-density parity check codes based on belief propagation," IEEE Transactions on Communications, 1999, 47(5):673-680
[10] Frankle J., 2019, Proc. Int. Conf. Learning Representations, p. 1