Scientific data lossless compression using fast neural network

Cited by: 0
Authors
Zhou, Jun-Lin [1 ]
Fu, Yan [1 ]
Affiliation
[1] Univ Elect Sci & Technol China, Sch Comp Sci & Engn, Chengdu 610054, Sichuan, Peoples R China
Source
ADVANCES IN NEURAL NETWORKS - ISNN 2006, PT 1 | 2006 / Vol. 3971
Keywords
DOI
None available
CLC number
TP18 [Theory of artificial intelligence];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Scientific computing generates enormous volumes of data from complex simulations, often several terabytes, and general-purpose compression methods do not perform well on such data. Neural networks have the potential to extend data compression algorithms beyond the character-level (n-gram) models currently in use, but they have usually been avoided because they are too slow to be practical. We present a lossless compression method that combines a fast neural network, based on the Maximum Entropy principle, with an arithmetic coder. The compressor is a bit-level predictive arithmetic encoder that uses a two-layer fast neural network to predict the probability distribution of the next bit. In the training phase, an improved adaptive variable learning rate is used for fast convergence. The proposed compressor achieves better compression than popular compressors (bzip, zzip, lzo, ucl and deflate) on the lared-p data set, and is also competitive in time and space for practical applications.
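The mechanism the abstract describes — a two-layer network predicting P(next bit = 1), driving a bit-level arithmetic coder, with the model updated online so encoder and decoder stay in lockstep — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the carry-free range coder follows the well-known fpaq0 scheme, and the network size, context features (last 8 bits), and fixed learning rate are assumptions; the paper's improved adaptive variable learning rate and Maximum Entropy formulation are omitted.

```python
import math, random

class TinyNet:
    """Two-layer predictor: last 8 bits (as +/-1) -> tanh hidden layer -> sigmoid P(bit=1).
    Layer sizes and the fixed learning rate are illustrative, not the paper's model."""
    def __init__(self, n_in=8, n_hid=4, lr=0.2, seed=1):
        rng = random.Random(seed)              # fixed seed: encoder and decoder must match
        self.w1 = [[rng.uniform(-0.1, 0.1) for _ in range(n_in)] for _ in range(n_hid)]
        self.b1 = [0.0] * n_hid
        self.w2 = [rng.uniform(-0.1, 0.1) for _ in range(n_hid)]
        self.b2 = 0.0
        self.lr = lr

    def predict(self, x):
        self.x = x
        self.h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
                  for row, b in zip(self.w1, self.b1)]
        z = sum(w * h for w, h in zip(self.w2, self.h)) + self.b2
        self.p = min(max(1.0 / (1.0 + math.exp(-z)), 1e-6), 1.0 - 1e-6)
        return self.p

    def update(self, bit):                     # one online gradient step on log-loss
        err = bit - self.p
        gh = [err * w * (1.0 - h * h) for w, h in zip(self.w2, self.h)]
        for j, h in enumerate(self.h):
            self.w2[j] += self.lr * err * h
        self.b2 += self.lr * err
        for j, g in enumerate(gh):
            for i, xi in enumerate(self.x):
                self.w1[j][i] += self.lr * g * xi
            self.b1[j] += self.lr * g

def _features(ctx):
    return [2 * b - 1 for b in ctx]            # map context bits {0,1} -> {-1,+1}

def compress(data):
    net, ctx = TinyNet(), [0] * 8
    x1, x2, out = 0, 0xFFFFFFFF, bytearray()
    for byte in data:
        for k in range(7, -1, -1):
            bit = (byte >> k) & 1
            p = net.predict(_features(ctx))
            xmid = x1 + int((x2 - x1) * p)     # split current interval at P(bit=1)
            if bit:
                x2 = xmid
            else:
                x1 = xmid + 1
            while (x1 ^ x2) & 0xFF000000 == 0: # leading byte settled: emit it
                out.append(x2 >> 24)
                x1 = (x1 << 8) & 0xFFFFFFFF
                x2 = ((x2 << 8) & 0xFFFFFFFF) | 0xFF
            net.update(bit)                    # adapt AFTER coding, as the decoder will
            ctx = ctx[1:] + [bit]
    for _ in range(4):                         # flush: any value in [x1, x2] works
        out.append(x1 >> 24)
        x1 = (x1 << 8) & 0xFFFFFFFF
    return bytes(out)

def decompress(blob, n_bytes):
    net, ctx = TinyNet(), [0] * 8              # identical model state to the encoder
    x1, x2, pos, x = 0, 0xFFFFFFFF, 0, 0
    for _ in range(4):
        x = (x << 8) | (blob[pos] if pos < len(blob) else 0)
        pos += 1
    out = bytearray()
    for _ in range(n_bytes):
        byte = 0
        for _ in range(8):
            p = net.predict(_features(ctx))
            xmid = x1 + int((x2 - x1) * p)     # same split as the encoder computed
            if x <= xmid:
                bit, x2 = 1, xmid
            else:
                bit, x1 = 0, xmid + 1
            while (x1 ^ x2) & 0xFF000000 == 0:
                x1 = (x1 << 8) & 0xFFFFFFFF
                x2 = ((x2 << 8) & 0xFFFFFFFF) | 0xFF
                x = ((x << 8) & 0xFFFFFFFF) | (blob[pos] if pos < len(blob) else 0)
                pos += 1
            net.update(bit)
            ctx = ctx[1:] + [bit]
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)
```

Because the decoder rebuilds the same network from the same seed and sees the same bit stream, its predictions match the encoder's exactly; this symmetry is what lets a lossy-looking statistical model sit inside a lossless coder.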
Pages: 1293-1298
Number of pages: 6