Recurrent Reconstructive Network for Sequential Anomaly Detection

Cited by: 23
Authors
Yoo, Yong-Ho [1 ]
Kim, Ue-Hwan [1 ]
Kim, Jong-Hwan [1 ]
Affiliations
[1] Korea Adv Inst Sci & Technol, Sch Elect Engn, Daejeon 34141, South Korea
Keywords
Anomaly detection; Data models; Training; Decoding; Predictive models; Feature extraction; Adaptation models; attention mechanism; reconstruction model; recurrent autoencoder (RAE)
DOI
10.1109/TCYB.2019.2933548
CLC number
TP [automation technology, computer technology]
Discipline code
0812
Abstract
Anomaly detection identifies samples that deviate significantly from normal patterns. Usually, the number of anomaly samples is extremely small compared to that of normal samples. To handle such an imbalanced sample distribution, one-class classification, which models the features of normal data using only normal data, has been widely used for anomaly identification. Recently, the recurrent autoencoder (RAE) has shown outstanding performance in sequential anomaly detection compared to other conventional methods. However, RAE suffers from the long-term dependency problem and is optimized only for fixed-length inputs. To overcome these limitations, we propose the recurrent reconstructive network (RRN), a novel RAE with three functionalities for anomaly detection in streaming data: 1) a self-attention mechanism; 2) hidden state forcing; and 3) skip transition. The self-attention mechanism and the hidden state forcing between the encoder and decoder effectively manage input sequences of varying length. The skip transition with an attention gate improves the reconstruction performance. We conduct comprehensive experiments on four datasets and verify the superior performance of the proposed RRN in sequential anomaly detection tasks.
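The abstract's core idea, reconstruction-based anomaly scoring with a recurrent autoencoder, can be sketched minimally as follows. This is a plain Elman-style RAE illustrating only the general scheme the paper builds on, not the authors' RRN (no self-attention, hidden state forcing, or skip transition); all dimensions, weight initializations, and function names here are illustrative assumptions, and the network is untrained.

```python
import numpy as np

rng = np.random.default_rng(0)
D, H = 3, 8  # input and hidden dimensions (assumed for illustration)

# Randomly initialized encoder/decoder weights (a trained model would learn these).
W_enc = rng.normal(0, 0.1, (H, D + H))
W_dec = rng.normal(0, 0.1, (H, D + H))
W_out = rng.normal(0, 0.1, (D, H))

def encode(x):
    """Run the encoder over a (T, D) sequence; return the final hidden state."""
    h = np.zeros(H)
    for t in range(len(x)):
        h = np.tanh(W_enc @ np.concatenate([x[t], h]))
    return h

def decode(h, T):
    """Reconstruct T steps from the encoder's summary state."""
    x_hat, prev = [], np.zeros(D)
    for _ in range(T):
        h = np.tanh(W_dec @ np.concatenate([prev, h]))
        prev = W_out @ h
        x_hat.append(prev)
    return np.stack(x_hat)

def anomaly_score(x):
    """Mean squared reconstruction error: high error suggests an anomaly."""
    x_hat = decode(encode(x), len(x))
    return float(np.mean((x - x_hat) ** 2))

# The per-step loops handle sequences of varying length, the setting the
# paper's self-attention and hidden state forcing are designed to improve.
score_short = anomaly_score(rng.normal(size=(5, D)))
score_long = anomaly_score(rng.normal(size=(50, D)))
```

In a one-class setting, the autoencoder is trained on normal sequences only, so anomalous sequences reconstruct poorly and receive high scores; a threshold on the score then flags anomalies.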
Pages: 1704-1715
Page count: 12