Towards Robust and Discriminative Sequential Data Learning: When and How to Perform Adversarial Training?

Cited by: 8
Authors
Jia, Xiaowei [1 ,3 ]
Li, Sheng [2 ]
Zhao, Handong [3 ]
Kim, Sungchul [3 ]
Kumar, Vipin [1 ]
Affiliations
[1] Univ Minnesota, Dept Comp Sci & Engn, Minneapolis, MN 55455 USA
[2] Univ Georgia, Dept Comp Sci, Athens, GA 30602 USA
[3] Adobe Res, San Jose, CA 95110 USA
Source
KDD'19: PROCEEDINGS OF THE 25TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING | 2019
Funding
US National Science Foundation (NSF);
Keywords
Adversarial training; sequential data; attention mechanism;
DOI
10.1145/3292500.3330957
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
The last decade has witnessed a surge of interest in applying deep learning models to discover sequential patterns in large volumes of data. Recent work shows that deep learning models can be further improved by enforcing a smooth output distribution around each data point. This can be achieved by augmenting the training data with slight perturbations designed to alter model outputs. Such adversarial training approaches have shown much success in improving the generalization performance of deep learning models on static data, e.g., transaction data or image data captured in a single snapshot. When applied to sequential data, however, standard adversarial training approaches cannot fully capture the discriminative structure of a sequence. This is because real-world sequential data are often collected over a long period of time and may include much information irrelevant to the classification task. To this end, we develop a novel adversarial training approach for sequential data classification by investigating when and how to perturb a sequence for effective data augmentation. Finally, we demonstrate the superiority of the proposed method over baselines on a diverse set of real-world sequential datasets.
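Illustrative sketch (not from the paper): the perturbation-based augmentation described in the abstract can be approximated by standard FGSM-style adversarial training on sequence embeddings, in the spirit of Goodfellow et al. (ICLR 2015) and Miyato et al. (ICLR 2017) for text. The PyTorch snippet below is a minimal sketch under those assumptions; SeqClassifier, adversarial_step, and the perturbation radius eps are hypothetical names, and this is not the sequence-specific "when and how" strategy the paper actually proposes.

# Minimal FGSM-style adversarial training sketch for a sequence classifier.
# Generic illustration of perturbation-based augmentation, NOT the paper's
# method; SeqClassifier, adversarial_step, and eps are hypothetical.
import torch
import torch.nn as nn

class SeqClassifier(nn.Module):
    def __init__(self, vocab_size=1000, emb_dim=64, hidden=128, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def classify(self, emb):
        # emb: (batch, seq_len, emb_dim) -> logits from the final hidden state
        _, (h, _) = self.rnn(emb)
        return self.fc(h[-1])

def adversarial_step(model, x, y, optimizer, eps=0.1):
    loss_fn = nn.CrossEntropyLoss()
    # 1) Gradient of the loss w.r.t. the input embeddings gives the
    #    direction that most increases the loss (the adversarial direction).
    emb = model.embed(x).detach().requires_grad_(True)
    grad = torch.autograd.grad(loss_fn(model.classify(emb), y), emb)[0]
    delta = eps * grad.sign()
    # 2) Train on the clean batch plus the perturbed copy.
    optimizer.zero_grad()
    emb = model.embed(x)
    loss = loss_fn(model.classify(emb), y) + loss_fn(model.classify(emb + delta), y)
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage on random token sequences:
model = SeqClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randint(0, 1000, (8, 20))  # batch of 8 sequences, length 20
y = torch.randint(0, 2, (8,))
adversarial_step(model, x, y, opt)

Note that this sketch perturbs every time step of the sequence uniformly; the paper's contribution is precisely to decide when (which positions) and how to perturb instead.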
Pages: 1665-1673
Page count: 9