RNN Architecture Learning with Sparse Regularization

Citations: 0
Authors
Dodge, Jesse [1 ]
Schwartz, Roy [2 ,3 ]
Peng, Hao [3 ]
Smith, Noah A. [2 ,3 ]
Affiliations
[1] Carnegie Mellon Univ, Language Technol Inst, Pittsburgh, PA 15213 USA
[2] Allen Inst Artificial Intelligence, Seattle, WA USA
[3] Univ Washington, Paul G Allen Sch Comp Sci & Engn, Seattle, WA 98195 USA
Source
2019 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING AND THE 9TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (EMNLP-IJCNLP 2019): PROCEEDINGS OF THE CONFERENCE | 2019
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Neural models for NLP typically use large numbers of parameters to reach state-of-the-art performance, which can lead to excessive memory usage and increased runtime. We present a structure learning method for learning sparse, parameter-efficient NLP models. Our method applies group lasso to rational RNNs (Peng et al., 2018), a family of models that is closely connected to weighted finite-state automata (WFSAs). We take advantage of rational RNNs' natural grouping of the weights, so the group lasso penalty directly removes WFSA states, substantially reducing the number of parameters in the model. Our experiments on a number of sentiment analysis datasets, using both GloVe and BERT embeddings, show that our approach learns neural structures which have fewer parameters without sacrificing performance relative to parameter-rich baselines. Our method also highlights the interpretable properties of rational RNNs. We show that sparsifying such models makes them easier to visualize, and we present models that rely exclusively on as few as three WFSAs after pruning more than 90% of the weights. We publicly release our code.(1)
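The group lasso mechanism the abstract describes can be illustrated with a minimal sketch. This is a hypothetical illustration, not the authors' released code: it assumes each weight group collects the parameters tied to one WFSA state, computes the standard group lasso penalty (a scaled sum of per-group L2 norms), and applies the group soft-thresholding (proximal) step that zeroes entire groups, which is what removes whole states.

```python
import math

def group_lasso_penalty(weight_groups, lam):
    """lam times the sum of per-group L2 norms.

    Hypothetical grouping: in a rational RNN, one group would hold all
    parameters tied to a single WFSA state, so driving a group's norm
    to zero removes that state entirely.
    """
    return lam * sum(math.sqrt(sum(w * w for w in g)) for g in weight_groups)

def group_soft_threshold(group, threshold):
    """Proximal step for the group lasso penalty.

    Shrinks the group's L2 norm by `threshold`, and zeroes the whole
    group when its norm falls below the threshold (the state is pruned).
    """
    norm = math.sqrt(sum(w * w for w in group))
    if norm <= threshold:
        return [0.0] * len(group)  # entire group (WFSA state) pruned
    scale = 1.0 - threshold / norm
    return [w * scale for w in group]
```

For example, a group with weights [3, 4] has norm 5: a threshold above 5 prunes it outright, while a smaller threshold only shrinks it, leaving the group (and its WFSA state) in the model.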
Pages: 1179-1184
Page count: 6
References
34 items
[1] Anonymous, 2013, Foundations and Trends in Optimization
[2] Anonymous, 2006, Journal of the Royal Statistical Society, Series B
[3] Anonymous, 2018, Proceedings of CVPR
[4] Anonymous, 2007, Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics
[5] Bahdanau D., 2016, arXiv:1409.0473
[6] Baum L.E., Petrie T. Statistical inference for probabilistic functions of finite state Markov chains. Annals of Mathematical Statistics, 1966, 37(6):1554-&.
[7] Bradbury J., 2017, Proceedings of ICLR
[8] Cho K., 2014, arXiv:1406.1078
[9] Daelemans W. Grammatical inference: learning automata and grammars. Machine Translation, 2010, 24(3-4):291-293.
[10] Devlin J., 2018, arXiv