Latent Log-Linear Models for Handwritten Digit Classification

Cited by: 13
Authors
Deselaers, Thomas [1,5]
Gass, Tobias [2,5]
Heigold, Georg [3,5]
Ney, Hermann [4,5]
Affiliations
[1] Google Switzerland, CH-8002 Zurich, Switzerland
[2] ETH, Comp Vis Lab, CH-8092 Zurich, Switzerland
[3] Google Inc, Mountain View, CA 94043 USA
[4] Rhein Westfal TH Aachen, Lehrstuhl Informat 6, D-52056 Aachen, Germany
[5] Univ Aachen, RWTH, Dept Comp Sci, Human Language Technol & Pattern Recognit Grp, Aachen, Germany
Keywords
Log-linear models; latent variables; conditional random fields; OCR; image classification; recognition
DOI
10.1109/TPAMI.2011.218
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
We present latent log-linear models, an extension of log-linear models incorporating latent variables, and we propose two applications thereof: log-linear mixture models and image deformation-aware log-linear models. The resulting models are fully discriminative, can be trained efficiently, and their model complexity can be controlled. Log-linear mixture models offer additional flexibility within the log-linear modeling framework. Unlike previous approaches, the image deformation-aware model directly considers image deformations and allows for discriminative training of the deformation parameters. Both are trained using alternating optimization. For certain variants, convergence to a stationary point is guaranteed; in practice, even variants without this guarantee converge and find models that perform well. We tune the methods on the USPS data set and evaluate on the MNIST data set, demonstrating the generalization capabilities of our proposed models. Despite using significantly fewer parameters, our models obtain results competitive with models proposed in the literature.
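As a rough illustration of the log-linear mixture model described in the abstract (a sketch, not the authors' implementation), the class posterior marginalizes the latent mixture component m out of a joint log-linear score over (class, component) pairs. Parameter names (alpha, lam) and shapes below are hypothetical:

```python
import numpy as np

def llmm_posterior(x, alpha, lam):
    """Class posterior of a log-linear mixture model (illustrative sketch).

    p(c | x) = sum_m exp(alpha[c, m] + lam[c, m] . x) / Z(x),
    where Z(x) sums the numerator over all classes c and components m.

    x     : (D,)      feature vector
    alpha : (C, M)    per-component biases
    lam   : (C, M, D) per-component weight vectors
    """
    scores = alpha + lam @ x       # (C, M) joint scores for (class, component)
    scores -= scores.max()         # subtract max for numerical stability
    joint = np.exp(scores)         # unnormalized p(c, m | x)
    return joint.sum(axis=1) / joint.sum()  # marginalize over components m
```

In the alternating optimization the abstract mentions, one would typically alternate between estimating component responsibilities p(m | c, x) under fixed parameters and updating (alpha, lam) discriminatively under fixed responsibilities.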
Pages: 1105-1117
Page count: 13