Where are linear feature extraction methods applicable?

Cited by: 109
Authors
Martínez, AM [1]
Zhu, ML [1]
Affiliations
[1] Ohio State Univ, Dept Elect & Comp Engn, Columbus, OH 43210 USA
Keywords
feature extraction; generalized eigenvalue decomposition; performance evaluation; classifiers; pattern recognition;
DOI
10.1109/TPAMI.2005.250
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
A fundamental problem in computer vision and pattern recognition is to determine where and, most importantly, why a given technique is applicable. This is necessary not only because it helps us decide which techniques to apply in a given setting, but also because knowing why current algorithms cannot be applied facilitates the design of new algorithms that are robust to such problems. In this paper, we report on a theoretical study that demonstrates where and why generalized eigen-based linear equations do not work. In particular, we show that when the smallest angle between the ith eigenvector given by the metric to be maximized and the first i eigenvectors given by the metric to be minimized is close to zero, the results are not guaranteed to be correct. Several properties of such models are also presented. For illustration, we concentrate on the classical applications of classification and feature extraction. We also show how these findings can be used to design more robust algorithms. We conclude with a discussion of the broader impacts of our results.
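To make the angle condition described in the abstract concrete, here is a minimal numerical sketch (not taken from the paper) assuming an LDA-style setting in which the between-class scatter Sb plays the role of the metric to be maximized and the within-class scatter Sw the metric to be minimized. The function smallest_angle_deg, the 5-degree warning threshold, and the toy data are all illustrative assumptions; the check simply measures the angle between the ith eigenvector of Sb and the span of the first i eigenvectors of Sw, following the abstract's description rather than the paper's exact formulation.

```python
import numpy as np

def smallest_angle_deg(A, B, i):
    """Angle (degrees) between the i-th eigenvector of A (metric to be
    maximized) and the span of the first i eigenvectors of B (metric to
    be minimized).  i is 1-indexed, as in the abstract."""
    # Eigenvectors of each symmetric metric, sorted by decreasing eigenvalue.
    _, VA = np.linalg.eigh(A)
    _, VB = np.linalg.eigh(B)
    VA, VB = VA[:, ::-1], VB[:, ::-1]

    v = VA[:, i - 1]                 # i-th eigenvector of the maximized metric
    U, _ = np.linalg.qr(VB[:, :i])   # orthonormal basis for span of first i eigenvectors of B
    # cos of the angle between v and the subspace spanned by U's columns.
    cos_theta = np.clip(np.linalg.norm(U.T @ v) / np.linalg.norm(v), 0.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

# Toy usage: between-class (Sb) and within-class (Sw) scatter of random 5-D data.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))
y = np.repeat([0, 1, 2], 20)
mu = X.mean(axis=0)
Sb = sum(np.sum(y == c) * np.outer(X[y == c].mean(0) - mu, X[y == c].mean(0) - mu)
         for c in np.unique(y))
Sw = sum((X[y == c] - X[y == c].mean(0)).T @ (X[y == c] - X[y == c].mean(0))
         for c in np.unique(y))

for i in (1, 2, 3):
    angle = smallest_angle_deg(Sb, Sw, i)
    warn = "  <- near zero: generalized eigen-solution may be unreliable" if angle < 5 else ""
    print(f"i={i}: angle = {angle:.1f} deg{warn}")
```

In practice one would compute these angles for whichever metrics the generalized eigenvalue problem A v = lambda B v actually uses; a near-zero angle flags directions in which the maximization and minimization objectives conflict, which is the failure mode the paper analyzes.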
Pages: 1934-1944
Page count: 11