We present a technique for dimension reduction. The technique uses a gradient descent approach that attempts to sequentially find orthogonal vectors such that, when the data is projected onto each vector, the classification error is minimised. We make no assumptions about the structure of the data, and the technique is independent of the classifier model used. Our approach has advantages over other dimensionality reduction techniques, such as Linear Discriminant Analysis (LDA), which assumes unimodal Gaussian distributions, and Principal Component Analysis (PCA), which is ignorant of class labels. In this paper we compare our technique with PCA and LDA on various 2-dimensional distributions and on the two-class cancer diagnosis task from the Wisconsin Diagnostic Breast Cancer Database, which contains 30 features.
Authors and affiliations:
Hoffmann, Heiko: Univ So Calif, Los Angeles, CA 90089 USA; Univ Edinburgh, IPAB, Sch Informat, Edinburgh EH9 3JZ, Midlothian, Scotland
Schaal, Stefan: Univ So Calif, Los Angeles, CA 90089 USA
Vijayakumar, Sethu: Univ Edinburgh, IPAB, Sch Informat, Edinburgh EH9 3JZ, Midlothian, Scotland; Univ So Calif, Los Angeles, CA 90089 USA
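
To make the idea in the abstract concrete, the following is a minimal sketch of sequential, orthogonality-constrained gradient descent. The abstract does not specify the loss, optimiser, or classifier, so this sketch assumes a logistic-loss surrogate for the 0/1 classification error on each 1-D projection with a zero decision threshold on centred data; the function name find_projection_vectors and all hyperparameters are illustrative, not the paper's.

    import numpy as np

    def find_projection_vectors(X, y, n_components, n_steps=500, lr=0.5, seed=0):
        # Find orthogonal directions one at a time by gradient descent, each
        # minimising a smooth surrogate (logistic loss, an assumption of this
        # sketch) for the error of classifying the data from its 1-D projection.
        rng = np.random.default_rng(seed)
        n, d = X.shape
        X = X - X.mean(axis=0)            # centre so a zero threshold is sensible
        t = np.where(y > 0, 1.0, -1.0)    # labels recoded to {-1, +1}
        basis = []                        # orthonormal directions found so far

        def deflate(v):
            # Project v onto the orthogonal complement of the found directions.
            for u in basis:
                v = v - (v @ u) * u
            return v

        for _ in range(n_components):
            w = deflate(rng.normal(size=d))
            w /= np.linalg.norm(w)
            for _ in range(n_steps):
                z = X @ w                                     # 1-D projection
                s = 1.0 / (1.0 + np.exp(np.clip(t * z, -50, 50)))  # sigmoid(-t*z)
                grad = -(t * s) @ X / n                       # logistic-loss gradient
                w = deflate(w - lr * grad)                    # step, staying orthogonal
                w /= np.linalg.norm(w)                        # keep unit length
            basis.append(w)

        return np.stack(basis)                                # (n_components, d) rows

    # Toy usage: two 5-D Gaussian blobs, reduced to two discriminative directions.
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0.0, 1.0, (100, 5)), rng.normal(1.5, 1.0, (100, 5))])
    y = np.array([0] * 100 + [1] * 100)
    W = find_projection_vectors(X, y, n_components=2)
    X_low = X @ W.T                                           # reduced-dimension data

The deflation step (subtracting components along already-found directions after every update) is one simple way to keep each new vector orthogonal to its predecessors, matching the sequential construction the abstract describes; the surrogate loss is interchangeable, consistent with the claim that the technique is independent of the classifier model used.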