Combining Active and Semisupervised Learning of Remote Sensing Data Within a Renyi Entropy Regularization Framework

Cited by: 17
Authors
Polewski, Przemyslaw [1 ]
Yao, Wei [1 ]
Heurich, Marco [2 ]
Krzystek, Peter [1 ]
Stilla, Uwe [3 ]
Affiliations
[1] Munich Univ Appl Sci, Dept Geoinformat, D-80333 Munich, Germany
[2] Bavarian Forest Natl Pk, Dept Res & Documentat, D-94481 Grafenau, Germany
[3] Tech Univ Munich, Photogrammetry & Remote Sensing, D-80333 Munich, Germany
Keywords
Active learning; dead tree detection; Renyi entropy; semisupervised learning; image classification; algorithm; information
DOI
10.1109/JSTARS.2015.2510867
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Subject Classification Codes
0808 ; 0809 ;
Abstract
Active and semisupervised learning are related techniques that aim to reduce the effort of creating training sets for classification and regression tasks. In this work, we present a framework for combining these two techniques on the basis of Renyi entropy regularization, enabling a synergy effect. We build upon an existing semisupervised learning model that balances the likelihood of labeled examples against the entropy of putative object probabilities within the unlabeled pool. To enable efficient optimization of the model, we generalize the deterministic annealing expectation-maximization (DAEM) algorithm, originally designed for Shannon entropy, to accommodate the use of Renyi entropies. The Renyi-regularized model is then applied to expected error reduction (EER), an active learning approach based on minimizing the entropy of unlabeled object probabilities. We investigate object preselection with a greedy approximation of the object feature matrix as a means of reducing computational complexity. To assess the performance of the proposed framework, we apply it to two real-world remote sensing problems with significantly different input data characteristics: detecting dead trees in color infrared aerial images (2-D) and detecting dead tree stems in ALS point clouds (3-D). Our results show that, for small training sets, the semisupervised Renyi-regularized classifier improves the classification rate by up to 11 and 10 percentage points over the unregularized baseline for ALS and image data, respectively. This gain carries over to active learning, where the regularized EER achieves 90% of the final classification performance using only 50% and 70% of the number of queries required by standard EER.
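As a minimal illustration of the quantities named in the abstract (not the authors' implementation), the sketch below computes the Renyi entropy of a discrete distribution, which reduces to Shannon entropy in the limit alpha → 1, and a regularized objective of the general form described: labeled-data log-likelihood penalized by the entropy of predictions on the unlabeled pool. The function names and the hyperparameters `lam` and `alpha` are hypothetical.

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Renyi entropy H_alpha(p) = log(sum_i p_i^alpha) / (1 - alpha).

    Reduces to Shannon entropy -sum_i p_i * log(p_i) as alpha -> 1.
    """
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # zero-probability outcomes contribute nothing
    if np.isclose(alpha, 1.0):
        return -np.sum(p * np.log(p))  # Shannon limit
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def regularized_objective(log_lik_labeled, unlabeled_probs, lam, alpha):
    """Hypothetical semisupervised objective: balance the labeled-data
    log-likelihood against the total Renyi entropy of the classifier's
    class-probability vectors on the unlabeled pool."""
    penalty = sum(renyi_entropy(p, alpha) for p in unlabeled_probs)
    return log_lik_labeled - lam * penalty
```

For any alpha, a uniform distribution over n outcomes has entropy log(n), so sharper (more confident) predictions on the unlabeled pool lower the penalty term.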
Pages: 2910-2922
Page count: 13