Weighted Reward-Punishment Editing

Cited by: 3
Authors
Nanni, Loris [1 ]
Lumini, Alessandra [2 ]
Brahnam, Sheryl [3 ]
Affiliations
[1] Univ Padua, DEI, Viale Gradenigo 6, Padua, Italy
[2] Univ Bologna, DISI, Via Sacchi 3, I-47521 Cesena, Italy
[3] Missouri State Univ, Comp Informat Syst, 901 S Natl, Springfield, MO 65804 USA
Keywords
Pattern editing; Nearest Neighbor classifier; Machine learning; Ensemble; NEAREST; REDUCTION; ALGORITHMS; PATTERNS; RULE
DOI
10.1016/j.patrec.2016.03.011
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The Nearest Neighbor classifier is a popular nonparametric classification method that has been successfully applied to many pattern recognition problems. Its usefulness has been limited, however, by its computational complexity and its sensitivity to outliers in the training set. Computational complexity is becoming less of an issue thanks to the availability of inexpensive memory and high processing speeds. To overcome the second limitation, sensitivity to outliers in the training set, researchers have developed editing and condensing techniques aimed at selecting a proper set of prototypes from the training set. In this work, we propose a new editing technique based on the idea of rewarding those patterns that contribute to a correct classification while punishing those patterns that contribute to an incorrect classification. This criterion is coupled with an approach for selecting the edited patterns based on the minimization of a criterion index related to the distances among the training patterns, and on the calculation of edge and border patterns for determining the number of edited patterns. Extensive experiments were conducted across several classification problems to evaluate both the efficacy of the proposed technique with respect to other editing approaches and the advantages of using our editing technique either in combination with condensing techniques or as a preprocessing step. Moreover, the proposed editing approach is shown to be particularly effective when combined with a support vector machine (SVM) classifier. The MATLAB code used in this paper will be available at https://www.dei.unipd.it/node/2357. (C) 2016 Elsevier B.V. All rights reserved.
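The abstract's core idea (reward patterns that vote for a neighbor's correct class, punish those that vote for a wrong one, then discard the lowest-scoring patterns) can be illustrated with a minimal sketch. This is not the authors' weighted algorithm or their criterion-index selection rule; it is a simplified reward-punishment editing pass under assumed details: a fixed `k`, unit rewards/punishments, and a hypothetical `keep_ratio` parameter in place of the paper's edge/border-pattern count.

```python
import numpy as np

def reward_punishment_edit(X, y, k=3, keep_ratio=0.85):
    """Simplified reward-punishment editing sketch.

    Each training pattern earns +1 every time it appears among the k
    nearest neighbors of a same-class pattern (it would vote correctly)
    and -1 when it appears among the k nearest neighbors of a
    different-class pattern (it would vote incorrectly). The
    lowest-scoring patterns are dropped as likely outliers.
    """
    n = len(X)
    scores = np.zeros(n)
    # Brute-force pairwise Euclidean distances (fine for a sketch).
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # a pattern never votes for itself
    for i in range(n):
        for j in np.argsort(d[i])[:k]:
            scores[j] += 1 if y[j] == y[i] else -1  # reward / punish
    keep = np.argsort(-scores)[: int(np.ceil(keep_ratio * n))]
    return X[keep], y[keep]
```

On a toy two-cluster dataset with a single mislabeled point planted inside the wrong cluster, that point accumulates only punishments and is the first pattern removed, which is the behavior editing techniques aim for.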
Pages: 48 - 54
Number of pages: 7
Related Papers
6 records
  • [1] Data pre-processing through reward-punishment editing
    Franco, Annalisa
    Maltoni, Davide
    Nanni, Loris
    PATTERN ANALYSIS AND APPLICATIONS, 2010, 13 (04) : 367 - 381
  • [2] On algorithmic collusion and reward-punishment schemes
    Epivent, Andrea
    Lambin, Xavier
    ECONOMICS LETTERS, 2024, 237
  • [3] Reward and punishment act as distinct factors in guiding behavior
    Kubanek, Jan
    Snyder, Lawrence H.
    Abrams, Richard A.
    COGNITION, 2015, 139 : 154 - 167
  • [4] Machine learning reveals differential effects of depression and anxiety on reward and punishment processing
    Grabowska, Anna
    Zabielski, Jakub
    Senderecka, Magdalena
    SCIENTIFIC REPORTS, 2024, 14 (01)
  • [5] A Reward-and-Punishment-Based Approach for Concept Detection Using Adaptive Ontology Rules
    Bhatt, Chidansh A.
    Atrey, Pradeep K.
    Kankanhalli, Mohan S.
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2013, 9 (02)
  • [6] You want me, but I no longer want you. The punishment-reward mechanism and institutional quality in Italian regions
    Ippolito, Marzia
    Ercolano, Salvatore
    Cicatiello, Lorenzo
    CONTEMPORARY ITALIAN POLITICS, 2025, 17 (01) : 23 - 38