ConCave-Convex procedure for support vector machines with Huber loss for text classification

Cited by: 0
Authors
Borah, Parashjyoti [1 ]
Gupta, Deepak [2 ]
Hazarika, Barenya Bikash [3 ]
Affiliations
[1] Indian Inst Informat Technol Guwahati Bongora, Dept Comp Sci & Engn, Gauhati 781015, Assam, India
[2] Motilal Nehru Natl Inst Technol Allahabad, Dept Comp Sci & Engn, Prayagraj 211004, Uttar Pradesh, India
[3] Assam Town Univ, Fac Comp Technol, Sankar Madhab Path,Gandhinagar, Gauhati 781026, Assam, India
Keywords
Support vector machine; Hinge loss; ConCave-Convex procedure; Ramp loss function; Huber loss function; Regression; Classifiers; Algorithm
DOI
10.1016/j.compeleceng.2024.109925
Chinese Library Classification (CLC)
TP3 [Computing technology; computer technology]
Discipline classification code
0812
Abstract
The classical support vector machine (SVM) adopts the linear Hinge loss, whereas the least squares SVM (LS-SVM) employs the quadratically growing least squares loss function. The robust Ramp loss function employed in the Ramp loss SVM (RSVM) truncates the Hinge loss so that it becomes flat beyond a specified point, thereby increasing robustness to outliers. The recently proposed SVM with pinball loss (pin-SVM) utilizes the pinball loss function, which maximizes the margin between the class hyperplanes based on quantile distance. The Huber loss function is a generalization of the linear Hinge loss and the quadratic loss, and it mitigates the sensitivity of the least squares loss to noise and outliers. In this work, we employ the robust Huber loss function for SVM classification (HSVM) to improve generalization performance. The cost function of the proposed approach consists of one convex and one non-convex part, and may therefore yield a local optimum instead of a global one. We apply the ConCave-Convex Procedure (CCCP) to resolve this issue. Additionally, the proximal cost of each sample is scaled according to its class size to reduce the effect of the class imbalance problem; thus, the proposed approach incorporates class imbalance learning as well. Extensive experimental analysis establishes the efficacy of the proposed method. Furthermore, a sequential minimal optimization (SMO) procedure for the high-dimensional HSVM is proposed, and its performance is tested on two text classification datasets.
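Since this record only summarizes the method, the sketch below is a minimal, assumption-laden illustration of the two ingredients the abstract describes: a Huber-style margin loss that is quadratic near the margin and linear in the tail, and a CCCP loop that repeatedly linearizes the concave part of a convex-plus-concave objective and minimizes the remaining convex surrogate. The ramp-style truncation huber_hinge(z) - huber_hinge(z - s), the parameters delta and s, the inverse-class-frequency weighting, the gradient-descent inner solver, and the names huber_hinge and cccp_huber_svm are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np


def huber_hinge(z, delta=1.0):
    """Huberised hinge loss on z = 1 - y*f(x): zero for z <= 0,
    quadratic for 0 < z <= delta, linear (hinge-like) beyond delta."""
    z = np.asarray(z, dtype=float)
    out = np.zeros_like(z)
    quad = (z > 0) & (z <= delta)
    out[quad] = z[quad] ** 2 / (2.0 * delta)
    out[z > delta] = z[z > delta] - delta / 2.0
    return out


def huber_hinge_grad(z, delta=1.0):
    """Derivative of huber_hinge with respect to z."""
    z = np.asarray(z, dtype=float)
    g = np.zeros_like(z)
    quad = (z > 0) & (z <= delta)
    g[quad] = z[quad] / delta
    g[z > delta] = 1.0
    return g


def cccp_huber_svm(X, y, C=1.0, delta=1.0, s=2.0, outer=10, inner=200, lr=1e-2):
    """CCCP sketch for a robust (ramp-style truncated) Huber-hinge linear SVM.

    Assumed objective: 0.5*||w||^2 + C * sum_i v_i * [huber_hinge(z_i)
    - huber_hinge(z_i - s)], with z_i = 1 - y_i*(w.x_i + b); the second term
    is concave and is linearised at each outer iteration.  The per-sample
    weights v_i use inverse class frequency (a hypothetical imbalance scaling,
    not necessarily the paper's scheme)."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    v = np.where(y > 0, n / (2.0 * np.sum(y > 0)), n / (2.0 * np.sum(y < 0)))
    for _ in range(outer):
        # 1) CCCP step: linearise the concave part at the current iterate;
        #    beta stays fixed while the convex surrogate is minimised.
        z = 1.0 - y * (X @ w + b)
        beta = C * v * huber_hinge_grad(z - s, delta)
        # 2) minimise the convex surrogate by plain (sub)gradient descent
        for _ in range(inner):
            z = 1.0 - y * (X @ w + b)
            g = C * v * huber_hinge_grad(z, delta) - beta  # d(surrogate)/dz_i
            grad_w = w - X.T @ (g * y)                     # dz_i/dw = -y_i * x_i
            grad_b = -np.sum(g * y)
            w -= lr * grad_w
            b -= lr * grad_b
    return w, b
```

As a usage illustration, w, b = cccp_huber_svm(X, y) with labels y in {-1, +1} trains a linear classifier whose predictions are sign(X @ w + b). The paper instead solves the convex subproblems in the dual and proposes an SMO routine for the high-dimensional text datasets; the gradient-descent inner loop above merely stands in for that solver to keep the sketch self-contained.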
Pages: 22
Related papers (50 records in total)
  • [1] Karasuyama, Masayuki; Takeuchi, Ichiro. Nonlinear Regularization Path for the Modified Huber loss Support Vector Machines. 2010 International Joint Conference on Neural Networks (IJCNN 2010), 2010.
  • [2] Shibuya, T.; Harada, K.; Tohyama, R.; Sakaniwa, K. Iterative decoding based on the concave-convex procedure. IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, 2005, E88A (05): 1346-1364.
  • [3] Gholami, Mohammad Reza; Gezici, Sinan; Strom, Erik G. A Concave-Convex Procedure for TDOA Based Positioning. IEEE Communications Letters, 2013, 17 (04): 765-768.
  • [4] Lopez Chau, Asdrubal; Li, Xiaoou; Yu, Wen. Convex and concave hulls for classification with support vector machine. Neurocomputing, 2013, 122: 198-209.
  • [5] Lee, Young Joo; Jeon, Yongho. Sparse functional linear models via calibrated concave-convex procedure. Journal of the Korean Statistical Society, 2024, 53 (01): 189-207.
  • [6] Balasundaram, S.; Prasad, Subhash Chandra. Robust twin support vector regression based on Huber loss function. Neural Computing & Applications, 2020, 32 (15): 11285-11309.
  • [7] Hou, Huirang; Sun, Biao; Meng, Qinghao. Slow cortical potential signal classification using concave-convex feature. Journal of Neuroscience Methods, 2019, 324.
  • [8] López Chau, Asdrúbal; Li, Xiaoou; Yu, Wen. Large data sets classification using convex-concave hull and support vector machine. Soft Computing, 2013, 17: 793-804.
  • [9] Manochandar, S.; Punniyamoorthy, M. Scaling feature selection method for enhancing the classification performance of Support Vector Machines in text mining. Computers & Industrial Engineering, 2018, 124: 139-156.