Fractional Gradient Descent Optimizer for Linear Classifier Support Vector Machine

Cited by: 4
Authors
Hapsari, Dian Puspita [1]
Utoyo, Imam [2]
Purnami, Santi Wulan [3]
Affiliations
[1] Inst Teknol Adhi Tama, Dept Informat Engn, Surabaya, Indonesia
[2] Univ Airlangga, Dept Math, Surabaya, Indonesia
[3] Inst Teknol Sepuluh November, Dept Math, Surabaya, Indonesia
Source
2020 THIRD INTERNATIONAL CONFERENCE ON VOCATIONAL EDUCATION AND ELECTRICAL ENGINEERING (ICVEE): STRENGTHENING THE FRAMEWORK OF SOCIETY 5.0 THROUGH INNOVATIONS IN EDUCATION, ELECTRICAL, ENGINEERING AND INFORMATICS ENGINEERING | 2020
Keywords
Support Vector Machine; Fractional Gradient Descent; Optimization; Supervised Learning; Data Mining; SVM; Extraction
DOI
10.1109/icvee50212.2020.9243288
CLC Number
G40 [Education]
Subject Classification Number
040101; 120403
Abstract
Supervised learning is one of the activities in data mining that aims to classify or predict data. The Support Vector Machine (SVM), a linear classifier, is among the most powerful supervised learning algorithms. In data prediction, accuracy can be improved by optimizing the parameters of the classification algorithm. In this study, Fractional Gradient Descent is proposed as an unconstrained optimization algorithm for the objective function of the SVM classifier. Fractional Gradient Descent serves as the optimizer of the classification model during training, improving the accuracy of the prediction model. It optimizes the SVM classification model using fractional-order values, taking small steps with a small learning rate toward the global minimum and achieving convergence in fewer iterations. With a learning rate of 0.0001, the SVM classifier with fractional gradient descent has an error rate of 0.273083; with a learning rate of 0.001, the error rate is 0.273070; and with a learning rate of 0.01, the error rate is 0.273134. The SVM classifier with stochastic gradient descent optimization reaches the convergence point at iteration 350, whereas with fractional gradient descent optimization the convergence point is reached at iteration 50, far fewer iterations than the SVM classifier with stochastic gradient descent.
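As a rough illustration of the optimizer described above, the sketch below trains a linear soft-margin SVM with a fractional-order gradient step (Python with NumPy). It is a minimal sketch, not the authors' implementation: the hinge-loss objective, the regularization parameter C, and the Caputo-type factor |w_k - w_{k-1}|^(1-alpha) / Gamma(2-alpha) applied elementwise to the gradient are assumptions based on common fractional gradient descent formulations; the paper's exact update rule may differ.

import numpy as np
from math import gamma

# Minimal sketch: linear soft-margin SVM trained with a fractional-order
# gradient step. The scaling |w_k - w_{k-1}|^(1-alpha) / Gamma(2-alpha) is an
# assumed Caputo-type approximation, not the paper's verified update rule.
def train_svm_fgd(X, y, alpha=0.9, lr=1e-3, C=1.0, epochs=400, eps=1e-8):
    """X: (n, d) features; y: (n,) labels in {-1, +1}; 0 < alpha <= 1."""
    d = X.shape[1]
    w, b = np.zeros(d), 0.0
    w_prev = np.zeros(d)
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1                 # samples violating the margin
        # Subgradient of 0.5*||w||^2 + C * sum of hinge losses.
        grad_w = w - C * (y[viol, None] * X[viol]).sum(axis=0)
        grad_b = -C * y[viol].sum()
        # Elementwise fractional-order scaling; eps keeps the base positive.
        frac = (np.abs(w - w_prev) + eps) ** (1.0 - alpha) / gamma(2.0 - alpha)
        w_prev = w.copy()
        w = w - lr * frac * grad_w
        b = b - lr * grad_b
    return w, b

With alpha = 1 the fractional factor reduces to 1 / Gamma(1) = 1 and the update collapses to ordinary subgradient descent, so alpha directly controls the damped, small-step behaviour the abstract attributes to the fractional optimizer.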
Pages: 5