The Significant Matrix without Genetic Algorithm for The Feature Selection (Significant Matrix 2)

Cited by: 0
Author
Chuasuwan, Ekapong [1 ]
Affiliation
[1] Chiang Rai College, Fac Engn, Dept Comp Engn, Chiang Rai, Thailand
Source
2014 FOURTH JOINT INTERNATIONAL CONFERENCE ON INFORMATION AND COMMUNICATION TECHNOLOGY, ELECTRONIC AND ELECTRICAL ENGINEERING (JICTEE 2014) | 2014
Keywords
Feature Selection; Decision Tree; Significant Matrix; Significant Matrix 2
DOI
Not available
CLC Classification
TM (Electrical Technology); TN (Electronic Technology, Communication Technology)
Subject Classification
0808; 0809
Abstract
This paper presents an improvement of the Significant Matrix [1], which works together with a Genetic Algorithm to select appropriate features for building a decision tree. The proposed work reduces running time by eliminating the Genetic Algorithm's iterations. The new method, named "Significant Matrix 2", is calculated from the relationship between the categorical data and the class label to determine the feature-selection threshold; the resulting sub-dataset contains the appropriate features for constructing decision trees. In experiments on feature-selection time, the proposed method runs 28 times faster than [1] on average. Decision tree models built from the selected features achieve an average classification accuracy of 95.9% on the 11 sample databases, while selecting fewer features than the neural network method [6], using only 48.08% of all features in the example datasets. Furthermore, when the classification accuracy of these decision trees is compared with trees built from features chosen by other selection methods, the proposed work achieves a higher average accuracy. The experimental results show that the proposed method not only provides higher accuracy but also reduces complexity by using fewer features of the dataset.
Pages: 5
Related Papers
50 items in total
  • [41] Genetic Algorithm Based Feature Selection for Paraphrase Recognition
    Chitra, A.
    Rajkumar, Anupriya
    INTERNATIONAL JOURNAL ON ARTIFICIAL INTELLIGENCE TOOLS, 2013, 22 (02)
  • [42] Enhancing the Diversity of Genetic Algorithm for Improved Feature Selection
    AlSukker, Akram
    Khushaba, Rami N.
    Al-Ani, Ahmed
    IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN AND CYBERNETICS (SMC 2010), 2010
  • [43] A new and fast rival genetic algorithm for feature selection
    Jingwei Too
    Abdul Rahim Abdullah
    The Journal of Supercomputing, 2021, 77 : 2844 - 2874
  • [44] Feature selection using genetic algorithm and cluster validation
    Wu, Yi-Leh
    Tang, Cheng-Yuan
    Hor, Maw-Kae
    Wu, Pei-Fen
    EXPERT SYSTEMS WITH APPLICATIONS, 2011, 38 (03) : 2727 - 2732
  • [45] Sparse Matrix Feature Selection in Multi-label Learning
    Yang, Wenyuan
    Zhou, Bufang
    Zhu, William
    ROUGH SETS, FUZZY SETS, DATA MINING, AND GRANULAR COMPUTING, RSFDGRC 2015, 2015, 9437 : 332 - 339
  • [46] Unsupervised feature selection based on matrix factorization and adaptive graph
    Cao L.
    Lin X.
    Su S.
    Xi Tong Gong Cheng Yu Dian Zi Ji Shu/Systems Engineering and Electronics, 2021, 43 (08) : 2197 - 2208
  • [47] Subspace learning for unsupervised feature selection via matrix factorization
    Wang, Shiping
    Pedrycz, Witold
    Zhu, Qingxin
    Zhu, William
    PATTERN RECOGNITION, 2015, 48 (01) : 10 - 19
  • [48] MMMF: Multimodal Multitask Matrix Factorization for Classification and Feature Selection
    Hwang, Jeongyoung
    Lee, Hyunju
    IEEE ACCESS, 2022, 10 : 120155 - 120167
  • [49] A Novel Genetic Algorithm Approach to Simultaneous Feature Selection and Instance Selection
    Albuquerque, Inti Mateus Resende
    Bach Hoai Nguyen
    Xue, Bing
    Zhang, Mengjie
    2020 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (SSCI), 2020, : 616 - 623
  • [50] Matrix Condition Number Prediction with SVM Regression and Feature Selection
    Xu, Shuting
    Zhang, Jun
    PROCEEDINGS OF THE FIFTH SIAM INTERNATIONAL CONFERENCE ON DATA MINING, 2005, : 491 - 495