Fuzzy Rough Attribute Reduction Based on Fuzzy Implication Granularity Information

Cited by: 31
Authors
Dai, Jianhua [1]
Zhu, Zhilin [1]
Zou, Xiongtao [1]
Affiliation
[1] Hunan Normal Univ, Coll Informat Sci & Engn, Hunan Prov Key Lab Intelligent Comp & Language Inf, Changsha 410081, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Rough sets; Entropy; Information entropy; Task analysis; Fuzzy systems; Data models; Computational modeling; Attribute reduction; fuzzy implication granularity information (FIGI); fuzzy rough set; granular computing; KNOWLEDGE GRANULATION; ENTROPY MEASURES; SELECTION; APPROXIMATION; SETS
DOI
10.1109/TFUZZ.2024.3381993
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
The fuzzy rough set model is a powerful tool for attribute reduction on complex data. It commonly employs fuzzy information entropy to measure attribute uncertainty, but using fuzzy conditional information entropy to measure relationships between attributes has a drawback: it lacks monotonicity, which affects the attribute reduction results. Furthermore, entropy computation involves numerous logarithmic function evaluations, resulting in a significant computational burden, and the values produced by logarithmic functions are unbounded. To address these problems, this article presents the concept of fuzzy implication granularity information (FIGI) for measuring attribute information. In addition, we introduce several related generalizations, namely fuzzy conditional implication granularity information (FCIGI), fuzzy mutual implication granularity information, and fuzzy joint implication granularity information, to measure the relationships between attributes. Notably, the introduced FCIGI measure of the relationship between attributes has the desirable property of monotonicity. Crucially, all the measures proposed in this article are bounded, ensuring that the computed values fall within the range of 0 to 1. Finally, we propose a forward greedy attribute reduction algorithm based on monotonic fuzzy conditional implication granularity information (MFIGI) and compare its performance against six attribute reduction algorithms using three classifiers on 15 datasets. The experimental results demonstrate the effectiveness of our MFIGI algorithm.
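The abstract's final step is a forward greedy attribute reduction algorithm driven by a bounded, monotonic measure (MFIGI). The sketch below shows only the generic greedy selection loop under stated assumptions; the `fuzzy_similarity` construction and the `fcigi_score` function (a Lukasiewicz-implication-based stand-in bounded in [0, 1]) are illustrative placeholders, not the paper's actual FIGI/FCIGI/MFIGI definitions, which are not given in this record.

```python
# Minimal sketch of a forward greedy attribute-reduction loop of the kind the
# abstract describes. NOTE: fuzzy_similarity and fcigi_score are HYPOTHETICAL
# placeholders, not the paper's definitions.
import numpy as np

def fuzzy_similarity(X, attrs):
    """Pairwise fuzzy similarity on the selected attributes (placeholder:
    1 minus the mean absolute difference of min-max normalized values)."""
    A = X[:, attrs]
    spread = A.max(axis=0) - A.min(axis=0)
    spread[spread == 0] = 1.0
    A = (A - A.min(axis=0)) / spread
    diff = np.abs(A[:, None, :] - A[None, :, :]).mean(axis=2)
    return 1.0 - diff  # values in [0, 1]

def fcigi_score(X, y, attrs):
    """Hypothetical bounded measure in [0, 1]: average Lukasiewicz implication
    from attribute-induced similarity to same-class membership."""
    R = fuzzy_similarity(X, attrs)
    same_class = (y[:, None] == y[None, :]).astype(float)
    implication = np.minimum(1.0, 1.0 - R + same_class)  # R(x,z) -> same_class(x,z)
    return implication.mean()

def forward_greedy_reduct(X, y, eps=1e-3):
    """Greedily add the attribute giving the largest gain in the bounded
    measure; stop when the best gain drops below eps."""
    remaining = list(range(X.shape[1]))
    reduct, current = [], 0.0
    while remaining:
        best_attr, best_val = None, current
        for a in remaining:
            val = fcigi_score(X, y, reduct + [a])
            if val > best_val:
                best_attr, best_val = a, val
        if best_attr is None or best_val - current < eps:
            break
        reduct.append(best_attr)
        remaining.remove(best_attr)
        current = best_val
    return reduct

if __name__ == "__main__":
    gen = np.random.default_rng(0)
    X = gen.random((60, 6))
    y = (X[:, 0] + X[:, 3] > 1.0).astype(int)  # labels depend only on attributes 0 and 3
    print("selected reduct:", forward_greedy_reduct(X, y))
```

In an actual implementation, the placeholder score would be replaced by the paper's MFIGI measure and a stopping criterion tied to it; the greedy loop structure itself is standard.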
Pages: 3741-3752
Number of pages: 12