Quick attribute reduction with generalized indiscernibility models

Cited by: 42
Authors
Fan Jing [1 ,2 ]
Jiang Yunliang [3 ]
Liu Yong [1 ,2 ]
Affiliations
[1] Zhejiang Univ, Inst Cyber Syst & Control, Hangzhou 310027, Zhejiang, Peoples R China
[2] Zhejiang Univ, State Key Lab Ind Control & Technol, Hangzhou 310027, Zhejiang, Peoples R China
[3] Huzhou Univ, Huzhou 313000, Peoples R China
Keywords
Generalized indiscernibility relation; Attribute reduction; Granular structure; Acceleration policy; Fuzzy-rough sets; Feature selection; Approximations; Operators
DOI
10.1016/j.ins.2017.02.032
Chinese Library Classification (CLC)
TP [Automation and computer technology]
Discipline code
0812
Abstract
The efficiency of attribute reduction is one of the important challenges in the field of Big Data processing. Although many quick attribute reduction algorithms have been proposed, they are tightly coupled with their corresponding indiscernibility relations, and it is difficult to extend their specific acceleration policies to other reduction models. In this paper, we propose a generalized indiscernibility reduction model (GIRM) and a concept of the granular structure in GIRM, which is a quantitative measurement induced from multiple indiscernibility relations and which can be used to represent the computation cost of various models. We then prove that our GIRM is compatible with three typical reduction models. Based on the proposed GIRM, we present a generalized attribute reduction algorithm and a generalized positive-region computing algorithm, and we quantitatively analyze the computational complexity of both algorithms using the granular structure. For generalized attribute reduction, we present systematic acceleration policies that reduce the computational domain and optimize the computation of the positive region. Based on the granular structure, we propose acceleration policies for computing the generalized positive region, as well as fast positive-region computation approaches for the three typical reduction models. Experimental results on various datasets demonstrate the efficiency of our acceleration policies in those three typical reduction models. (C) 2017 Elsevier Inc. All rights reserved.
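As background for the abstract's terminology, the sketch below illustrates the classical Pawlak-style building blocks that GIRM generalizes: indiscernibility (equivalence) classes over an attribute subset, the decision-positive region, and a greedy positive-region-preserving reduct. This is a minimal illustration of the standard rough-set definitions, not the paper's accelerated GIRM algorithms; the function names and the simple forward-selection strategy are our own.

```python
from collections import defaultdict

def ind_classes(table, attrs):
    """Partition row indices into indiscernibility classes: rows that
    agree on every attribute in `attrs` fall into the same block."""
    blocks = defaultdict(set)
    for i, row in enumerate(table):
        blocks[tuple(row[a] for a in attrs)].add(i)
    return list(blocks.values())

def positive_region(table, attrs, decisions):
    """Rows whose indiscernibility block is consistent on the decision,
    i.e. every row in the block carries the same decision value."""
    pos = set()
    for block in ind_classes(table, attrs):
        if len({decisions[i] for i in block}) == 1:
            pos |= block
    return pos

def greedy_reduct(table, decisions):
    """Greedy forward selection: repeatedly add the attribute that grows
    the positive region most, until it matches the full-attribute region."""
    all_attrs = list(range(len(table[0])))
    target = positive_region(table, all_attrs, decisions)
    reduct = []
    while positive_region(table, reduct, decisions) != target:
        best = max((a for a in all_attrs if a not in reduct),
                   key=lambda a: len(positive_region(table, reduct + [a], decisions)))
        reduct.append(best)
    return reduct
```

On a toy decision table where the first attribute alone determines the decision, the greedy search keeps only that attribute; the paper's acceleration policies speed up exactly the repeated positive-region computations in such a loop.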
Pages: 15-36 (22 pages)