Shallow2Deep: Restraining Neural Networks Opacity Through Neural Architecture Search

Cited by: 4
Authors
Agiollo, Andrea [1,2]
Ciatto, Giovanni [1]
Omicini, Andrea [1]
Affiliations
[1] Univ Bologna, Alma Mater Studiorum, Cesena, Italy
[2] Electrolux Profess SpA, Res Hub, I-33170 Pordenone, PN, Italy
Source
EXPLAINABLE AND TRANSPARENT AI AND MULTI-AGENT SYSTEMS, EXTRAAMAS 2021 | 2021, Vol. 12688
Funding
EU Horizon 2020
Keywords
Neural Architecture Search; Evolutionary algorithm; Opacity; Interpretability
DOI
10.1007/978-3-030-82017-6_5
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Recently, the Deep Learning (DL) research community has focused on developing efficient and high-performing Neural Networks (NN). Meanwhile, the eXplainable AI (XAI) research community has focused on making Machine Learning (ML) and Deep Learning methods interpretable and transparent, seeking explainability. This work is a preliminary study on the applicability of Neural Architecture Search (NAS) — a sub-field of DL that automates the design of NN architectures — to XAI. We propose Shallow2Deep, an evolutionary NAS algorithm that exploits local variability to restrain the opacity of DL systems by simplifying NN architectures. Shallow2Deep effectively reduces NN complexity, and therefore opacity, while reaching state-of-the-art performance. Unlike its competitors, Shallow2Deep promotes variability of localised structures in NN, helping to reduce NN opacity. The proposed work analyses the role of local variability in NN architecture design, presenting experimental results that show this feature is indeed desirable.
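The evolutionary search described in the abstract can be illustrated with a toy sketch: a population of shallow candidate architectures is iteratively mutated (deepened, widened, or narrowed), while the fitness function penalises complexity so that simpler, less opaque networks are preferred. All names, the architecture encoding, and the fitness proxy below are illustrative assumptions, not the authors' actual Shallow2Deep implementation.

```python
import random

def fitness(arch, target=(32, 64, 64)):
    """Illustrative proxy: reward similarity to a hypothetical 'good'
    architecture, penalise total width (a stand-in for opacity)."""
    match = sum(1 for a, t in zip(arch, target) if a == t)
    complexity = sum(arch) / 1000.0
    return match - complexity

def mutate(arch, rng):
    """Locally vary the architecture: widen or narrow one layer,
    or append a new layer (the 'shallow to deep' direction)."""
    arch = list(arch)
    op = rng.choice(["widen", "narrow", "deepen"])
    if op == "deepen":
        arch.append(rng.choice([16, 32, 64]))
    else:
        i = rng.randrange(len(arch))
        arch[i] = max(8, arch[i] * 2 if op == "widen" else arch[i] // 2)
    return arch

def evolve(generations=30, pop_size=8, seed=0):
    rng = random.Random(seed)
    population = [[16] for _ in range(pop_size)]  # start from shallow nets
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]          # truncation selection
        children = [mutate(p, rng) for p in parents]
        population = parents + children            # elitism: parents survive
    return max(population, key=fitness)

best = evolve()
print(best)
```

Because parents always survive into the next generation (elitism), the best fitness in the population is non-decreasing across generations; the complexity penalty then biases the search toward the smallest architecture that matches the target.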
Pages: 63-82
Page count: 20