On the Role of Explainable Machine Learning for Secure Smart Vehicles

Cited by: 4
Authors
Scalas, Michele [1]
Giacinto, Giorgio [1]
Affiliations
[1] Univ Cagliari, Dept Elect & Elect Engn, Cagliari, Italy
Source
2020 AEIT INTERNATIONAL CONFERENCE OF ELECTRICAL AND ELECTRONIC TECHNOLOGIES FOR AUTOMOTIVE (AEIT AUTOMOTIVE) | 2020
Keywords
Explainability; Cybersecurity; Machine Learning; Mobility; Smart Vehicles; Automotive; Connected Cars; Autonomous Driving
DOI
10.23919/aeitautomotive50086.2020.9307431
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Subject Classification Codes
0808; 0809
Abstract
The concept of mobility is undergoing a profound transformation driven by the Mobility-as-a-Service paradigm. Accordingly, vehicles, usually referred to as smart, are seeing their architectures revamped to integrate connectivity with the outside environment (V2X) and autonomous driving. A significant part of these innovations is enabled by machine learning. However, deploying such systems raises some concerns. First, the complexity of the algorithms often prevents understanding what these models learn, which is especially relevant in the safety-critical context of mobility. Second, several studies have demonstrated the vulnerability of machine learning-based algorithms to adversarial attacks. For these reasons, research on the explainability of machine learning is on the rise. In this paper, we explore the role of interpretable machine learning in the ecosystem of smart vehicles, with the goal of determining whether, and in what terms, explanations help to design secure vehicles. We provide an overview of the potential uses of explainable machine learning, along with recent work in the literature that has started to investigate the topic, including from the perspectives of human-agent systems and cyber-physical systems. Our analysis highlights both the benefits and the critical issues of employing explanations.
Pages: 6