Exploration of Explainable AI in Context of Human-Machine Interface for the Assistive Driving System

Cited: 4
Authors
Chaczko, Zenon [1 ,3 ]
Kulbacki, Marek [2 ,3 ]
Gudzbeler, Grzegorz [4 ]
Alsawwaf, Mohammad [1 ,6 ]
Thai-Chyzhykau, Ilya [1 ]
Wajs-Chaczko, Peter [5 ]
Affiliations
[1] Univ Technol Sydney, Fac Engn & IT, Ultimo, NSW, Australia
[2] Polish Japanese Acad Informat Technol, R&D Ctr, Warsaw, Poland
[3] DIVE IN AI, Wroclaw, Poland
[4] Univ Warsaw, Fac Polit Sci & Int Studies, Warsaw, Poland
[5] Macquarie Univ, Sydney, NSW, Australia
[6] Imam Abdulrahman bin Faisal Univ, Dammam, Saudi Arabia
Source
INTELLIGENT INFORMATION AND DATABASE SYSTEMS (ACIIDS 2020), PT II | 2020, Vol. 12034
Keywords
Explainable AI; HMI; Convolutional Neural Network; Assistive system for vehicles;
DOI
10.1007/978-3-030-42058-1_42
CLC Classification Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
This paper presents the application of explainable AI (XAI) in the context of a driving assistive system and the issues that arise from it. One of the key functions of the assistive system is to signal potential risks or hazards to the driver, allowing prompt action and timely attention to problems occurring on the road. The decision making of the AI component needs to be explainable in order to minimise the time it takes the driver to decide whether any action is necessary to avoid the risk of a collision or crash. In the explored cases, the autonomous system does not act as a "replacement" for the human driver; instead, its role is to assist the driver in responding to challenging driving situations, difficult manoeuvres or complex road scenarios. The proposed solution validates the XAI approach for the design of a safety and security system that is able to identify and highlight potential risks in autonomous vehicles.
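As an illustration only, the sketch below shows one way the behaviour described in the abstract could be realised: a pretrained CNN object detector scans a dashcam frame, and each warning is paired with a short textual reason so the driver can see at a glance why the alert fired. The choice of torchvision's SSD model, the thresholds, and the apparent-size risk rule are assumptions made for this example; they are not the authors' implementation.

# Minimal illustrative sketch (assumed components, not the paper's system):
# a pretrained SSD detector flags nearby road objects and attaches a short
# human-readable reason to each warning as a simple form of explanation.
import torch
from torchvision.models.detection import ssd300_vgg16, SSD300_VGG16_Weights

weights = SSD300_VGG16_Weights.DEFAULT
model = ssd300_vgg16(weights=weights).eval()
categories = weights.meta["categories"]  # COCO class names

def explain_hazards(frame, score_thr=0.5, area_thr=0.15):
    # frame: RGB tensor of shape (3, H, W) with values in [0, 1]
    with torch.no_grad():
        detections = model([frame])[0]
    h, w = frame.shape[1], frame.shape[2]
    warnings = []
    for box, label, score in zip(detections["boxes"],
                                 detections["labels"],
                                 detections["scores"]):
        conf = float(score)
        if conf < score_thr:
            continue
        x1, y1, x2, y2 = box.tolist()
        rel_area = (x2 - x1) * (y2 - y1) / (w * h)  # apparent size ~ proximity
        if rel_area > area_thr:                     # toy risk heuristic
            name = categories[int(label)]
            reason = (f"{name} covers {rel_area:.0%} of the view "
                      f"(detector confidence {conf:.2f}); likely close ahead")
            warnings.append((name, conf, [x1, y1, x2, y2], reason))
    return warnings

# Example on a stand-in frame (replace with a real dashcam image)
frame = torch.rand(3, 300, 300)
for name, conf, box, reason in explain_hazards(frame):
    print(f"WARNING [{name}]: {reason}")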
Pages: 507-516
Page count: 10