Explainable Artificial Intelligence as an Ethical Principle

Citations: 0
Authors
Gonzalez-Arencibia, Mario [1 ]
Ordonez-Erazo, Hugo [2 ]
Gonzalez-Sanabria, Juan-Sebastian [3 ]
Affiliations
[1] Univ Ciencias Informat, Havana, Cuba
[2] Univ Cauca, Telemat Engn, Popayan, Colombia
[3] Univ Pedag & Tecnol Colombia, Tunja, Colombia
Source
INGENIERIA | 2024, Vol. 29, No. 02
Keywords
artificial intelligence; AI; ethics; ethical principles; explainability; transparency;
DOI
10.14483/23448393.21583
CLC Number
T [Industrial Technology];
Subject Classification Code
08;
Abstract
Context: The advancement of artificial intelligence (AI) has brought numerous benefits across many fields. However, it also poses ethical challenges that must be addressed. One of these is the lack of explainability in AI systems, i.e., the inability to understand how AI makes decisions or generates results. This raises questions about the transparency and accountability of these technologies. The lack of explainability hinders understanding of how AI systems reach conclusions, which can erode user trust and hamper the adoption of such technologies in critical sectors (e.g., medicine or justice). In addition, there are ethical dilemmas regarding responsibility and bias in AI algorithms. Method: Considering the above, there is a research gap concerning the importance of explainable AI from an ethical point of view. The research question is: what is the ethical impact of the lack of explainability in AI systems, and how can it be addressed? The aim of this work is to understand the ethical implications of this issue and to propose methods for addressing it. Results: Our findings reveal that the lack of explainability in AI systems can have negative consequences for trust and accountability. Users may become frustrated by not understanding how a certain decision is made, potentially leading to mistrust of the technology. In addition, the lack of explainability makes it difficult to identify and correct biases in AI algorithms, which can perpetuate injustice and discrimination. Conclusions: The main conclusion of this research is that AI must be ethically explainable in order to ensure transparency and accountability. It is necessary to develop tools and methodologies for understanding how AI systems work and how they make decisions. It is also important to foster multidisciplinary collaboration between experts in AI, ethics, and human rights to address this challenge comprehensively.
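The abstract's call for "tools and methodologies that allow understanding how AI systems work" can be made concrete with one common explainability technique: permutation feature importance. The sketch below is purely illustrative and not taken from the article; the toy "black box" model, the data, and all names are assumptions. It measures how much a model's error grows when each input feature is shuffled, revealing which features the model actually relies on.

```python
# Illustrative sketch of permutation feature importance, a simple
# model-agnostic explainability technique. The "black box" model and
# data here are hypothetical stand-ins, not from the article.
import random

def model(x):
    # Stand-in opaque model: depends strongly on feature 0,
    # weakly on feature 1, and not at all on feature 2.
    return 3.0 * x[0] + 0.5 * x[1]

def mse(predict, X, y):
    # Mean squared error of the model on dataset (X, y).
    return sum((predict(x) - t) ** 2 for x, t in zip(X, y)) / len(X)

def permutation_importance(predict, X, y, feature, rng):
    # Shuffle one feature's column and measure how much the error
    # grows: a large increase means the model relies on that feature.
    baseline = mse(predict, X, y)
    column = [x[feature] for x in X]
    rng.shuffle(column)
    X_perm = [list(x) for x in X]
    for row, value in zip(X_perm, column):
        row[feature] = value
    return mse(predict, X_perm, y) - baseline

rng = random.Random(0)
X = [[rng.uniform(-1.0, 1.0) for _ in range(3)] for _ in range(200)]
y = [model(x) for x in X]

scores = [permutation_importance(model, X, y, f, rng) for f in range(3)]
# Feature 0 should dominate; feature 2, which the model ignores,
# should score (approximately) zero.
```

Reporting such scores alongside a model's predictions is one small, auditable step toward the transparency the article argues for, since it lets users and auditors see which inputs actually drive a decision.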
Pages: 19