EU Artificial Intelligence Act in Banking Sector: Impact and Implementation

Cited by: 0
Author
Salek, Pavel [1 ]
Affiliation
[1] Prague Univ Econ & Business, Prague, Czech Republic
Keywords
AI; Artificial Intelligence Act; explainability; interpretability; regulation; machine learning
DOI
Not available
Chinese Library Classification Code
F [Economics]
Subject Classification Code
02
Abstract
The European Commission's regulatory proposal on Artificial Intelligence is expected to enter into force within the next two years. The Act introduces a regulatory and legal framework for the application of artificial intelligence across the European Union. It divides AI systems into four main categories according to the risk posed by their possible applications; the risks assessed are mainly opacity, complexity, unpredictability, autonomy, and data. Systems in the low-risk category may be used without restrictions. The next category covers AI systems with specific transparency obligations (e.g., chatbots, deep fakes). The high-risk category comprises systems covered by other product safety legislation (e.g., machinery, toys, and medical devices) as well as systems explicitly listed as high risk by the European Commission. AI systems in this category must be sufficiently transparent to enable users to understand and control how they produce their output. The last category covers systems with unacceptable risk, for which the application of AI is prohibited; systems that can cause physical or emotional harm (e.g., social scoring) fall into it. This article assesses the impact of the obligations in the different risk categories on AI systems and discusses the explainability and interpretability techniques that can be used to ensure successful implementation of the Act.
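To give a concrete flavour of the kind of post-hoc interpretability technique the abstract alludes to, the sketch below computes permutation feature importance for a hypothetical credit-scoring classifier, a common transparency aid for high-risk banking models. The synthetic data, feature names, and model choice are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch (illustrative assumptions only): post-hoc explainability for a
# hypothetical credit-scoring model via permutation feature importance.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for applicant data: income, debt ratio, age, prior defaults.
feature_names = ["income", "debt_ratio", "age", "prior_defaults"]
X = rng.normal(size=(1000, 4))
# Hypothetical ground truth: default risk driven mostly by debt ratio and prior defaults.
y = (0.8 * X[:, 1] + 1.2 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Permutation importance: how much held-out accuracy drops when one feature is shuffled.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, mean, std in zip(feature_names, result.importances_mean, result.importances_std):
    print(f"{name:15s} importance = {mean:.3f} +/- {std:.3f}")
```

Feature-level attributions of this kind are one way an institution could document how a high-risk model produces its output; the techniques actually recommended in the article may differ.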
Pages: 348-353
Number of pages: 6