machine learning;
explainable AI;
economics of AI;
regulation;
fairness;
black-box;
DOI:
10.1287/mksc.2022.0396
CLC number:
F [Economics];
Subject classification number:
02;
Abstract:
The most recent artificial intelligence (AI) algorithms lack interpretability. Explainable artificial intelligence (XAI) aims to address this by explaining AI decisions to customers. Although it is commonly believed that requiring fully transparent XAI enhances consumer surplus, our paper challenges this view. We present a game-theoretic model in which a policymaker maximizes consumer surplus in a duopoly market with heterogeneous customer preferences. Our model integrates AI accuracy, explanation depth, and explanation method. We find that partial explanations can arise as an equilibrium in an unregulated setting. Furthermore, we identify scenarios in which customers' and firms' preferences for full explanations are misaligned. In these cases, mandating full explanations may not be socially optimal and could worsen outcomes for both firms and consumers. Flexible XAI policies outperform both extremes: mandated full transparency and no regulation.
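The abstract does not report the model's functional forms, so the following Python sketch is only a hypothetical, stylized illustration of its central claim that mandated full explanation can yield lower consumer surplus than partial explanation. It assumes a symmetric Hotelling duopoly in which explanation depth raises product value linearly but raises marginal cost convexly; the functional forms and all parameter values (V0, BETA, GAMMA, T) are assumptions for this sketch, not quantities from the paper.

# Hypothetical illustration (not the paper's model): a symmetric Hotelling
# duopoly in which explanation depth e in [0, 1] raises product value but
# also raises the cost of serving each customer. All functional forms and
# parameter values below are assumptions made for this sketch.

import numpy as np

V0 = 2.0     # assumed baseline product value
BETA = 0.6   # assumed value gain per unit of explanation depth
GAMMA = 0.8  # assumed convex cost of providing deeper explanations
T = 0.5      # assumed Hotelling transport (preference mismatch) cost

def consumer_surplus(e: float) -> float:
    """Average consumer surplus when both firms offer explanation depth e.

    With symmetric firms, the standard Hotelling equilibrium price is
    marginal cost plus T, and the average mismatch cost is T / 4.
    """
    value = V0 + BETA * e          # gross value rises with explanation depth
    marginal_cost = GAMMA * e**2   # deeper explanations are costlier to supply
    price = marginal_cost + T      # symmetric Hotelling equilibrium price
    return value - price - T / 4   # net surplus of the average consumer

depths = np.linspace(0.0, 1.0, 101)
surplus = np.array([consumer_surplus(e) for e in depths])
best = depths[surplus.argmax()]

print(f"CS under full explanation (e=1):   {consumer_surplus(1.0):.3f}")
print(f"CS under no explanation   (e=0):   {consumer_surplus(0.0):.3f}")
print(f"Consumer-optimal partial depth e*: {best:.2f} -> CS = {surplus.max():.3f}")

With these assumed parameters the consumer-optimal depth is interior (e* = BETA / (2 * GAMMA) = 0.375), so a regulator forcing e = 1 would lower average consumer surplus relative to the partial-explanation outcome, mirroring the misalignment the abstract describes.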
Zhang, Chanyuan
Affiliation:
Rutgers Business Sch, Newark, NJ 07102 USA
Southwestern Univ Finance & Econ, Res Inst Econ & Management, Chengdu, Peoples R China
Cho, Soohyun
Affiliation:
Rutgers Business Sch, Newark, NJ 07102 USA
Vasarhelyi, Miklos
Affiliation:
Rutgers Business Sch, Newark, NJ 07102 USA