The fallacy of inscrutability

Cited by: 63
Author
Kroll, Joshua A. [1]
Affiliation
[1] Univ Calif Berkeley, Sch Informat, Berkeley, CA 94720 USA
Source
PHILOSOPHICAL TRANSACTIONS OF THE ROYAL SOCIETY A-MATHEMATICAL PHYSICAL AND ENGINEERING SCIENCES | 2018, Vol. 376, Issue 2133
Keywords
machine learning; artificial intelligence; governance; accountability;
DOI
10.1098/rsta.2018.0084
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Discipline Codes
07; 0710; 09;
Abstract
Contrary to the criticism that mysterious, unaccountable black-box software systems threaten to make the logic of critical decisions inscrutable, we argue that algorithms are fundamentally understandable pieces of technology. Software systems are designed to interact with the world in a controlled way and built or operated for a specific purpose, subject to choices and assumptions. Traditional power structures can and do turn systems into opaque black boxes, but technologies can always be understood at a higher level, intensionally in terms of their designs and operational goals and extensionally in terms of their inputs, outputs and outcomes. The mechanisms of a system's operation can always be examined and explained, but a focus on machinery obscures the key issue of power dynamics. While structural inscrutability frustrates users and oversight entities, system creators and operators always determine that the technologies they deploy are fit for certain uses, making no system wholly inscrutable. We investigate the contours of inscrutability and opacity, the way they arise from power dynamics surrounding software systems, and the value of proposed remedies from disparate disciplines, especially computer ethics and privacy by design. We conclude that policy should not accede to the idea that some systems are of necessity inscrutable. Effective governance of algorithms comes from demanding rigorous science and engineering in system design, operation and evaluation to make systems verifiably trustworthy. Rather than seeking explanations for each behaviour of a computer system, policies should formalize and make known the assumptions, choices, and adequacy determinations associated with a system. This article is part of the theme issue 'Governing artificial intelligence: ethical, legal, and technical opportunities and challenges'.
Pages: 14