Regulating for trust: Can law establish trust in artificial intelligence?

Cited: 2
Authors
Tamo-Larrieux, Aurelia [1 ]
Guitton, Clement [2 ]
Mayer, Simon [2 ]
Lutz, Christoph [3 ]
Affiliations
[1] Maastricht Univ, Law & Tech Lab, Bouillonstraat 1-3, NL-6211 LH Maastricht, Netherlands
[2] Univ St Gallen, Inst Comp Sci, St Gallen, Switzerland
[3] BI Norwegian Business Sch, Oslo, Norway
Keywords
artificial intelligence; automation; human-automation trust; regulation of technology; trust; trustworthy AI; AUTOMATION; ROBOT; TRUSTWORTHINESS; ANTHROPOMORPHISM; METAANALYSIS; PROPENSITY; STRATEGIES; COMPUTERS; OVERTRUST; ATTITUDES;
DOI
10.1111/rego.12568
CLC Classification
D9 [Law]; DF [Law];
Discipline Code
0301
Abstract
The current political and regulatory discourse frequently references the term "trustworthy artificial intelligence (AI)." In Europe, attempts to ensure trustworthy AI began with the High-Level Expert Group's Ethics Guidelines for Trustworthy AI and have since merged into the regulatory discourse on the EU AI Act. Around the globe, policymakers are actively pursuing initiatives, as the US Executive Order on Safe, Secure, and Trustworthy AI and the Bletchley Declaration on AI showcase, based on the premise that the right regulatory strategy can shape trust in AI. To analyze the validity of this premise, we propose considering the broader literature on trust in automation. On this basis, we constructed a framework to analyze 16 factors that impact trust in AI and in automation more broadly. We analyze the interplay between these factors and disentangle them to determine the impact regulation can have on each. The article thus provides policymakers and legal scholars with a foundation to gauge different regulatory strategies, notably by differentiating between strategies where regulation is more likely to also influence trust in AI (e.g., regulating the types of tasks that AI may fulfill) and those where its influence on trust is more limited (e.g., measures that increase awareness of complacency and automation biases). Our analysis underscores the critical role of nuanced regulation in shaping the human-automation relationship and offers policymakers a targeted approach for debating how to streamline regulatory efforts for future AI governance.
Pages: 780-801 (22 pages)