Certification Labels for Trustworthy AI: Insights From an Empirical Mixed-Method Study

Cited by: 6
Authors
Scharowski, Nicolas [1]
Benk, Michaela [2]
Kühne, Swen J. [3]
Wettstein, Léane [1]
Brühlmann, Florian [1]
Affiliations
[1] University of Basel, Basel, Switzerland
[2] Swiss Federal Institute of Technology (ETH Zurich), Mobiliar Lab for Analytics, Zurich, Switzerland
[3] Zurich University of Applied Sciences, School of Applied Psychology, Zurich, Switzerland
Source
Proceedings of the 6th ACM Conference on Fairness, Accountability, and Transparency (FAccT 2023) | 2023
Keywords
AI; Audit; Documentation; Label; Seal; Certification; Trust; Trustworthy; User study; Artificial intelligence; Ethics
DOI
10.1145/3593013.3593994
CLC Number
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Auditing plays a pivotal role in the development of trustworthy AI. However, current research primarily focuses on creating auditable AI documentation, which is intended for regulators and experts rather than end-users affected by AI decisions. How to communicate to members of the public that an AI has been audited and considered trustworthy remains an open challenge. This study empirically investigated certification labels as a promising solution. Through interviews (N = 12) and a census-representative survey (N = 302), we investigated end-users' attitudes toward certification labels and their effectiveness in communicating trustworthiness in low- and high-stakes AI scenarios. Based on the survey results, we demonstrate that labels can significantly increase end-users' trust and willingness to use AI in both low- and high-stakes scenarios. However, end-users' preferences for certification labels and their effect on trust and willingness to use AI were more pronounced in high-stakes scenarios. Qualitative content analysis of the interviews revealed opportunities and limitations of certification labels, as well as facilitators and inhibitors for the effective use of labels in the context of AI. For example, while certification labels can mitigate data-related concerns expressed by end-users (e.g., privacy and data protection), other concerns (e.g., model performance) are more challenging to address. Our study provides valuable insights and recommendations for designing and implementing certification labels as a promising constituent within the trustworthy AI ecosystem.
Pages: 248-260
Page count: 13