Prescriptive and Descriptive Approaches to Machine-Learning Transparency

Cited: 2
Authors
Adkins, David [1 ]
Alsallakh, Bilal [1 ]
Cheema, Adeel [1 ]
Kokhlikyan, Narine [1 ]
McReynolds, Emily [1 ]
Mishra, Pushkar [1 ]
Procope, Chavez [1 ]
Sawruk, Jeremy [1 ]
Wang, Erin [1 ]
Zvyagina, Polina [1 ]
Affiliations
[1] Meta AI, Menlo Park, CA 94025, USA
Source
EXTENDED ABSTRACTS OF THE 2022 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, CHI 2022 | 2022
Keywords
Method Cards; Developer Experience; Transparency
DOI
10.1145/3491101.3519724
CLC Classification Number
TP3 [Computing Technology; Computer Technology]
Subject Classification Code
0812
Abstract
Specialized documentation techniques have been developed to communicate key facts about machine-learning (ML) systems and the datasets and models they rely on. Techniques such as Datasheets, FactSheets, and Model Cards have taken a mainly descriptive approach, providing various details about the system components. While the above information is essential for product developers and external experts to assess whether the ML system meets their requirements, other stakeholders might find it less actionable. In particular, ML engineers need guidance on how to mitigate potential shortcomings in order to fix bugs or improve the system's performance. We survey approaches that aim to provide such guidance in a prescriptive way. We further propose a preliminary approach, called Method Cards, which aims to increase the transparency and reproducibility of ML systems by providing prescriptive documentation of commonly used ML methods and techniques. We showcase our proposal with an example in small object detection, and demonstrate how Method Cards can communicate key considerations for model developers. We further highlight avenues for improving the user experience of ML engineers based on Method Cards.
Pages: 9
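The abstract does not publish a formal schema for Method Cards, but a minimal sketch of how such prescriptive documentation could be expressed programmatically may help make the idea concrete. The `MethodCard` dataclass, its field names, and the small-object-detection recommendations below are illustrative assumptions for this sketch, not the authors' specification.

```python
# Hypothetical sketch of a prescriptive "Method Card" as a data structure.
# Field names and example content are assumptions, not the paper's schema.
from dataclasses import dataclass, field
from typing import List


@dataclass
class MethodCard:
    """Prescriptive documentation for a commonly used ML method."""
    method_name: str                  # technique being documented
    intended_use: str                 # when the method is appropriate
    prescriptive_guidance: List[str]  # concrete steps for engineers to follow
    known_pitfalls: List[str]         # shortcomings and how to mitigate them
    reproducibility_notes: List[str] = field(default_factory=list)


# Example card for the small-object-detection scenario mentioned in the
# abstract; the specific recommendations are generic illustrations.
small_object_detection_card = MethodCard(
    method_name="Small object detection with anchor-based detectors",
    intended_use="Detecting objects that occupy a small fraction of the image",
    prescriptive_guidance=[
        "Increase input resolution or tile large images before inference.",
        "Tune anchor sizes and aspect ratios to match the target object scale.",
        "Use feature pyramids to preserve fine-grained spatial features.",
    ],
    known_pitfalls=[
        "Aggressive downsampling in the backbone can erase small objects.",
        "Default anchor settings are often tuned for medium or large objects.",
    ],
    reproducibility_notes=["Record anchor configuration and input resolution."],
)

if __name__ == "__main__":
    for step in small_object_detection_card.prescriptive_guidance:
        print("-", step)
```

A card in this style complements descriptive artifacts such as Model Cards or Datasheets by telling engineers what to do, rather than only documenting what was done.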