Algorithmic self-referentiality: How machine learning pushes calculative practices to assess themselves

Cited by: 1
Authors
Millo, Yuval [1 ]
Spence, Crawford [2 ]
Xu, Ruowen [1 ]
Affiliations
[1] Univ Warwick, Coventry, England
[2] Kings Coll London, London, England
Keywords
Credit scoring; Calculative culture; Calculative practices; Machine learning; Algorithmic bias
DOI
10.1016/j.aos.2024.101567
Chinese Library Classification
F8 [Public Finance; Finance]
Discipline code
0202
Abstract
Despite the growing importance of machine learning in today's organisations, we know relatively little about how machine learning operates and how it influences calculative practices and cultures. Based on 695 hours of ethnographic fieldwork with a team of credit modellers at a large internet company in China, this study analyses the calculative culture that underpins the development of credit models. We show that credit scoring methodologies develop progressively into a self-referential set of calculative practices in which substantive concerns about loan default are supplanted by more insular concerns about the seamless operation of the model. Insofar as the latter can only be measured by the model itself, this reduces the role of calculative experts to facilitators of machine learning rather than purposeful interpreters of machine-learning-produced data. In this regard, credit scoring experts focus more on ensuring that models have a robust conversation with themselves than with managers or credit scoring agents. This matters because machine learning-driven credit scoring models end up privileging access to credit for those whose data trails more readily pass through data preparation filters rather than for those who are less likely to default. We thus contribute to an understanding of how machine learning-driven calculative cultures both enact algorithmic bias and operate beyond the ken of purposeful human actors.
Pages: 12