"Everybody knows what a pothole is": representations of work and intelligence in AI practice and governance

Cited by: 1
Authors
Bennett, S. J. [1 ]
Catanzariti, Benedetta [2 ]
Tollon, Fabio [3 ]
Affiliations
[1] Univ Durham, Geog, Durham, England
[2] Univ Edinburgh, Sci Technol & Innovat Studies, Edinburgh, Scotland
[3] Univ Edinburgh, Philosophy, Edinburgh, Scotland
Funding
UK Arts and Humanities Research Council;
Keywords
Artificial intelligence; Machine learning; Labour; Automation; Intelligence; Responsible AI;
DOI
10.1007/s00146-024-02162-0
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
In this paper, we empirically and conceptually examine how distributed human-machine networks of labour comprise a form of underlying intelligence within Artificial Intelligence (AI), considering the implications of this for Responsible Artificial Intelligence (R-AI) innovation. R-AI aims to guide AI research, development and deployment in line with certain normative principles, for example fairness, privacy, and explainability; notions implicitly shaped by comparisons of AI with individualised notions of human intelligence. However, as critical scholarship on AI demonstrates, this is a limited framing of the nature of intelligence, both of humans and AI. Furthermore, it dismisses the skills and labour central to developing AI systems, involving a distributed network of human-directed practices and reasoning. We argue that inequities in the agency and recognition of different types of practitioners across these networks of AI development have implications beyond R-AI, with narrow framings concealing considerations which are important within broader discussions of AI intelligence. Drawing from interactive workshops conducted with AI practitioners, we explore practices of data acquisition, cleaning, and annotation, as the point where practitioners interface with domain experts and data annotators. Despite forming a crucial part of AI design and development, this type of data work is frequently framed as a tedious, unskilled, and low-value process. In exploring these practices, we examine the political role of the epistemic framings that underpin AI development and how these framings can shape understandings of distributed intelligence, labour practices, and annotators' agency within data structures. Finally, we reflect on the implications of our findings for developing more participatory and equitable approaches to machine learning applications in the service of R-AI.
Pages: 3283-3294
Number of pages: 12