Dislocated accountabilities in the "AI supply chain": Modularity and developers' notions of responsibility

Cited by: 36
Authors
Widder, David Gray [1 ]
Nafus, Dawn [2 ]
Affiliations
[1] Carnegie Mellon Univ, Sch Comp Sci, Pittsburgh, PA 15213 USA
[2] Intel Labs, Portland, OR USA
Keywords
Modularity; software engineering; supply chain; artificial intelligence; ethics; located accountability
DOI
10.1177/20539517231177620
Chinese Library Classification
C [Social Sciences, General];
Discipline Code
03; 0303;
Abstract
Responsible artificial intelligence guidelines ask engineers to consider how their systems might harm. However, contemporary artificial intelligence systems are built by composing many preexisting software modules that pass through many hands before becoming a finished product or service. How does this shape responsible artificial intelligence practice? In interviews with 27 artificial intelligence engineers across industry, open source, and academia, our participants often did not see the questions posed in responsible artificial intelligence guidelines to be within their agency, capability, or responsibility to address. We use Suchman's "located accountability" to show how responsible artificial intelligence labor is currently organized and to explore how it could be done differently. We identify cross-cutting social logics, like modularizability, scale, reputation, and customer orientation, that organize which responsible artificial intelligence actions do take place and which are relegated to low status staff or believed to be the work of the next or previous person in the imagined "supply chain." We argue that current responsible artificial intelligence interventions, like ethics checklists and guidelines that assume panoptical knowledge and control over systems, could be improved by taking a located accountability approach, recognizing where relations and obligations might intertwine inside and outside of this supply chain.
Pages: 12
References
64 in total
[1]  
Agre Philip, 1997, Bridging the Great Divide: Social Science, Technical Systems, and Cooperative Work
[2]  
[Anonymous], 1989, Economic Anthropology
[3]  
Baldwin C. Y., 2000, Design Rules: The Power of Modularity, V1
[4]   On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? [J].
Bender, Emily M. ;
Gebru, Timnit ;
McMillan-Major, Angelina ;
Shmitchell, Shmargaret .
PROCEEDINGS OF THE 2021 ACM CONFERENCE ON FAIRNESS, ACCOUNTABILITY, AND TRANSPARENCY, FACCT 2021, 2021, :610-623
[5]  
Bennett CL., 2019, ACM C HUMAN FACTORS, P1, DOI 10.1145/3290605.3300528
[6]  
Buolamwini J., 2018, Conference on Fairness, Accountability and Transparency, V81, P77
[7]  
Callon M, 1998, LAWS OF THE MARKETS, P1, DOI 10.1111/j.1467-954X.1998.tb03468.x
[9]  
Carroll S. R., 2020, The CARE principles for indigenous data governance
[10]  
Chmielinski KS, 2022, arXiv preprint, arXiv:2201.03954