Understanding user sensemaking in fairness and transparency in algorithms: algorithmic sensemaking in over-the-top platform

Cited: 53
Authors
Shin, Donghee [1 ]
Lim, Joon Soo [2 ]
Ahmad, Norita [3 ,4 ]
Ibahrine, Mohammed [4 ]
Affiliations
[1] Zayed Univ, Coll Commun & Media Sci, Dubai, U Arab Emirates
[2] Syracuse Univ, Newhouse Sch Publ Commun, Syracuse, NY USA
[3] Ctr Innovat Teaching & Learning, Sharjah, U Arab Emirates
[4] American Univ Sharjah, Sharjah, U Arab Emirates
Keywords
Algorithmic normative values; Transparent fairness; OTT platforms; Algorithmic sensemaking; Algorithmic credibility; Algorithmic information processing;
DOI
10.1007/s00146-022-01525-9
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
A number of artificial intelligence (AI) systems have been proposed to assist users in identifying issues of algorithmic fairness and transparency. These systems draw on diverse bias detection methods from various perspectives, including exploratory cues, interpretable tools, and revealing algorithms. This study informs the design of such AI systems by probing how users make sense of fairness and transparency, given that these concepts are abstract in nature and lack established methods of evaluation. Focusing on individual perceptions of fairness and transparency, this study examines the roles of normative values in over-the-top (OTT) platforms by empirically testing their effects on sensemaking processes. A mixed-method design incorporating both qualitative and quantitative approaches was used to discover user heuristics and to test the effects of such normative values on user acceptance. Collectively, a composite concept of transparent fairness emerged from user sensemaking processes, along with its formative role in shaping perceived quality and credibility. From a sensemaking perspective, this study discusses the implications of transparent fairness in algorithmic media platforms by clarifying what should be done, and how, to make algorithmic media platforms more trustworthy and reliable. Based on the findings, a theoretical model is developed that defines transparent fairness as an essential algorithmic attribute in the context of OTT platforms.
Pages: 477-490
Page count: 14