“If it is easy to understand, then it will have value”: Examining Perceptions of Explainable AI with Community Health Workers in Rural India

Cited by: 2
Authors
Okolo C.T. [1 ]
Agarwal D. [2 ]
Dell N. [3 ]
Vashistha A. [2 ]
Affiliations
[1] Cornell University, 350 Gates Hall, Ithaca, NY
[2] Information Science, Cornell University, Ithaca, NY
[3] Information Science, Cornell Tech, New York, NY
Funding
National Science Foundation (USA)
Keywords
Artificial Intelligence; Community Health Workers; Explainability; Global South; HCI4D; ICTD; Machine Learning; Mobile Health; XAI4D;
DOI
10.1145/3637348
Abstract
AI-driven tools are increasingly deployed to support low-skilled community health workers (CHWs) in hard-to-reach communities in the Global South. This paper examines how CHWs in rural India engage with and perceive AI explanations and how we might design explainable AI (XAI) interfaces that are more understandable to them. We conducted semi-structured interviews with CHWs who interacted with a design probe to predict neonatal jaundice in which AI recommendations are accompanied by explanations. We (1) identify how CHWs interpreted AI predictions and the associated explanations, (2) unpack the benefits and pitfalls they perceived in the explanations, and (3) detail how different design elements of the explanations impacted their AI understanding. Our findings demonstrate that while CHWs struggled to understand the AI explanations, they nevertheless expressed a strong preference for the explanations to be integrated into AI-driven tools and perceived several benefits of the explanations, such as helping CHWs learn new skills and improving patient trust in AI tools and in CHWs. We conclude by discussing what elements of AI need to be made explainable to novice AI users like CHWs and outline concrete design recommendations to improve the utility of XAI for novice AI users in non-Western contexts. © 2024 Copyright held by the owner/author(s). Publication rights licensed to ACM.