Discovering and Validating AI Errors with Crowdsourced Failure Reports

Cited by: 28
Authors
Cabrera Á.A. [1 ]
Druck A.J. [1 ]
Hong J.I. [1 ]
Perer A. [1 ]
Affiliation
[1] Carnegie Mellon University, Pittsburgh, PA
Funding
National Science Foundation (US);
Keywords
blind spots; crowdsourcing; debugging; machine learning; visual analytics;
DOI
10.1145/3479569
Abstract
AI systems can fail to learn important behaviors, leading to real-world issues like safety concerns and biases. Discovering these systematic failures often requires significant developer attention, from hypothesizing potential edge cases to collecting evidence and validating patterns. To scale and streamline this process, we introduce crowdsourced failure reports, end-user descriptions of how or why a model failed, and show how developers can use them to detect AI errors. We also design and implement Deblinder, a visual analytics system for synthesizing failure reports that developers can use to discover and validate systematic failures. In semi-structured interviews and think-aloud studies with 10 AI practitioners, we explore the affordances of the Deblinder system and the applicability of failure reports in real-world settings. Lastly, we show how collecting additional data from the groups identified by developers can improve model performance. © 2021 ACM.