Measuring and Mitigating Bias in AI-Chatbots

Cited by: 8
Authors
Beattie, Hedin [1 ]
Watkins, Lanier [1 ]
Robinson, William H. [2 ]
Rubin, Aviel [1 ]
Watkins, Shari [3 ]
Affiliations
[1] Johns Hopkins Univ, Baltimore, MD 21218 USA
[2] Vanderbilt Univ, Nashville, TN 37235 USA
[3] Amer Univ, Washington, DC 20016 USA
Source
2022 IEEE INTERNATIONAL CONFERENCE ON ASSURED AUTONOMY (ICAA 2022) | 2022
Keywords
chatbot; AI; ethics; NLP;
D O I
10.1109/ICAA52185.2022.00023
CLC Classification Number
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812 ;
Abstract
The use of artificial intelligence (AI) to train conversational chatbots in the nuances of human interaction raises the concern of whether chatbots will demonstrate prejudice similar to that of humans, and thus require bias training. Ideally, a chatbot is free of racism, sexism, or any other offensive speech; however, several well-known public incidents indicate otherwise (e.g., Microsoft's Tay bot). In this paper, we explore the mechanisms by which open-source conversational chatbots can learn bias, and we investigate potential solutions to mitigate this learned bias. To this end, we developed the Chatbot Bias Assessment Framework to measure bias in conversational chatbots, and then we devised an approach based on counter-stereotypic imagining to reduce this bias. This approach is non-intrusive to the chatbot, since it does not require altering any AI code or deleting any data from the original training dataset.
Pages: 117-123
Number of pages: 7