The Problem of AI Hallucination and How to Solve It

Citations: 0
Authors
Jancarik, Antonin [1 ]
Dusek, Ondrej [2 ]
Affiliations
[1] Charles Univ Prague, Fac Educ, Prague, Czech Republic
[2] Charles Univ Prague, Fac Math & Phys, Prague, Czech Republic
Source
PROCEEDINGS OF THE 23RD EUROPEAN CONFERENCE ON E-LEARNING, ECEL 2024 | 2024, Vol. 23/1
Keywords
Chatbots; AI; Mathematics education; Hallucination; Artificial intelligence
DOI
Not available
CLC Number
TP39 [Computer applications]
Discipline Codes
081203; 0835
Abstract
AI in education has been researched for the past 70 years, but the last two years have brought very significant changes, driven by the introduction of OpenAI's ChatGPT chatbot in November 2022. The GPT (Generative Pre-trained Transformer) language model has dramatically influenced how the public approaches artificial intelligence. For many, generative language models have become synonymous with AI and have come to be viewed, uncritically, as a universal source of answers to most questions. However, it soon became apparent that even generative language models have their limits. Chief among the problems that emerged is hallucination (providing answers that contain false or misleading information), which occurs in all language models. The core difficulty is that hallucinated information is hard to distinguish from correct information, and AI language models present it very persuasively. The risks are much more substantial when language models are used to support learning, where the learner cannot tell correct information from incorrect information. This paper focuses on AI hallucination in mathematics education. It first shows how AI chatbots hallucinate in mathematics and then presents one possible solution to counter this hallucination. The presented solution was created for the AI chatbot Edu-AI, which is designed to tutor students in mathematics. Usually, the problem is approached by having the system verify the correctness of the chatbot's output. Edu-AI does not check responses; it checks inputs instead. If an input containing a factual query is detected, it is redirected, and the answer is traced to authorised knowledge sources and study materials. If no relevant answer can be traced in these sources, the user is offered a handover to a human who will address the question.
In addition to describing the technical solution, the article includes concrete examples of how the system works. The solution was developed for the educational domain but applies to any domain in which users must be provided with reliable information.
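The input-checking approach described in the abstract can be illustrated with a minimal sketch. All names here (`KnowledgeBase`, `route_query`, `is_factual_query`, and the keyword-matching heuristic) are hypothetical illustrations of the general idea, not the actual Edu-AI implementation: factual queries are answered from authorised sources when possible, escalated to a human when not, and only non-factual inputs reach the language model.

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class KnowledgeBase:
    """Toy store of authorised study materials: keyword -> vetted answer."""
    entries: dict

    def lookup(self, query: str) -> Optional[str]:
        # Return a vetted answer if any keyword appears in the query.
        for keyword, answer in self.entries.items():
            if keyword in query.lower():
                return answer
        return None


def is_factual_query(query: str) -> bool:
    """Naive placeholder detector for factual questions."""
    return query.strip().lower().startswith(("what", "when", "who", "how many"))


def route_query(query: str, kb: KnowledgeBase,
                chatbot: Callable[[str], str],
                escalate: Callable[[str], str]) -> str:
    """Check the input before it reaches the chatbot, per the abstract."""
    if is_factual_query(query):
        answer = kb.lookup(query)
        if answer is not None:
            return answer          # grounded in authorised sources
        return escalate(query)     # hand off to a human tutor
    return chatbot(query)          # non-factual input goes to the model
```

Because routing happens on the input side, hallucination is avoided for factual queries by construction rather than detected after the fact, at the cost of needing curated knowledge sources and a human fallback.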
Pages: 122-128
Page count: 7