Mitigating Insecure Outputs in Large Language Models (LLMs): A Practical Educational Module

Cited by: 0
Authors
Barek, Md Abdul [1 ]
Rahman, Md Mostafizur [2 ]
Akter, Mst Shapna [1 ]
Riad, A. B. M. Kamrul Islam [1 ]
Rahman, Md Abdur [1 ]
Shahriar, Hossain [3 ]
Rahman, Akond [4 ]
Wu, Fan [5 ]
Affiliations
[1] Univ West Florida, Dept Intelligent Syst & Robot, Pensacola, FL 32514 USA
[2] Univ West Florida, Dept Cybersecur & Informat Technol, Pensacola, FL USA
[3] Univ West Florida, Ctr Cybersecur, Pensacola, FL USA
[4] Auburn Univ, Comp Sci & Software Engn, Auburn, AL USA
[5] Tuskegee Univ, Dept Comp Sci, Tuskegee, AL USA
Source
2024 IEEE 48TH ANNUAL COMPUTERS, SOFTWARE, AND APPLICATIONS CONFERENCE, COMPSAC 2024 | 2024
Funding
US National Science Foundation;
Keywords
Large Language Models; Cybersecurity; Insecure Output; Sanitization; Authentic Learning;
DOI
10.1109/COMPSAC61105.2024.00389
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Large Language Models (LLMs) can produce impressively useful output, and people increasingly rely on them because they are easy to access and deliver fast, high-quality results. However, using these results without appropriate scrutiny poses serious security risks, particularly when LLMs are integrated with other software, APIs, or plugins, because LLM outputs depend heavily on the prompts they receive. It is therefore essential to sanitize these outputs carefully before passing them to downstream software environments. This paper is designed to teach students about the potential dangers of contaminated LLM output in the context of web development through pre-lab, hands-on, and post-lab experiences. The hands-on lab provides practical guidance on handling LLM vulnerabilities to keep applications safe, illustrated with real-world examples in Python. This approach aims to give students a deeper understanding of the precautions needed to secure software against the vulnerabilities introduced by LLM output.
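The paper's actual lab exercises are not reproduced in this record. As a minimal illustrative sketch of the sanitization idea the abstract describes (treating LLM output as untrusted input before it reaches a web page or a shell), the following Python example uses only the standard library; the function names and the example payloads are illustrative assumptions, not code from the paper.

```python
# Hedged sketch (not the authors' lab code): neutralize LLM output
# before it is embedded in a web page or a shell command.
import html
import shlex

def render_llm_output(raw: str) -> str:
    """Escape LLM-generated text so it cannot inject HTML/JavaScript
    when rendered in a web page (mitigates XSS)."""
    return html.escape(raw, quote=True)

def build_shell_command(llm_suggested_filename: str) -> str:
    """Quote an LLM-suggested value before interpolating it into a
    shell command, so it stays a single inert argument."""
    return f"cat {shlex.quote(llm_suggested_filename)}"

if __name__ == "__main__":
    # Hypothetical contaminated outputs an LLM might produce:
    malicious_html = "<script>alert(document.cookie)</script>"
    print(render_llm_output(malicious_html))
    # -> &lt;script&gt;...&lt;/script&gt; : displayed as text, not executed

    injected_name = "report.txt; rm -rf /"
    print(build_shell_command(injected_name))
    # -> cat 'report.txt; rm -rf /' : the payload cannot start a new command
```

The design point, consistent with the abstract, is that sanitization happens at the boundary between the LLM and the consuming software (HTML renderer, shell, API, or plugin), not inside the model itself.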
Pages: 2424 - 2429 (6 pages)