Industrial applications of large language models

Cited by: 1
Authors
Raza, Mubashar [1 ]
Jahangir, Zarmina [2 ]
Riaz, Muhammad Bilal [4 ,5 ]
Saeed, Muhammad Jasim [2 ]
Sattar, Muhammad Awais [3 ]
Affiliations
[1] COMSATS Univ, Dept Comp Sci, Sahiwal Campus, Islamabad, Pakistan
[2] Riphah Int Univ, Dept Comp Sci, Lahore Campus, Lahore, Pakistan
[3] Lulea Univ Technol, Dept Comp Sci Elect & Space Engn, Lulea, Sweden
[4] VSB Tech Univ Ostrava, IT4Innovat, Ostrava, Czech Republic
[5] Appl Sci Private Univ, Appl Sci Res Ctr, Amman, Jordan
Keywords
Large language models; LLMs; NLP; Transformers
DOI
10.1038/s41598-025-98483-1
Chinese Library Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences]
Discipline codes
07; 0710; 09
Abstract
Large language models (LLMs) are artificial intelligence (AI) based computational models designed to understand and generate human-like text. With billions of training parameters, LLMs excel at identifying intricate language patterns, enabling remarkable performance across a variety of natural language processing (NLP) tasks. Since the introduction of transformer architectures, their text generation capabilities have been reshaping industry. LLMs play an innovative role across various sectors by automating NLP tasks. In healthcare, they assist in diagnosing diseases, personalizing treatment plans, and managing patient data. In the automotive industry, they enable predictive maintenance. They also power recommendation systems and consumer behavior analysis. In education, LLMs support researchers and offer personalized learning experiences. In finance and banking, LLMs are used for fraud detection, customer service automation, and risk management. By automating tasks, improving accuracy, and providing deeper insights, LLMs are driving significant advancements across these industries. Despite these advancements, LLMs face challenges such as ethical concerns, biases in training data, and significant computational resource requirements, which must be addressed to ensure fair and sustainable deployment. This study provides a comprehensive analysis of LLMs, their evolution, and their diverse applications across industries, offering researchers valuable insight into their transformative potential and accompanying limitations.
Pages: 23