Foundation and large language models: fundamentals, challenges, opportunities, and social impacts

Cited by: 0
Authors
Devon Myers
Rami Mohawesh
Venkata Ishwarya Chellaboina
Anantha Lakshmi Sathvik
Praveen Venkatesh
Yi-Hui Ho
Hanna Henshaw
Muna Alhawawreh
David Berdik
Yaser Jararweh
Institutions
[1] Duquesne University
[2] Al Ain University
[3] Deakin University
Source
Cluster Computing | 2024 / Vol. 27
Keywords
Natural language processing; Foundation models; Large language models; Advanced pre-trained models; Artificial intelligence; Machine learning;
DOI
Not available
Abstract
Foundation and Large Language Models (FLLMs) are models trained on massive amounts of data with the intent of performing a variety of downstream tasks. FLLMs are promising drivers for different domains, such as Natural Language Processing (NLP) and other AI-related applications. These models emerged as a result of the AI paradigm shift toward using pre-trained language models (PLMs) and extensive data to train transformer models. FLLMs have demonstrated impressive proficiency in addressing a wide range of NLP applications, including language generation, summarization, comprehension, complex reasoning, and question answering, among others. In recent years, there has been unprecedented interest in FLLMs-related research, driven by contributions from both academic institutions and industry players. Notably, the development of ChatGPT, a highly capable AI chatbot built around FLLMs concepts, has garnered considerable interest from various segments of society. The technological advancement of large language models (LLMs) has had a significant influence on the broader artificial intelligence (AI) community, potentially transforming the processes involved in the development and use of AI systems. Our study provides a comprehensive survey of existing resources related to the development of FLLMs and addresses current concerns, challenges, and social impacts. Moreover, we emphasize the current research gaps and potential future directions in this emerging and promising field.
Pages: 1-26 (25 pages)
Related papers (50 total)
  • [31] The new paradigm in machine learning - foundation models, large language models and beyond: a primer for physicians
    Scott, Ian A.
    Zuccon, Guido
    INTERNAL MEDICINE JOURNAL, 2024, 54 (05) : 705 - 715
  • [32] Opportunities for large language models and discourse in engineering design
    Goepfert, Jan
    Weinand, Jann M.
    Kuckertz, Patrick
    Stolten, Detlef
    ENERGY AND AI, 2024, 17
  • [33] Foundation models in smart agriculture: Basics, opportunities, and challenges
    Li, Jiajia
    Xu, Mingle
    Xiang, Lirong
    Chen, Dong
    Zhuang, Weichao
    Yin, Xunyuan
    Li, Zhaojian
    COMPUTERS AND ELECTRONICS IN AGRICULTURE, 2024, 222
  • [34] Large language models (LLM) in computational social science: prospects, current state, and challenges
    Thapa, Surendrabikram
    Shiwakoti, Shuvam
    Shah, Siddhant Bikram
    Adhikari, Surabhi
    Veeramani, Hariram
    Nasim, Mehwish
    Naseem, Usman
    SOCIAL NETWORK ANALYSIS AND MINING, 2025, 15 (01)
  • [35] The Opportunities and Risks of Large Language Models in Mental Health
    Lawrence, Hannah R.
    Schneider, Renee A.
    Rubin, Susan B.
    Mataric, Maja J.
    McDuff, Daniel J.
    Bell, Megan Jones
    JMIR MENTAL HEALTH, 2024, 11
  • [36] On the Opportunities and Challenges of Foundation Models for GeoAI (Vision Paper)
    Mai, Gengchen
    Huang, Weiming
    Sun, Jin
    Song, Suhang
    Mishra, Deepak
    Liu, Ninghao
    Gao, Song
    Liu, Tianming
    Cong, Gao
    Hu, Yingjie
    Cundy, Chris
    Li, Ziyuan
    Zhu, Rui
    Lao, Ni
    ACM TRANSACTIONS ON SPATIAL ALGORITHMS AND SYSTEMS, 2024, 10 (02)
  • [37] Perils and opportunities in using large language models in psychological research
    Abdurahman, Suhaib
    Atari, Mohammad
    Karimi-Malekabadi, Farzan
    Xue, Mona J.
    Trager, Jackson
    Park, Peter S.
    Golazizian, Preni
    Omrani, Ali
    Dehghani, Morteza
    PNAS NEXUS, 2024, 3 (07):
  • [38] Challenges and Opportunities in Neuro-Symbolic Composition of Foundation Models
    Jha, Susmit
    Roy, Anirban
    Cobb, Adam
    Berenbeim, Alexander
    Bastian, Nathaniel D.
    MILCOM 2023 - 2023 IEEE MILITARY COMMUNICATIONS CONFERENCE, 2023,
  • [39] Debiasing large language models: research opportunities
    Yogarajan, Vithya
    Dobbie, Gillian
    Keegan, Te Taka
    JOURNAL OF THE ROYAL SOCIETY OF NEW ZEALAND, 2025, 55 (02) : 372 - 395
  • [40] Large language models and multimodal foundation models for precision oncology
    Truhn, Daniel
    Eckardt, Jan-Niklas
    Ferber, Dyke
    Kather, Jakob Nikolas
    NPJ PRECISION ONCOLOGY, 2024, 8 (01)