Evolution and Prospects of Foundation Models: From Large Language Models to Large Multimodal Models

Cited by: 16
Authors
Chen, Zheyi [1]
Xu, Liuchang [1]
Zheng, Hongting [1]
Chen, Luyao [1]
Tolba, Amr [2,3]
Zhao, Liang [4]
Yu, Keping [5]
Feng, Hailin [1]
Affiliations
[1] Zhejiang A&F Univ, Coll Math & Comp Sci, Hangzhou 311300, Peoples R China
[2] King Saud Univ, Community Coll, Comp Sci Dept, Riyadh 11437, Saudi Arabia
[3] Menoufia Univ, Fac Sci, Math & Comp Sci Dept, Shibin Al Kawm 32511, Menoufia Govern, Egypt
[4] Shenyang Aerosp Univ, Sch Comp Sci, Shenyang 110136, Peoples R China
[5] Hosei Univ, Grad Sch Sci & Engn, Tokyo 1848584, Japan
Source
CMC-COMPUTERS MATERIALS & CONTINUA | 2024, Vol. 80, No. 2
Keywords
Artificial intelligence; large language models; large multimodal models; foundation models
DOI
10.32604/cmc.2024.052618
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812 (Computer Science and Technology)
Abstract
Since the introduction of the Turing Test in the 1950s, machine language intelligence has made notable progress. Language modeling, crucial to AI development, has evolved from statistical to neural models over the last two decades. Recently, transformer-based Pre-trained Language Models (PLMs) have excelled in Natural Language Processing (NLP) tasks by leveraging large-scale training corpora. Scaling up these models significantly enhances performance and introduces emergent abilities, such as in-context learning, that smaller models lack. The advancement of Large Language Models, exemplified by the development of ChatGPT, has had a significant impact in both academia and industry and has captured widespread societal interest. This survey provides an overview of the development and prospects from Large Language Models (LLMs) to Large Multimodal Models (LMMs). It first discusses the contributions and technological advances of LLMs in natural language processing, especially in text generation and language understanding. It then turns to LMMs, which integrate multiple data modalities such as text, images, and sound, demonstrating advanced capabilities in understanding and generating cross-modal content and opening new pathways for the adaptability and flexibility of AI systems. Finally, the survey highlights the prospects of LMMs in terms of technological development and application potential, while also pointing out challenges in data integration and cross-modal understanding accuracy, providing a comprehensive perspective on the latest developments in this field.
Pages: 1753-1808
Number of pages: 56
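
The abstract notes that scaling introduces in-context learning, an ability smaller models lack: a large model can be steered by a few input-output demonstrations placed directly in the prompt, with no parameter updates. The minimal sketch below illustrates the idea; the helper `build_few_shot_prompt` and the commented-out `complete` call are hypothetical stand-ins for any LLM text-completion API, not part of the surveyed systems.

```python
# Minimal illustration of in-context (few-shot) learning: the model is
# conditioned on labeled demonstrations embedded in the prompt itself,
# so no fine-tuning or gradient update takes place.

def build_few_shot_prompt(demonstrations, query):
    """Concatenate labeled examples and the new query into one prompt."""
    lines = [f"Input: {x}\nOutput: {y}" for x, y in demonstrations]
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

demos = [
    ("The movie was wonderful.", "positive"),
    ("I wasted two hours of my life.", "negative"),
]
prompt = build_few_shot_prompt(demos, "A delightful surprise from start to finish.")
# response = complete(prompt)  # hypothetical call to an LLM completion API
print(prompt)
```

The design point is that the demonstrations act as the "training data" at inference time; empirically, only sufficiently large models reliably pick up the pattern and complete the final line with the intended label.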