Large language models: assessment for singularity

Cited by: 0
Authors
Ishizaki, Ryunosuke [1 ,2 ]
Sugiyama, Mahito [1 ,2 ]
Affiliations
[1] Natl Inst Informat, Tokyo, Japan
[2] Grad Univ Adv Studies, Tokyo, Japan
Keywords
LLM; AI; Singularity; RSI;
DOI
10.1007/s00146-025-02271-4
CLC number (Chinese Library Classification)
TP18 [Artificial intelligence theory];
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
The potential for large language models (LLMs) to attain technological singularity, the point at which artificial intelligence (AI) surpasses human intellect and autonomously improves itself, is a critical concern in AI research. This paper explores the possibility of current LLMs achieving singularity by reviewing frequently discussed philosophical issues within AI ethics and by providing a theoretical framework for recursively self-improving LLMs. We discuss the singularity phenomena predicted in the past by pioneers of computer science and by philosophers; how an intelligence explosion could be realized through the component technologies of modern computer science; and to what extent the metaphysical phenomena speculated about by philosophers have actually approached reality. We begin with a historical overview of AI and intelligence amplification, tracing the evolution of LLMs from their origins to state-of-the-art models. We then propose a theoretical framework to assess whether existing LLM technologies could satisfy the conditions for singularity, with a focus on recursive self-improvement (RSI) and autonomous code generation. We integrate key component technologies, such as reinforcement learning from human feedback (RLHF) and direct preference optimization (DPO), into our analysis, illustrating how these could enable LLMs to independently enhance their reasoning and problem-solving capabilities. By mapping out a potential singularity model lifecycle and examining the dynamics of exponential growth models, we elucidate the conditions under which LLMs might self-replicate and rapidly escalate their intelligence. We conclude with a discussion of the ethical and safety implications of such developments, underscoring the need for responsible and controlled advancement in AI research to mitigate existential risks. Our work aims to contribute to the ongoing dialogue on the future of AI and the critical importance of proactive measures to ensure its beneficial development.
Pages: 11
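
As a rough editorial illustration of two technical elements named in the abstract (these are standard textbook forms, not the authors' own formulations), the exponential-growth dynamics behind an "intelligence explosion" are often sketched as a self-reinforcing growth law, and the DPO objective can be written as a preference-based loss against a frozen reference policy:

\frac{dI}{dt} = k\,I(t)^{p}
\;\;\Longrightarrow\;\;
I(t) =
\begin{cases}
I_0\, e^{k t}, & p = 1 \ \text{(exponential growth)},\\[4pt]
\bigl[\, I_0^{\,1-p} - k(p-1)\,t \,\bigr]^{\frac{1}{1-p}}, & p > 1 \ \text{(finite-time divergence)},
\end{cases}

\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\pi_{\mathrm{ref}})
= -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}
\Bigl[\log\sigma\Bigl(
\beta\log\frac{\pi_\theta(y_w\mid x)}{\pi_{\mathrm{ref}}(y_w\mid x)}
- \beta\log\frac{\pi_\theta(y_l\mid x)}{\pi_{\mathrm{ref}}(y_l\mid x)}
\Bigr)\Bigr],

where I(t) is a hypothetical scalar capability measure with initial value I_0, k > 0 an assumed self-improvement rate, p a growth exponent, \pi_\theta the policy being tuned, \pi_{\mathrm{ref}} a frozen reference policy, (y_w, y_l) the preferred and dispreferred responses drawn from a preference dataset \mathcal{D}, \sigma the logistic function, and \beta a temperature-like hyperparameter. Note that only a growth law superlinear in capability (p > 1) produces the finite-time blow-up usually associated with a singularity; plain exponential growth (p = 1) remains finite for every finite t.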