Large language models: assessment for singularity

Times Cited: 0
Authors
Ishizaki, Ryunosuke [1 ,2 ]
Sugiyama, Mahito [1 ,2 ]
Affiliations
[1] National Institute of Informatics, Tokyo, Japan
[2] The Graduate University for Advanced Studies, Tokyo, Japan
Keywords
LLM; AI; Singularity; RSI
DOI
10.1007/s00146-025-02271-4
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The potential for large language models (LLMs) to attain technological singularity, the point at which artificial intelligence (AI) surpasses human intellect and autonomously improves itself, is a critical concern in AI research. This paper explores the possibility of current LLMs achieving singularity by examining frequently discussed philosophical issues within AI ethics and by providing a theoretical framework for recursively self-improving LLMs. We discuss the singularity phenomena predicted in the past by pioneers of computer science and by philosophers; how an intelligence explosion could be realized through the elemental technologies of modern computer science; and to what extent the metaphysical phenomena speculated on by philosophers have actually approached reality. We begin with a historical overview of AI and intelligence amplification, tracing the evolution of LLMs from their origins to state-of-the-art models. We then propose a theoretical framework to assess whether existing LLM technologies could satisfy the conditions for singularity, with a focus on recursive self-improvement (RSI) and autonomous code generation. We integrate key component technologies, such as reinforcement learning from human feedback (RLHF) and direct preference optimization (DPO), into our analysis, illustrating how these could enable LLMs to independently enhance their reasoning and problem-solving capabilities. By mapping out a potential singularity model lifecycle and examining the dynamics of exponential growth models, we elucidate the conditions under which LLMs might self-replicate and rapidly escalate their intelligence. We conclude with a discussion of the ethical and safety implications of such developments, underscoring the need for responsible and controlled advancement in AI research to mitigate existential risks. Our work aims to contribute to the ongoing dialogue on the future of AI and the critical importance of proactive measures to ensure its beneficial development.
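As an illustrative aside, not a formula taken from the paper, the growth dynamics alluded to in the abstract can be sketched with a simple self-improvement model: if a capability level I(t) improves at a rate proportional to itself, the trajectory is exponential and never diverges in finite time, whereas any super-linear feedback (exponent epsilon > 0, e.g. from recursive self-improvement) yields divergence at a finite time, a common formalization of an intelligence explosion. The symbols I, k, and epsilon are assumed notation for this sketch only.

% Minimal sketch of self-improvement dynamics (assumed notation; requires amsmath).
% Linear feedback: improvement rate proportional to current capability.
\[
  \frac{dI}{dt} = k I
  \quad\Longrightarrow\quad
  I(t) = I_0 e^{k t}
  \quad \text{(exponential growth, no finite-time divergence)}
\]
% Super-linear feedback: recursive self-improvement with exponent \epsilon > 0.
\[
  \frac{dI}{dt} = k I^{1+\epsilon}
  \quad\Longrightarrow\quad
  I(t) = I_0 \bigl(1 - \epsilon k I_0^{\epsilon} t\bigr)^{-1/\epsilon},
  \qquad
  t^{*} = \frac{1}{\epsilon\, k\, I_0^{\epsilon}}
  \quad \text{(divergence at the finite time } t^{*}\text{)}
\]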
Pages: 11