BC4LLM: A perspective of trusted artificial intelligence when blockchain meets large language models

Cited by: 6
Authors
Luo, Haoxiang [1 ,2 ]
Luo, Jian [2 ]
Vasilakos, Athanasios V. [3 ]
Affiliations
[1] Univ Elect Sci & Technol China, Key Lab Opt Fiber Sensing & Commun, Minist Educ, Chengdu 611731, Peoples R China
[2] Hainan Chain Shield Technol Co LTD, Haikou 570311, Peoples R China
[3] Imam Abdulrahman Bin Faisal Univ, Coll Comp Sci & Informat Technol, Dammam 31441, Saudi Arabia
Funding
National Natural Science Foundation of China;
Keywords
Trusted AI; Blockchain; Large language model; AI-generated content; ChatGPT; INTERNET; OPPORTUNITIES; CHALLENGES; CONSENSUS; SECURITY; PRIVACY;
DOI
10.1016/j.neucom.2024.128089
Chinese Library Classification Number
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
In recent years, artificial intelligence (AI) and machine learning (ML) have been reshaping society's production methods and productivity, and changing the paradigm of scientific research. Among them, AI language models represented by ChatGPT have made great progress. Such large language models (LLMs) serve people in the form of AI-generated content (AIGC) and are widely used in consulting, healthcare, and education. However, it is difficult to guarantee the authenticity and reliability of the data from which AIGC is learned. In addition, distributed AI training carries hidden risks of privacy disclosure. Moreover, the content generated by LLMs is difficult to identify and trace, and cross-platform mutual recognition of such content remains hard to achieve. In the coming era of AI powered by LLMs, these information security issues will be greatly amplified and will affect everyone's life. Therefore, we consider empowering LLMs with blockchain technology, which offers superior security features, to propose a vision for trusted AI. This survey mainly introduces the motivation and technical route of blockchain for LLM (BC4LLM), including a reliable learning corpus, a secure training process, and identifiable generated content. It also reviews potential applications and future challenges, especially in the frontier field of communication networks, including network resource allocation, dynamic spectrum sharing, and semantic communication. Based on this work and the prospects of combining blockchain and LLMs, we expect to help the early realization of trusted AI and provide guidance for the academic community.
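The "identifiable generated content" idea from the abstract can be illustrated with a minimal sketch: each piece of AIGC is hashed and appended to a hash-chained ledger, so any platform can later verify provenance and detect tampering. All names here (`Ledger`, `record`, `verify`, the `demo-llm` metadata) are hypothetical illustrations, not an API or protocol taken from the surveyed paper.

```python
import hashlib
import json

# Illustrative sketch only: a hash-chained ledger standing in for a
# blockchain. Each block binds the hash of one piece of AI-generated
# content (AIGC) to the previous block, giving append-only provenance.

class Ledger:
    def __init__(self):
        # each block: {index, prev_hash, content_hash, meta, hash}
        self.blocks = []

    def record(self, content: str, meta: dict) -> dict:
        """Append a block anchoring the SHA-256 hash of `content`."""
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        body = {
            "index": len(self.blocks),
            "prev_hash": prev_hash,
            "content_hash": hashlib.sha256(content.encode()).hexdigest(),
            "meta": meta,
        }
        # canonical serialization so the block hash is reproducible
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.blocks.append(body)
        return body

    def verify(self, content: str) -> bool:
        """Check the chain is intact and `content` was recorded on it."""
        target = hashlib.sha256(content.encode()).hexdigest()
        prev, found = "0" * 64, False
        for b in self.blocks:
            if b["prev_hash"] != prev:
                return False  # broken chain link
            check = {k: b[k] for k in ("index", "prev_hash", "content_hash", "meta")}
            if hashlib.sha256(json.dumps(check, sort_keys=True).encode()).hexdigest() != b["hash"]:
                return False  # block was altered after recording
            if b["content_hash"] == target:
                found = True
            prev = b["hash"]
        return found

ledger = Ledger()
ledger.record("LLM output: hello", {"model": "demo-llm", "ts": 0})
print(ledger.verify("LLM output: hello"))  # True
print(ledger.verify("tampered output"))    # False
```

In a real BC4LLM setting the ledger would be a distributed blockchain with consensus among platforms, which is what would enable the cross-platform mutual recognition the abstract calls for; this single-process version only shows the hash-chaining principle.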
Pages: 20