Machine Advisors: Integrating Large Language Models Into Democratic Assemblies

Cited by: 0
|
Author
Specian, Petr [1 ,2 ]
Affiliations
[1] Prague Univ Econ & Business, Fac Econ, Dept Philosophy, W Churchill Sq 1938-4, Prague 3, 13067, Czech Republic
[2] Charles Univ Prague, Fac Humanities, Dept Psychol & Life Sci, Prague, Czech Republic
Keywords
Large language models; epistemic democracy; institutional design; artificial intelligence;
DOI
10.1080/02691728.2024.2379271
Chinese Library Classification (CLC)
N09 [History of Natural Science]; B [Philosophy, Religion];
Discipline Classification Codes
01 ; 0101 ; 010108 ; 060207 ; 060305 ; 0712 ;
Abstract
Could the employment of large language models (LLMs) in place of human advisors improve the problem-solving ability of democratic assemblies? LLMs represent the most significant recent incarnation of artificial intelligence and could change the future of democratic governance. This paper assesses their potential to serve as expert advisors to democratic representatives. While LLMs promise enhanced expertise availability and accessibility, they also present specific challenges. These include hallucinations, misalignment and value imposition. After weighing LLMs' benefits and drawbacks against human advisors, I argue that time-tested democratic procedures, such as deliberation and aggregation by voting, provide safeguards that are effective against human and machine advisor shortcomings alike. Additional protective measures may include custom training for advisor LLMs or boosting representatives' competencies in query formulation. Implementation of adversarial proceedings in which LLM advisors would debate each other and provide dissenting opinions is likely to yield further epistemic benefits. Overall, promising interventions that would mitigate the LLM risks appear feasible. Machine advisors could thus empower human decision-makers to make more autonomous, higher-quality decisions. On this basis, I defend the hypothesis that LLMs' careful integration into policymaking could augment democracy's ability to address today's complex social problems.
Pages: 16
    INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS, 2024, 15 (08) : 62 - 69