Decision-making power and responsibility in an automated administration

Cited by: 0
Authors
Langer, Charlotte [1]
Affiliations
[1] Leipzig, Germany
Source
Discover Artificial Intelligence | 2024 / Volume 4 / Issue 1
Keywords
Administration; Artificial intelligence; Automated decision-making; Non-delegation; Public service; Rule of law
DOI
10.1007/s44163-024-00152-1
Abstract
The paper casts a spotlight on one of the manifold legal questions that arise with the proliferation of artificial intelligence. This new technology is attractive for many fields, including public administration, where it promises greater accuracy and efficiency, freeing up resources for better interaction and engagement with citizens. However, public powers are bound by certain constitutional constraints that must be observed regardless of whether decisions are made by humans or machines. These include the non-delegation principle, which limits the delegation and sub-delegation of decisions affecting citizens’ rights in order to ensure governmental accountability, reviewability, and contestability. This constrains the automation of decision-making by public entities, since algorithmic decision-making entails delegating decisions to software development companies on the one hand and to algorithms on the other. The present paper reveals and explains these constraints and concludes with suggestions for navigating these conflicts in a manner that satisfies the rule of law while maximizing the benefits of new technologies. © The Author(s) 2024.