Regulating ChatGPT and other Large Generative AI Models

Cited by: 114
Authors
Hacker, Philipp [1 ]
Engel, Andreas [2 ]
Mauer, Marco [3 ]
Affiliations
[1] European Univ Viadrina, European New Sch Digital Studies, Frankfurt, Germany
[2] Heidelberg Univ, Heidelberg, Germany
[3] Humboldt Univ, Berlin, Germany
Source
PROCEEDINGS OF THE 6TH ACM CONFERENCE ON FAIRNESS, ACCOUNTABILITY, AND TRANSPARENCY, FACCT 2023 | 2023
Keywords
ARTIFICIAL-INTELLIGENCE; OPPORTUNITIES; CHALLENGES; MARKET
DOI
10.1145/3593013.3594067
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Large generative AI models (LGAIMs), such as ChatGPT, GPT-4 or Stable Diffusion, are rapidly transforming the way we communicate, illustrate, and create. However, AI regulation, in the EU and beyond, has primarily focused on conventional AI models, not LGAIMs. This paper will situate these new generative models in the current debate on trustworthy AI regulation, and ask how the law can be tailored to their capabilities. After laying technical foundations, the legal part of the paper proceeds in four steps, covering (1) direct regulation, (2) data protection, (3) content moderation, and (4) policy proposals. It suggests a novel terminology to capture the AI value chain in LGAIM settings by differentiating between LGAIM developers, deployers, professional and non-professional users, as well as recipients of LGAIM output. We tailor regulatory duties to these different actors along the value chain and suggest strategies to ensure that LGAIMs are trustworthy and deployed for the benefit of society at large. Rules in the AI Act and other direct regulation must match the specificities of pre-trained models. The paper argues for three layers of obligations concerning LGAIMs (minimum standards for all LGAIMs; high-risk obligations for high-risk use cases; collaborations along the AI value chain). In general, regulation should focus on concrete high-risk applications, and not the pre-trained model itself, and should include (i) obligations regarding transparency and (ii) risk management. Non-discrimination provisions (iii) may, however, apply to LGAIM developers. Lastly, (iv) the core of the DSA's content moderation rules should be expanded to cover LGAIMs. This includes notice and action mechanisms, and trusted flaggers.
Pages: 1112-1123
Page count: 12