Dual use concerns of generative AI and large language models

Cited by: 8
Authors
Grinbaum, Alexei [1]
Adomaitis, Laurynas [1]
Affiliation
[1] CEA Saclay, LARSIM, F-91191 Gif-sur-Yvette, France
Keywords
Dual Use Research of Concern (DURC); generative AI; Large Language Models (LLMs); AI ethics; TRANSMISSION; ETHICS; VIRUS
DOI
10.1080/23299460.2024.2304381
Chinese Library Classification
B82 [Ethics (Moral Science)]
Abstract
We suggest applying the Dual Use Research of Concern (DURC) framework, originally designed for the life sciences, to the domain of generative AI, with a specific focus on Large Language Models (LLMs). Given its demonstrated advantages and drawbacks in biological research, we believe the DURC criteria can be effectively redefined for LLMs, potentially contributing to improved AI governance. Acknowledging the balance that must be struck when employing the DURC framework, we highlight its crucial political role in enhancing societal awareness of the impact of generative AI. Finally, we offer a series of specific recommendations for applying the DURC approach to LLM research.
Pages: 18