The risks associated with Artificial General Intelligence: A systematic review

Cited by: 81
Authors
McLean, Scott [1]
Read, Gemma J. M. [1]
Thompson, Jason [1,2]
Baber, Chris [3]
Stanton, Neville A. [1]
Salmon, Paul M. [1]
Affiliations
[1] University of the Sunshine Coast, Centre for Human Factors and Sociotechnical Systems, Sippy Downs, QLD, Australia
[2] University of Melbourne, Melbourne School of Design, Transport, Health and Urban Design (THUD) Research Lab, Parkville, VIC, Australia
[3] University of Birmingham, School of Computer Science, Birmingham, England
Funding
Australian Research Council
Keywords
Artificial General Intelligence; artificial intelligence; risk; existential threat; safety; sociotechnical systems
DOI
10.1080/0952813X.2021.1964003
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Artificial General Intelligence (AGI) offers enormous benefits for humanity, yet it also poses great risk. The aim of this systematic review was to summarise the peer-reviewed literature on the risks associated with AGI. The review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Sixteen articles were deemed eligible for inclusion. The articles included in the review were classified as philosophical discussions, applications of modelling techniques, and assessments of current frameworks and processes in relation to AGI. The review identified a range of risks associated with AGI, including AGI removing itself from the control of human owners/managers, AGI being given or developing unsafe goals, the development of unsafe AGI, AGIs with poor ethics, morals, and values, inadequate management of AGI, and existential risks. Several limitations of the AGI literature base were also identified, including a limited number of peer-reviewed articles and modelling techniques focused on AGI risk, a lack of research on risks in the specific domains in which AGI may be implemented, a lack of specific definitions of AGI functionality, and a lack of standardised AGI terminology. Recommendations to address the identified issues in AGI risk research are required to guide AGI design, implementation, and management.
Pages: 649-663
Page count: 15