A model of pathways to artificial superintelligence catastrophe for risk and decision analysis

Cited by: 25
Authors
Barrett, Anthony M. [1 ]
Baum, Seth D. [1 ]
Affiliations
[1] Global Catastrophe Risk Institute, Washington, DC, USA
Keywords
Artificial superintelligence (ASI); catastrophe scenario pathway; fault tree; influence diagram
DOI
10.1080/0952813X.2016.1186228
CLC classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
An artificial superintelligence (ASI) is an artificial intelligence that is significantly more intelligent than humans in all respects. Whilst ASI does not currently exist, some scholars propose that it could be created sometime in the future, and furthermore that its creation could cause a severe global catastrophe, possibly even resulting in human extinction. Given the high stakes, it is important to analyze ASI risk and factor the risk into decisions related to ASI research and development. This paper presents a graphical model of major pathways to ASI catastrophe, focusing on ASI created via recursive self-improvement. The model uses the established risk and decision analysis modelling paradigms of fault trees and influence diagrams in order to depict combinations of events and conditions that could lead to AI catastrophe, as well as intervention options that could decrease risks. The events and conditions include select aspects of the ASI itself as well as the human process of ASI research, development and management. Model structure is derived from published literature on ASI risk. The model offers a foundation for rigorous quantitative evaluation and decision-making on the long-term risk of ASI catastrophe.
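The fault-tree paradigm the abstract refers to combines events through AND/OR gates into a top-level failure event. A minimal sketch of that computation, assuming independent events (the event names and probabilities below are hypothetical illustrations, not values from the paper):

```python
# Sketch of fault-tree gate arithmetic under an independence assumption.
# Leaf events and probabilities are purely illustrative.

def p_and(probs):
    """Probability that all independent input events occur (AND gate)."""
    p = 1.0
    for x in probs:
        p *= x
    return p

def p_or(probs):
    """Probability that at least one independent input event occurs (OR gate)."""
    p = 1.0
    for x in probs:
        p *= 1.0 - x
    return 1.0 - p

# Hypothetical leaf events on a pathway to catastrophe
p_asi_created = 0.1        # ASI emerges via recursive self-improvement
p_goals_unsafe = 0.5       # the ASI's goals are unsafe
p_containment_fails = 0.3  # human containment/management measures fail

# Catastrophe requires all three conditions to hold (AND gate)
p_catastrophe = p_and([p_asi_created, p_goals_unsafe, p_containment_fails])
print(round(p_catastrophe, 3))  # 0.015
```

Influence diagrams extend this picture with decision nodes (e.g. intervention options) whose settings change the leaf probabilities; the paper's model itself is qualitative, so numbers like these would come from later quantitative evaluation.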
Pages: 397-414
Page count: 18