The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence

Cited by: 54

Authors
Brynjolfsson, Erik [1 ,2 ,3 ]
Affiliations
[1] Stanford Univ, Digital Econ Lab, Inst Human Cent AI, Stanford, CA 94305 USA
[2] Stanford Univ, Grad Sch Business, Stanford, CA 94305 USA
[3] Stanford Univ, Dept Econ, Stanford, CA 94305 USA
Keywords
TECHNOLOGY; TAXATION;
DOI
10.1162/daed_a_01915
Chinese Library Classification
C [Social Sciences, General];
Discipline Codes
03; 0303;
Abstract
In 1950, Alan Turing proposed a test of whether a machine was intelligent: could a machine imitate a human so well that its answers to questions were indistinguishable from a human's? Ever since, creating intelligence that matches human intelligence has implicitly or explicitly been the goal of thousands of researchers, engineers, and entrepreneurs. The benefits of human-like artificial intelligence (HLAI) include soaring productivity, increased leisure, and, perhaps most profoundly, a better understanding of our own minds. But not all types of AI are human-like; in fact, many of the most powerful systems are very different from humans, and an excessive focus on developing and deploying HLAI can lead us into a trap. As machines become better substitutes for human labor, workers lose economic and political bargaining power and become increasingly dependent on those who control the technology. In contrast, when AI is focused on augmenting humans rather than mimicking them, humans retain the power to insist on a share of the value created. What is more, augmentation creates new capabilities and new products and services, ultimately generating far more value than merely human-like AI. While both types of AI can be enormously beneficial, there are currently excess incentives for automation rather than augmentation among technologists, business executives, and policy-makers.
Pages: 272-287
Page count: 16