Two arguments against human-friendly AI

Cited by: 0
Author
Ken Daley
Affiliation
[1] Southern Methodist University, Philosophy Department
Source
AI and Ethics | 2021, Volume 1, Issue 4
Keywords
Artificial intelligence; Artificial general intelligence; Superintelligence; Existential risk; Control problem; Impartiality; Friendly AI;
DOI
10.1007/s43681-021-00051-6
Abstract
The past few decades have seen a substantial increase in focus on the myriad ethical implications of artificial intelligence. Included amongst the numerous issues is the existential risk that some believe could arise from the development of artificial general intelligence (AGI), an as-yet hypothetical form of AI that is able to perform all the same intellectual feats as humans. This has led to extensive research into how humans can avoid losing control of an AI that is at least as intelligent as the best of us. This ‘control problem’ has given rise to research into the development of ‘friendly AI’, a highly competent AGI that will benefit, or at the very least not be hostile toward, humans. Though my question is focused upon AI ethics and issues surrounding the value of friendliness, I want to question the pursuit of human-friendly AI (hereafter FAI). In other words, we might ask whether worries regarding harm to humans are sufficient reason to develop FAI rather than impartially ethical AGI, that is, an AGI designed to take the interests of all moral patients, both human and non-human, into consideration. I argue that, given that we are capable of developing AGI, it ought to be developed with impartial, species-neutral values rather than with values that prioritize friendliness to humans above all else.
Pages: 435-444
Number of pages: 9
Related papers
50 records in total
  • [1] How to Grow a Robot: Developing Human-Friendly, Social AI
    Randall, Ian
    PHYSICS WORLD, 2020, 33 (09) : 46 - 47
  • [2] Human-friendly soft actuator
    Noritsugu, T
    INTERNATIONAL JOURNAL OF THE JAPAN SOCIETY FOR PRECISION ENGINEERING, 1997, 31 (02) : 92 - 96
  • [3] Architecture of the human-friendly robot 'Marvel'
    Egi, M
    Kawano, J
    Shimamura, J
    ADVANCED ROBOTICS, 1999, 13 (03) : 227 - 228
  • [4] Scalable human-friendly resource names
    Ballintijn, G
    van Steen, M
    Tanenbaum, AS
    IEEE INTERNET COMPUTING, 2001, 5 (05) : 20 - 27
  • [5] Building human-friendly robot systems
    Heinzmann, J
    Zelinsky, A
    ROBOTICS RESEARCH, 2000, : 305 - 312
  • [6] Human-friendly organic integrated circuits
    Sekitani, Tsuyoshi
    Someya, Takao
    MATERIALS TODAY, 2011, 14 (09) : 398 - 407
  • [7] Human-friendly interaction for learning and cooperation
    Kristensen, S
    Horstmann, S
    Klandt, J
    Lohnert, F
    Stopp, A
    2001 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS I-IV, PROCEEDINGS, 2001, : 2590 - 2595
  • [8] Safety Analysis for a Human-Friendly Manipulator
    Haddadin, Sami
    Albu-Schaeffer, Alin
    Hirzinger, Gerd
    INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS, 2010, 2 (03) : 235 - 252
  • [9] Focused section on human-friendly mechatronics
    Kobayashi, H
    IEEE-ASME TRANSACTIONS ON MECHATRONICS, 1997, 2 (04) : 217 - 217