Can we program or train robots to be good?

Cited by: 21
Author
Sharkey, Amanda [1]
Affiliation
[1] Univ Sheffield, Dept Comp Sci, Sheffield S1 4DP, S Yorkshire, England
Keywords
Ethics; Moral competence; Robot; Decision-making; Minimally ethical; ETHICS; MORALITY; CARE
DOI
10.1007/s10676-017-9425-5
Chinese Library Classification
B82 [Ethics (moral philosophy)]
Abstract
As robots are deployed in a widening range of situations, it is necessary to develop a clearer position about whether or not they can be trusted to make good moral decisions. In this paper, we take a realistic look at recent attempts to program and to train robots to develop some form of moral competence. Examples of implemented robot behaviours that have been described as 'ethical', or 'minimally ethical', are considered, although they are found to operate only in quite constrained and limited application domains. There is a general recognition that current robots cannot be described as full moral agents, but it is less clear whether this will always be the case. Concerns are raised about the insufficiently justified use of terms such as 'moral' and 'ethical' to describe the behaviours of robots that are often more related to safety considerations than to moral ones. Given the current state of the art, two possible responses are identified. The first involves continued efforts to develop robots that are capable of ethical behaviour. The second is to argue against, and to attempt to avoid, placing robots in situations that demand moral competence and an understanding of the surrounding social situation. There is something to be gained from both responses, but it is argued here that the second is the more responsible choice.
Pages: 283-295
Page count: 13