Military robots should not look like humans

Cited by: 0
Authors
Kamil Mamak
Kaja Kowalczewska
Affiliations
[1] University of Helsinki, RADAR: Robophilosophy, AI Ethics and Datafication Research Group
[2] Jagiellonian University, Department of Criminal Law, Faculty of Law and Administration
[3] University of Wrocław, Digital Justice Center
Source
Ethics and Information Technology | 2023 / Vol. 25
Keywords
Military robots; Design; Human–robot interactions; Anthropomorphism
DOI
Not available
Abstract
Using robots in military contexts is problematic at many levels. There are social, legal, and ethical issues that should be discussed before their wider deployment. In this paper, we focus on an additional problem: their human likeness. We claim that military robots should not look like humans. That design choice may bring additional risks that endanger human lives, thereby contradicting the very justification for deploying robots in war, namely reducing human deaths and injuries. We discuss two threats: epistemological and patient. The epistemological threat is connected with the risk of mistaking robots for humans, given the limited means of gathering information about the external world, a risk that may be amplified by the rush of combat and the need to fight robots at a distance. The patient threat relates to the attachment people develop to robots, which in military contexts may cause additional deaths through hesitance to sacrifice robots in order to save humans in peril, or through risking human lives to save robots.