Toward an Ethics of AI Assistants: an Initial Framework

Cited by: 1
Author
Danaher, J. [1]
Institution
[1] School of Law, NUI Galway, University Road, Galway
Keywords
Artificial intelligence; Autonomy; Cognitive outsourcing; Degeneration; Embodied cognition; Interpersonal communications
DOI
10.1007/s13347-018-0317-3
Abstract
Personal AI assistants are now nearly ubiquitous. Every leading smartphone operating system comes with a personal AI assistant that promises to help you with basic cognitive tasks: searching, planning, messaging, scheduling and so on. Usage of such devices is effectively a form of algorithmic outsourcing: getting a smart algorithm to do something on your behalf. Many have expressed concerns about this algorithmic outsourcing. They claim that it is dehumanising, leads to cognitive degeneration, and robs us of our freedom and autonomy. Some people have a more subtle view, arguing that it is problematic in those cases where its use may degrade important interpersonal virtues. In this article, I assess these objections to the use of AI assistants. I will argue that the ethics of their use is complex. There are no quick fixes or knockdown objections to the practice, but there are some legitimate concerns. By carefully analysing and evaluating the objections that have been lodged to date, we can begin to articulate an ethics of personal AI use that navigates those concerns. In the process, we can locate some paradoxes in our thinking about outsourcing and technological dependence, and we can think more clearly about what it means to live a good life in the age of smart machines. © 2018, Springer Science+Business Media B.V., part of Springer Nature.
Pages: 629-653
Number of pages: 24