In support of “no-fault” civil liability rules for artificial intelligence

Cited by: 4
Author
Emiliano Marchisio
Affiliation
[1] Law Department, “Giustino Fortunato” University, Benevento
Source
SN Social Sciences, Vol. 1, Issue 2
Keywords
Artificial intelligence; Civil liability; No-fault; Robots; Self driving cars; Tort law;
DOI
10.1007/s43545-020-00043-z
Abstract
Civil liability is traditionally understood as indirect market regulation, since the risk of incurring liability for damages creates incentives to invest in safety. Such an approach, however, is inappropriate in markets for artificial intelligence devices. Under the current paradigm of civil liability, compensation is allowed only to the extent that “someone” is identified as a debtor. In many cases, however, it would not be useful to impose the obligation to pay such compensation on producers and programmers: algorithms can “behave” largely independently of the instructions initially provided by programmers, so that they can err despite no flaw in design or implementation. Applying “traditional” civil liability to AI may therefore act as a disincentive to new technologies based on artificial intelligence. This is why I think artificial intelligence requires that the law evolve, on this matter, from an issue of civil liability into one of financial management of losses. No-fault redress schemes could be an interesting and worthy regulatory strategy to enable this evolution. Of course, such schemes should apply only in cases where there is no evidence that producers and programmers have acted with negligence, imprudence or lack of skill, and where their activity adequately complies with scientifically validated standards. © The Author(s), under exclusive licence to Springer Nature Switzerland AG part of Springer Nature 2021.