What Should a Robot Do? Comparing Human and Large Language Model Recommendations for Robot Deception

Cited: 0
Authors
Rogers, Kantwon [1 ]
Webber, Reiden John Allen [1 ]
Zubizarreta, Geronimo Gorostiaga [1 ]
Cruz, Arthur Melo [2 ]
Chen, Shengkang [1 ]
Arkin, Ronald C. [1 ]
Borenstein, Jason [1 ]
Wagner, Alan R. [2 ]
Affiliations
[1] Georgia Inst Technol, Atlanta, GA 30332 USA
[2] Penn State Univ, State Coll, PA 16802 USA
Source
COMPANION OF THE 2024 ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION, HRI 2024 COMPANION | 2024
Funding
U.S. National Science Foundation;
Keywords
deception; ethical dilemmas; LLM; human-robot interaction;
DOI
10.1145/3610978.3640752
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This study compares human ethical judgments with those of Large Language Models (LLMs) regarding robot deception across various scenarios. We surveyed human participants and queried LLMs, presenting ethical dilemmas in high-risk and low-risk contexts. Findings reveal alignment between humans and LLMs in high-risk scenarios, where both prioritize safety, but notable divergences in low-risk situations, reflecting the challenge in AI development of accurately capturing human social nuances and moral expectations.
Pages: 906-910
Page count: 5