103 references in total
- [1] White Paper on Artificial Intelligence - A European Approach to Excellence and Trust, European Commission, (2020)
- [2] Lee J.D., See K.A., Trust in automation: Designing for appropriate reliance, Hum. Factors, 46, NO. 1, pp. 50-80, (2004)
- [3] Meyerson D., Weick K.E., Kramer R.M., Swift trust and temporary groups, Trust in Organizations: Frontiers of Theory and Research (Kramer R.M., Tyler T.R., Eds.), pp. 166-195, (1996)
- [4] Hancock P.A., Billings D.R., Schaefer K.E., Chen J.Y., De Visser E.J., Parasuraman R., A meta-analysis of factors affecting trust in human-robot interaction, Hum. Factors, 53, NO. 5, pp. 517-527, (2011)
- [5] Saetra H.S., Social robot deception and the culture of trust, Paladyn J. Behav. Robot., 12, NO. 1, pp. 276-286, (2021)
- [6] Levine E.E., Schweitzer M.E., Prosocial lies: When deception breeds trust, Organ. Behav. Hum. Decis. Process., 126, pp. 88-106, (2015)
- [7] Robinson S.C., Trust, transparency, and openness: How inclusion of cultural values shapes Nordic national public policy strategies for artificial intelligence (AI), Technol. Soc., 63, (2020)
- [8] Felzmann H., Fosch-Villaronga E., Lutz C., Tamo-Larrieux A., Transparency you can trust: Transparency requirements for artificial intelligence between legal norms and contextual concerns, Big Data Soc., 6, NO. 1, pp. 1-14, (2019)
- [9] Parasuraman R., Riley V., Humans and automation: Use, misuse, disuse, abuse, Hum. Factors, 39, NO. 2, pp. 230-253, (1997)
- [10] Ethics Guidelines for Trustworthy AI, European Commission, (2020)