Trust of the Generation Z in Artificial Intelligence in the Assessment of Historical Events

Cited by: 0
Authors
Vinichenko, Mikhail V. [1]
Nikiporets-Takigawa, Galina Yu. [1,2]
Oseev, Aleksander A. [3]
Rybakova, Marina V. [3]
Makushkin, Sergey A. [1]
Affiliations
[1] Russian State Social Univ, Moscow, Russia
[2] Univ Cambridge, Old Sch, Trinity Lane, Cambridge, England
[3] Lomonosov Moscow State Univ, Moscow, Russia
Keywords
Digitalization; Artificial Intelligence; Generation Z; Historical Events; Trust; Education; Impact; Policy
DOI
10.9756/INT-JECSE/V14I1.221040
Chinese Library Classification
G76 [Special Education]
Discipline code
040109
Abstract
The article considers the degree of trust that Russian and Slovak students of Generation Z (Gen Z) place in artificial intelligence (AI) in the assessment of historical events under the conditions of a digital society. The basic empirical methods of the study were a sociological survey and a focus group, conducted remotely in the context of the COVID-19 pandemic using Google Forms and the cloud conferencing platform Zoom. The study found that the attitude of Gen Z toward AI in the context of the digitalization of society is ambiguous and contradictory, which affects the degree of trust in AI's assessment of historical events. The degrees of trust of Russian and Slovak Gen Z students in the general use of AI and in AI's assessment of historical events largely coincide on fundamental issues and diverge on secondary ones. Analysis of the research data showed that Russian and Slovak Gen Z students have a generally positive attitude toward historical information coming from AI. The study also revealed differences in the degree of trust (or distrust) between Russian and Slovak Gen Z students in AI's presentation of historical information to them, as well as contradictions in their evaluation of individual historical events. Gen Z is wary of AI, believing that AI is dangerous to humans and should not be fully trusted in all matters of the presentation and assessment of historical events.
Pages: 326-334 (9 pages)