Striking the Balance in Using LLMs for Fact-Checking: A Narrative Literature Review

Times cited: 2
Authors
Dierickx, Laurence [1 ,2 ]
van Dalen, Arjen [2 ]
Opdahl, Andreas L. [1 ]
Linden, Carl-Gustav [1 ]
Affiliations
[1] Univ Bergen, Dept Informat Sci & Media Studies, Bergen, Norway
[2] Univ Southern Denmark, Digital Democracy Ctr, Odense, Denmark
Source
DISINFORMATION IN OPEN ONLINE MEDIA, MISDOOM 2024 | 2024 / Vol. 15175
Keywords
Fact-checking; Large Language Models; Risk mitigation; Journalism
DOI
10.1007/978-3-031-71210-4_1
CLC classification
TP [automation technology, computer technology]
Subject classification code
0812
Abstract
The launch of ChatGPT at the end of November 2022 triggered a general reflection on its benefits for supporting fact-checking workflows and practices. Amid the excitement over AI systems that no longer require programming skills and the exploration of a new field of experimentation, academics and professionals foresaw the benefits of such technology. Critics, however, have raised concerns about the fairness of the data used to train Large Language Models (LLMs), including the risk of artificial hallucinations and the proliferation of machine-generated content that could spread misinformation. Given the ethical challenges LLMs pose, how can professional fact-checking mitigate these risks? This narrative literature review explores the current state of LLMs in the context of fact-checking practice, highlighting three complementary mitigation strategies related to education, ethics and professional practice.
Pages: 1-15
Number of pages: 15