How widespread use of generative AI for images and video can affect the environment and the science of ecology

Cited by: 6
Authors
Rillig, Matthias C. [1 ,2 ]
Mansour, India [1 ,2 ]
Hempel, Stefan [1 ,2 ]
Bi, Mohan [1 ,2 ]
Koenig-Ries, Birgitta [3 ]
Kasirzadeh, Atoosa [4 ,5 ]
Affiliations
[1] Free Univ Berlin, Inst Biol, Berlin, Germany
[2] Berlin Brandenburg Inst Adv Biodivers Res BBIB, Berlin, Germany
[3] Friedrich Schiller Univ Jena, Inst Informat, Heinz Nixdorf Chair Distributed Informat Syst, Jena, Germany
[4] Univ Edinburgh, Edinburgh, Midlothian, Scotland
[5] Alan Turing Inst, London, England
Keywords
artificial intelligence; environment; fraud; generative AI; science credibility;
DOI
10.1111/ele.14397
Chinese Library Classification (CLC)
Q14 [Ecology (Bioecology)]
Subject Classification Codes
071012; 0713
Abstract
Generative artificial intelligence (AI) models will have broad impacts on society, including the scientific enterprise; ecology and environmental science will be no exception. Here, we discuss the potential opportunities and risks of advanced generative AI for visual material (images and video) for the science of ecology and for the environment itself. There are clear opportunities for positive impacts, for example through improved communication; we also see possibilities for ecological research to benefit from generative AI (e.g., image gap filling, biodiversity surveys, and improved citizen science). However, there are also risks that threaten to undermine the credibility of our science, mostly related to the actions of bad actors, for example spreading fake information or committing fraud. Risks need to be mitigated through government regulatory measures, but we also highlight what can be done right now, including discussing these issues with the next generation of ecologists and transforming towards radically open science workflows.
Pages: 5