Improving the accessibility and transferability of machine learning algorithms for identification of animals in camera trap images: MLWIC2

Cited by: 40
Authors
Tabak, Michael A. [1 ,2 ]
Norouzzadeh, Mohammad S. [3 ]
Wolfson, David W. [4 ]
Newton, Erica J. [5 ]
Boughton, Raoul K. [6 ]
Ivan, Jacob S. [7 ]
Odell, Eric A. [7 ]
Newkirk, Eric S. [7 ]
Conrey, Reesa Y. [7 ]
Stenglein, Jennifer [8 ]
Iannarilli, Fabiola [9 ]
Erb, John [10 ]
Brook, Ryan K. [11 ]
Davis, Amy J. [12 ]
Lewis, Jesse [13 ]
Walsh, Daniel P. [14 ]
Beasley, James C. [15 ]
VerCauteren, Kurt C. [16 ]
Clune, Jeff [17 ]
Miller, Ryan S. [18 ]
Affiliations
[1] Quantitat Sci Consulting LLC, Laramie, WY 82072 USA
[2] Univ Wyoming, Dept Zool & Physiol, Laramie, WY 82071 USA
[3] Univ Wyoming, Dept Comp Sci, Laramie, WY 82071 USA
[4] Univ Minnesota, Dept Fisheries Wildlife & Conservat Biol, Minnesota Cooperat Fish & Wildlife Res Unit, St Paul, MN 55108 USA
[5] Ontario Minist Nat Resources & Forestry, Wildlife Res & Monitoring Sect, Peterborough, ON, Canada
[6] Univ Florida, Range Cattle Res & Educ Ctr, Wildlife Ecol & Conservat, Ona, FL USA
[7] Colorado Pk & Wildlife, Ft Collins, CO USA
[8] Wisconsin Dept Nat Resources, Madison, WI USA
[9] Univ Minnesota, Conservat Sci Grad Program, St Paul, MN 55108 USA
[10] Minnesota Dept Nat Resources, Forest Wildlife Populat & Res Grp, Grand Rapids, MN USA
[11] Univ Saskatchewan, Dept Anim & Poultry Sci, Saskatoon, SK, Canada
[12] USDA, Natl Wildlife Res Ctr, Ft Collins, CO USA
[13] Arizona State Univ, Coll Integrat Sci & Arts, Mesa, AZ USA
[14] US Geol Survey, Natl Wildlife Hlth Ctr, Madison, WI USA
[15] Univ Georgia, Savannah River Ecol Lab, Warnell Sch Forestry & Nat Resources, Aiken, SC USA
[16] US Anim & Plant Hlth Inspect Serv, Natl Wildlife Res Ctr, USDA, Ft Collins, CO USA
[17] OpenAI, San Francisco, CA USA
[18] USDA, Ctr Epidemiol & Anim Hlth, Ft Collins, CO USA
Source
ECOLOGY AND EVOLUTION | 2020, Volume 10, Issue 19
Keywords
computer vision; deep convolutional neural networks; image classification; machine learning; motion-activated camera; R package; remote sensing; species identification;
DOI
10.1002/ece3.6692
Chinese Library Classification (CLC)
Q14 [Ecology (Bio-ecology)]
Discipline classification codes
071012; 0713
Abstract
Motion-activated wildlife cameras (or "camera traps") are frequently used to remotely and noninvasively observe animals. The vast number of images collected from camera trap projects has prompted some biologists to employ machine learning algorithms to automatically recognize species in these images, or at least to filter out images that do not contain animals. These approaches are often limited by model transferability: a model trained to recognize species from one location might not work as well for the same species in different locations. Furthermore, these methods often require advanced computational skills, making them inaccessible to many biologists. We used 3 million camera trap images from 18 studies in 10 states across the United States of America to train two deep neural networks, one that recognizes 58 species (the "species model") and one that determines if an image is empty or contains an animal (the "empty-animal model"). Our species model and empty-animal model had accuracies of 96.8% and 97.3%, respectively. Furthermore, the models performed well on some out-of-sample datasets: the species model had 91% accuracy on species from Canada (accuracy range 36%-91% across all out-of-sample datasets), and the empty-animal model achieved an accuracy of 91%-94% on out-of-sample datasets from different continents. Our software addresses some of the limitations of using machine learning to classify images from camera traps. By including many species from several locations, our species model is potentially applicable to many camera trap studies in North America. We also found that our empty-animal model can facilitate removal of images without animals globally. We provide the trained models in an R package (MLWIC2: Machine Learning for Wildlife Image Classification in R), which contains Shiny applications that allow scientists with minimal programming experience to use the trained models and to train new models using six neural network architectures with varying depths.
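
For readers who want to try the package, the lines below are a minimal sketch of the classification workflow in R. The function and argument names (setup, classify, runShiny, path_prefix, data_info, model_dir, log_dir, num_classes, save_predictions) are taken from the MLWIC2 GitHub documentation and should be treated as assumptions rather than a verified API; all file paths are hypothetical placeholders, and exact signatures may differ between package versions.

# A minimal sketch, assuming the function/argument names documented in the
# MLWIC2 GitHub repository (https://github.com/mikeyEcology/MLWIC2).
# devtools::install_github("mikeyEcology/MLWIC2")
library(MLWIC2)

# One-time setup of the Python/TensorFlow back end used by the trained models.
setup(python_loc = "/usr/bin/")  # hypothetical Python location

# Run the built-in species model over a directory of camera trap images.
classify(
  path_prefix = "/home/user/images",            # hypothetical image directory
  data_info   = "/home/user/image_labels.csv",  # hypothetical file listing image names
  model_dir   = "/home/user/MLWIC2_helper",     # hypothetical location of downloaded model files
  log_dir     = "species_model",                # or "empty_animals" for the empty-animal model
  num_classes = 59,                             # assumption: 58 species plus an "empty" class
  save_predictions = "model_predictions.txt"
)

# The same steps are available through a point-and-click Shiny interface.
runShiny("classify")

The Shiny interface referenced in the abstract wraps these same functions, so the scripted and point-and-click workflows should produce equivalent output files.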
Pages: 10374-10383
Number of pages: 10