An approach to rapid processing of camera trap images with minimal human input

Cited by: 9
Authors
Duggan, Matthew T. [1 ]
Groleau, Melissa F. [1 ]
Shealy, Ethan P. [1 ]
Self, Lillian S. [1 ]
Utter, Taylor E. [1 ]
Waller, Matthew M. [1 ]
Hall, Bryan C. [2 ]
Stone, Chris G. [2 ]
Anderson, Layne L. [2 ]
Mousseau, Timothy A. [1 ]
Affiliations
[1] Univ South Carolina UofSC, Dept Biol Sci, Columbia, SC 29208 USA
[2] South Carolina Army Natl Guard Environm Off, Eastover, SC USA
Keywords
camera trap; deep learning; neural network; transfer learning; wildlife ecology
DOI
10.1002/ece3.7970
Chinese Library Classification (CLC)
Q14 [Ecology (Bioecology)]
Discipline Classification Codes
071012; 0713
Abstract
Camera traps have become an extensively utilized tool in ecological research, but the manual processing of images created by a network of camera traps rapidly becomes an overwhelming task, even for small camera trap studies. We used transfer learning to create convolutional neural network (CNN) models for identification and classification. Using a small dataset with an average of 275 labeled images per species class, the model was able to distinguish between species and remove false triggers. We trained the model to detect 17 object classes with individual species identification, reaching an accuracy of up to 92% and an average F1 score of 85%. Previous studies have suggested that thousands of images per object class are needed to reach results comparable to those achieved by human observers; however, we show that such accuracy can be achieved with fewer images. With transfer learning, even a small, ongoing camera trap study can successfully create a deep learning model. A generalizable model produced from an unbalanced class set can be used to extract trap events that are later confirmed by human processors.
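The "average F1 score of 85%" reported above summarizes per-class performance: each class's F1 is computed from its true-positive, false-positive, and false-negative counts, then the per-class scores are averaged. A minimal sketch in Python, using hypothetical counts for three illustrative classes (the species names and numbers are assumptions for demonstration, not values from the paper):

```python
# Hypothetical detection counts for three of the 17 object classes.
# tp = true positives, fp = false positives, fn = false negatives.
counts = {
    "deer":    {"tp": 240, "fp": 30, "fn": 20},
    "raccoon": {"tp": 180, "fp": 25, "fn": 40},
    "blank":   {"tp": 300, "fp": 10, "fn": 15},  # false-trigger class
}

def f1(tp: int, fp: int, fn: int) -> float:
    """Harmonic mean of precision and recall for one class."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Macro-averaged F1: the unweighted mean of per-class F1 scores.
per_class = {name: f1(**c) for name, c in counts.items()}
macro_f1 = sum(per_class.values()) / len(per_class)

print({name: round(score, 3) for name, score in per_class.items()})
print(round(macro_f1, 3))
```

Macro averaging weights every class equally regardless of how many images it contains, which is why it is a common summary statistic for unbalanced class sets like the one described in the abstract.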
Pages: 12051-12063
Page count: 13