An IoT System Using Deep Learning to Classify Camera Trap Images on the Edge

Cited by: 28
Authors
Zualkernan, Imran [1 ]
Dhou, Salam [1 ]
Judas, Jacky [2 ]
Sajun, Ali Reza [1 ]
Gomez, Brylle Ryan [1 ]
Hussain, Lana Alhaj [1 ]
Affiliations
[1] Amer Univ Sharjah, Comp Sci & Engn Dept, Sharjah 26666, U Arab Emirates
[2] Emirates Nat WWF, Conservat Unit, Dubai 454891, U Arab Emirates
Keywords
deep learning; animal classification; image classification; Internet of Things; image processing; edge computing; animal surveillance; wildlife monitoring;
DOI
10.3390/computers11010013
CLC Classification
TP39 [Computer Applications];
Subject Classification
081203; 0835;
Abstract
Camera traps deployed in remote locations provide an effective method for ecologists to monitor and study wildlife in a non-invasive way. However, current camera traps suffer from two problems. First, the images are manually classified and counted, which is expensive. Second, due to manual coding, the results are often stale by the time they reach the ecologists. Using the Internet of Things (IoT) combined with deep learning represents a good solution to both problems, as the images can be classified automatically and the results made immediately available to ecologists. This paper proposes an IoT architecture that uses deep learning on edge devices to convey animal classification results to a mobile app using the LoRaWAN low-power, wide-area network. The primary goal of the proposed approach is to reduce the cost of the wildlife monitoring process for ecologists and to provide real-time animal sightings data from the camera traps in the field. Camera trap image data consisting of 66,400 images were used to train the InceptionV3, MobileNetV2, ResNet18, EfficientNetB1, DenseNet121, and Xception neural network models. While the performance of the trained models was statistically different (Kruskal-Wallis: Accuracy H(5) = 22.34, p < 0.05; F1-score H(5) = 13.82, p = 0.0168), there was only a 3% difference in the F1-score between the worst (MobileNetV2) and the best model (Xception). Moreover, the models made similar errors (Adjusted Rand Index (ARI) > 0.88 and Adjusted Mutual Information (AMI) > 0.82). Subsequently, the best model, Xception (Accuracy = 96.1%; F1-score = 0.87; F1-score = 0.97 with oversampling), was optimized and deployed on the Raspberry Pi, Google Coral, and Nvidia Jetson edge devices using both the TensorFlow Lite and TensorRT frameworks. Optimizing the models to run on edge devices reduced the average macro F1-score to 0.7 and adversely affected the minority classes, reducing their F1-score to as low as 0.18.
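The macro F1 drop to 0.7 follows directly from unweighted averaging: each class counts equally, so a single minority class at 0.18 pulls the mean down sharply. A minimal sketch of this effect (all per-class values other than 0.18 are hypothetical, chosen only to reproduce the reported macro average):

```python
# Macro F1 is the unweighted mean of per-class F1 scores, so rare
# classes weigh as much as common ones. Only 0.18 comes from the
# abstract; the other values and class names are illustrative.
per_class_f1 = {
    "gazelle": 0.90,   # hypothetical majority class
    "fox":     0.85,   # hypothetical
    "goat":    0.82,   # hypothetical
    "ostrich": 0.75,   # hypothetical
    "caracal": 0.18,   # worst minority class after edge optimization
}

macro_f1 = sum(per_class_f1.values()) / len(per_class_f1)
print(f"macro F1 = {macro_f1:.2f}")  # one weak class drags the average to 0.70
```

Weighted-average F1 would mask this effect, which is why the abstract reports the macro score when discussing minority classes.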
Upon stress testing by processing 1000 images consecutively, the Jetson Nano, running a TensorRT model, outperformed the others with a latency of 0.276 s/image (s.d. = 0.002) while drawing an average current of 1665.21 mA. The Raspberry Pi drew the least average current (838.99 mA) but had a ten-times-worse latency of 2.83 s/image (s.d. = 0.036). The Jetson Nano was the only reasonable option as an edge device because it could capture most animals whose maximum speeds were below 80 km/h, including goats, lions, and ostriches. While the proposed architecture is viable, unbalanced data remain a challenge, and the results could potentially be improved by using object detection to reduce imbalances and by exploring semi-supervised learning.
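The 80 km/h figure can be sanity-checked with simple kinematics: an animal is capturable only if it remains inside the camera's detection zone for at least one inference latency. A rough check, assuming a detection-zone width of about 6 m (the zone width is my assumption, not a figure from the paper):

```python
# Fastest animal speed each device can still capture, given that the
# animal must stay in the detection zone for one full inference.
# The ~6 m zone width is an assumed value for illustration only.
ZONE_M = 6.0
LATENCY_S = {"Jetson Nano (TensorRT)": 0.276, "Raspberry Pi": 2.83}

for device, t in LATENCY_S.items():
    v_max_kmh = ZONE_M / t * 3.6  # m/s -> km/h
    print(f"{device}: max capturable speed ~ {v_max_kmh:.0f} km/h")
```

At the Nano's latency this works out to roughly 78 km/h, consistent with the abstract's ~80 km/h threshold; at the Raspberry Pi's latency, only animals slower than about 8 km/h would stay in frame.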
Pages: 24
Related Papers
50 items in total
  • [31] Identification of Wild Species in Texas from Camera-trap Images using Deep Neural Network for Conservation Monitoring
    Islam, Sazida B.
    Valles, Damian
    2020 10TH ANNUAL COMPUTING AND COMMUNICATION WORKSHOP AND CONFERENCE (CCWC), 2020, : 537 - 542
  • [32] Open-Set Source Camera Device Identification of Digital Images Using Deep Learning
    Manisha
    Li, Chang-Tsun
    Kotegar, Karunakar A.
    IEEE ACCESS, 2022, 10: 110548 - 110556
  • [33] EdgeKE: An On-Demand Deep Learning IoT System for Cognitive Big Data on Industrial Edge Devices
    Fang, Weiwei
    Xue, Feng
    Ding, Yi
    Xiong, Naixue
    Leung, Victor C. M.
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2021, 17 (09) : 6144 - 6152
  • [34] Dynamic load balancing of traffic in the IoT edge computing environment using a clustering approach based on deep learning and genetic algorithms
    Merah, Malha
    Aliouat, Zibouda
    Mabed, Hakim
    CLUSTER COMPUTING-THE JOURNAL OF NETWORKS SOFTWARE TOOLS AND APPLICATIONS, 2025, 28 (02)
  • [35] DeepThings: Distributed Adaptive Deep Learning Inference on Resource-Constrained IoT Edge Clusters
    Zhao, Zhuoran
    Barijough, Kamyar Mirzazad
    Gerstlauer, Andreas
    IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2018, 37 (11) : 2348 - 2359
  • [36] Intrusion Detection in IoT Using Deep Learning
    Banaamah, Alaa Mohammed
    Ahmad, Iftikhar
    SENSORS, 2022, 22 (21)
  • [37] IoT Device Fingerprint using Deep Learning
    Aneja, Sandhya
    Aneja, Nagender
    Islam, Md Shohidul
    2018 IEEE INTERNATIONAL CONFERENCE ON INTERNET OF THINGS AND INTELLIGENCE SYSTEM (IOTAIS), 2018, : 174 - 179
  • [38] IoT botnet detection using deep learning
    Rabhi, Sana
    Abbes, Tarek
    Zarai, Faouzi
    2023 INTERNATIONAL WIRELESS COMMUNICATIONS AND MOBILE COMPUTING, IWCMC, 2023, : 1107 - 1111
  • [39] An Edge Based Smart Parking Solution Using Camera Networks and Deep Learning
    Bura, Harshitha
    Lin, Nathan
    Kumar, Naveen
    Malekar, Sangram
    Nagaraj, Sushma
    Liu, Kaikai
    2018 IEEE INTERNATIONAL CONFERENCE ON COGNITIVE COMPUTING (ICCC), 2018, : 17 - 24
  • [40] Data augmentation and transfer learning to classify malware images in a deep learning context
    Niccolò Marastoni
    Roberto Giacobazzi
    Mila Dalla Preda
    Journal of Computer Virology and Hacking Techniques, 2021, 17 : 279 - 297