Towards noise robust acoustic insect detection: from the lab to the greenhouse

Cited by: 5
Authors
Branding, Jelto [1 ]
von Hörsten, Dieter [1 ]
Wegener, Jens Karl [1 ]
Böckmann, Elias [2 ]
Hartung, Eberhard [3 ]
Affiliations
[1] Julius Kühn Inst JKI, Inst Applicat Tech Plant Protect, Messeweg 11-12, D-38104 Braunschweig, Germany
[2] Julius Kühn Inst JKI, Inst Plant Protect Hort & Urban Green, Messeweg 11-12, D-38104 Braunschweig, Germany
[3] Christian Albrechts Univ Kiel, Inst Agr Proc Engn, Max Eyth Str 6, D-24118 Kiel, Germany
Source
KI - KÜNSTLICHE INTELLIGENZ | 2023, Vol. 37, No. 2-4
Keywords
Insects; Audio; Acoustic; Identification; Recognition; Classification; Low-noise microphone; Microphone array; Anechoic box; Pest detection; Horticulture; Deep learning; WaveNet; Spectrogram; Raw audio; Neural beamforming; Background noise; Noise simulation; Frequency
DOI
10.1007/s13218-023-00812-x
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Successful and efficient pest management is key to sustainable horticultural food production. While greenhouses already allow digital monitoring and control of their climate parameters, a lack of digital pest sensors hinders the advent of digital pest management systems. To close the control loop, digital systems need to be able to directly assess the state of different insect populations in a greenhouse. This article investigates the feasibility of acoustic sensors for insect detection in greenhouses. The study is based on an extensive dataset of acoustic insect recordings made with an array of high-quality microphones under noise-shielded conditions. By mixing these noise-free laboratory recordings with environmental sounds recorded with the same equipment in a greenhouse, different signal-to-noise ratios (SNR) are simulated. To explore the possibilities of this unique and novel dataset, two deep-learning models are trained on the simulated data. A simple spectrogram-based model serves as the baseline for comparison with a model capable of processing multi-channel raw audio data. Exploiting a possibility unique to this dataset, the models are pre-trained on clean data and fine-tuned on noisy data. Under lab conditions, results show that both models can exploit not just insect flight sounds but also the much quieter sounds of insect movements. First attempts under simulated real-world conditions revealed the challenging nature of this task and the potential of spatial filtering. The analysis enabled by the proposed training and evaluation methods provided valuable insights that should inform future work.
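The SNR simulation described in the abstract can be illustrated with a short sketch. The Python snippet below is a minimal, hypothetical example of mixing a noise-free laboratory recording with greenhouse background noise at a target SNR; it is not the authors' implementation, and the file names and the `mix_at_snr` helper are assumptions made purely for illustration.

```python
import numpy as np
import soundfile as sf  # assumed audio I/O library; any WAV reader would do


def mix_at_snr(clean, noise, snr_db):
    """Scale `noise` so the clean-to-noise power ratio equals `snr_db`,
    then return the noisy mixture (single-channel signals assumed)."""
    noise = noise[: len(clean)]                 # trim noise to the clean signal length
    clean_power = np.mean(clean ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12   # guard against division by zero
    # SNR_dB = 10*log10(P_clean / (g^2 * P_noise))  =>  solve for the noise gain g
    gain = np.sqrt(clean_power / (noise_power * 10 ** (snr_db / 10)))
    return clean + gain * noise


# Hypothetical file names, for illustration only
clean, sr = sf.read("lab_insect_recording.wav")
noise, _ = sf.read("greenhouse_background.wav")
for snr in (20, 10, 0, -10):
    noisy = mix_at_snr(clean, noise, snr)
    sf.write(f"simulated_snr_{snr}dB.wav", noisy, sr)
```

In the study, mixtures of this kind at several SNR levels provide the simulated training and evaluation data for the two deep-learning models.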
Pages: 157 - 173
Page count: 17