Marker-Less Motion Capture of Insect Locomotion With Deep Neural Networks Pre-trained on Synthetic Videos

Cited by: 3
Authors
Arent, Ilja [1 ]
Schmidt, Florian P. [1 ,2 ]
Botsch, Mario [2 ,3 ]
Duerr, Volker [1 ,2 ]
Affiliations
[1] Bielefeld Univ, Biol Cybernet, Fac Biol, Bielefeld, Germany
[2] Bielefeld Univ, Ctr Cognit Interact Technol, Bielefeld, Germany
[3] TU Dortmund Univ, Comp Graph, Dortmund, Germany
Keywords
insect locomotion; machine learning; behavioral analysis; marker-less motion capture; deep neural network; motion tracking; MOVEMENT; WALKING; BODY;
DOI
10.3389/fnbeh.2021.637806
CLC Classification Codes
B84 [Psychology]; C [Social Sciences, General]; Q98 [Anthropology]
Subject Classification Codes
03; 0303; 030303; 04; 0402
Abstract
Motion capture of unrestrained moving animals is a major analytic tool in neuroethology and behavioral physiology. At present, several motion capture methodologies have been developed, all of which have particular limitations regarding experimental application. Whereas marker-based motion capture systems are very robust and easily adjusted to suit different setups, tracked species, or body parts, they cannot be applied in experimental situations where markers obstruct the natural behavior (e.g., when tracking delicate, elastic, and/or sensitive body structures). On the other hand, marker-less motion capture systems typically require setup- and animal-specific adjustments, for example by means of tailored image processing, decision heuristics, and/or machine learning of specific sample data. Among the latter, deep-learning approaches have become very popular because of their applicability to virtually any sample of video data. Nevertheless, concise evaluation of their training requirements has rarely been done, particularly with regard to the transfer of trained networks from one application to another. To address this issue, the present study uses insect locomotion as a showcase example for a systematic evaluation of variation and augmentation of the training data. For that, we use artificially generated video sequences with known combinations of observed, real animal postures and randomized body position, orientation, and size. Moreover, we evaluate the generalization ability of networks that have been pre-trained on synthetic videos to video recordings of real walking insects, and estimate the benefit in terms of reduced requirement for manual annotation. We show that tracking performance is only slightly affected by scaling factors ranging from 0.5 to 1.5. As expected for convolutional networks, translation of the animal has no effect. On the other hand, we show that sufficient variation of rotation in the training data is essential for performance, and make concise suggestions about how much variation is required. Our results on transfer from synthetic to real videos show that pre-training reduces the amount of necessary manual annotation by about 50%.
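The randomized body position, orientation, and size described in the abstract amount to sampling a random similarity transform per synthetic frame. A minimal sketch of such an augmentation step, applied here to 2D body keypoints, might look as follows (the function name, parameter ranges, and numpy-based implementation are illustrative assumptions, not the authors' code; the scale range 0.5–1.5 matches the range evaluated in the paper):

```python
import numpy as np

def augment_pose(keypoints, rng, scale_range=(0.5, 1.5),
                 rot_range=(0.0, 2.0 * np.pi), translate_max=50.0):
    """Apply a random similarity transform (scale, rotation, translation)
    to an (N, 2) array of 2D keypoints, mimicking randomized body
    position, orientation, and size in synthetic training data.

    Hypothetical helper for illustration only.
    """
    s = rng.uniform(*scale_range)                      # random size
    theta = rng.uniform(*rot_range)                    # random orientation
    t = rng.uniform(-translate_max, translate_max, 2)  # random position
    c, si = np.cos(theta), np.sin(theta)
    R = np.array([[c, -si], [si, c]])                  # 2D rotation matrix
    center = keypoints.mean(axis=0)
    # Scale and rotate about the body centroid, then translate.
    return (keypoints - center) @ (s * R).T + center + t
```

Because the transform is a similarity, all pairwise distances between keypoints change by the same factor `s`, so the animal's posture is preserved while its apparent size, heading, and location vary across frames.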
Pages: 12
Related Papers (50 total)
  • [1] Teaming Up Pre-Trained Deep Neural Networks
    Deabes, Wael
    Abdel-Hakim, Alaa E.
    2018 INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING AND INFORMATION SECURITY (ICSPIS), 2018, : 73 - 76
  • [2] Classification of Deepfake Videos Using Pre-trained Convolutional Neural Networks
    Masood, Momina
    Nawaz, Marriam
    Javed, Ali
    Nazir, Tahira
    Mehmood, Awais
    Mahum, Rabbia
    2021 INTERNATIONAL CONFERENCE ON DIGITAL FUTURES AND TRANSFORMATIVE TECHNOLOGIES (ICODT2), 2021,
  • [3] Detecting Deceptive Utterances Using Deep Pre-Trained Neural Networks
    Wawer, Aleksander
    Sarzynska-Wawer, Justyna
    APPLIED SCIENCES-BASEL, 2022, 12 (12):
  • [4] Medical Image Classification: A Comparison of Deep Pre-trained Neural Networks
    Alebiosu, David Olayemi
    Muhammad, Fermi Pasha
    2019 17TH IEEE STUDENT CONFERENCE ON RESEARCH AND DEVELOPMENT (SCORED), 2019, : 306 - 310
  • [5] Semantic Segmentation of Mammograms Using Pre-Trained Deep Neural Networks
    Prates, Rodrigo Leite
    Gomez-Flores, Wilfrido
    Pereira, Wagner
    2021 18TH INTERNATIONAL CONFERENCE ON ELECTRICAL ENGINEERING, COMPUTING SCIENCE AND AUTOMATIC CONTROL (CCE 2021), 2021,
  • [6] Comparative Analysis of Pre-trained Deep Neural Networks for Plant Disease Classification
    George, Romiyal
    Thuseethan, Selvarajah
    Ragel, Roshan G.
    2024 21ST INTERNATIONAL JOINT CONFERENCE ON COMPUTER SCIENCE AND SOFTWARE ENGINEERING, JCSSE 2024, 2024, : 179 - 186
  • [7] Recognizing Malaysia Traffic Signs with Pre-Trained Deep Convolutional Neural Networks
    How, Dickson Neoh Tze
    Sahari, Khairul Salleh Mohamed
    Hou, Yew Cheong
    Basubeit, Omar Gumaan Saleh
    2019 4TH INTERNATIONAL CONFERENCE ON CONTROL, ROBOTICS AND CYBERNETICS (CRC 2019), 2019, : 109 - 113
  • [8] Transfer Learning based Performance Comparison of the Pre-Trained Deep Neural Networks
    Kumar, Jayapalan Senthil
    Anuar, Syahid
    Hassan, Noor Hafizah
    INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS, 2022, 13 (01) : 797 - 805
  • [9] Improving weeds identification with a repository of agricultural pre-trained deep neural networks
    Espejo-Garcia, Borja
    Mylonas, Nikolaos
    Athanasakos, Loukas
    Fountas, Spyros
    COMPUTERS AND ELECTRONICS IN AGRICULTURE, 2020, 175
  • [10] Action Recognition in Videos Using Pre-Trained 2D Convolutional Neural Networks
    Kim, Jun-Hwa
    Won, Chee Sun
    IEEE ACCESS, 2020, 8 : 60179 - 60188