Domain Adaptation for Plant Organ Detection with Style Transfer

Cited by: 0
Authors
James, Chrisbin [1 ]
Gu, Yanyang [2 ]
Chapman, Scott [1 ]
Guo, Wei [3 ]
David, Etienne [4 ,5 ]
Madec, Simon [4 ,5 ]
Potgieter, Andries [6 ]
Eriksson, Anders [2 ]
Affiliations
[1] Univ Queensland, Sch Agr & Food Sci, Fac Sci, Brisbane, Qld, Australia
[2] Univ Queensland, Sch Informat Technol & Elect Engn, Brisbane, Qld, Australia
[3] Univ Tokyo, Grad Sch Agr & Life Sci, Tokyo, Japan
[4] Arvalis Inst Vegetal, Paris, France
[5] INRAE, EMMAH UMR1114, Avignon, France
[6] Univ Queensland, Queensland Alliance Agr & Food Innovat, Ctr Crop Sci, Brisbane, Qld, Australia
Source
2021 INTERNATIONAL CONFERENCE ON DIGITAL IMAGE COMPUTING: TECHNIQUES AND APPLICATIONS (DICTA 2021) | 2021
Keywords
sorghum head detection; wheat head detection; domain adaptation; style transfer;
DOI
10.1109/DICTA52665.2021.9647293
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep learning based detection of sorghum panicles has been proposed to replace manual counting in field trials. However, model performance is highly sensitive to domain shift between training datasets arising from differences in genotypes, field conditions, and lighting. As labelling such datasets is expensive and laborious, we propose a Contrastive Unpaired Translation (CUT) based domain adaptation pipeline to improve detection performance on new datasets, including for completely different crop species. First, the original dataset is translated into other styles using CUT models trained on unlabelled datasets from other domains. Labels are then corrected after the new-domain dataset is synthesized. Finally, detectors are retrained on the synthesized dataset. Experiments show that, for sorghum panicles, the accuracy of models trained with synthetic images improves by fifteen to twenty percent. The models are also more robust to changes in prediction thresholds, demonstrating the effectiveness of the pipeline.
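The three stages described in the abstract can be sketched as a minimal orchestration skeleton. This is an illustrative sketch, not the authors' code: all function names here are hypothetical stubs, and a real implementation would call the CUT repository for style transfer and a detection framework (e.g. Faster R-CNN) for retraining.

```python
def translate_with_cut(source_images, target_domain):
    """Stage 1: render each labelled source image in the target style.

    Placeholder for inference with a CUT model trained on unlabelled
    images from the target domain; here we just tag the filenames."""
    return [f"{img}->{target_domain}" for img in source_images]


def correct_labels(synthetic_images, source_labels):
    """Stage 2: carry the source annotations over to the synthesized
    images, where they would be manually corrected if the translation
    invalidated any of them."""
    return dict(zip(synthetic_images, source_labels))


def retrain_detector(dataset):
    """Stage 3: retrain the detector on the synthesized dataset.
    This stub only reports the size of the training set."""
    return {"trained_on": len(dataset)}


def adapt(source_images, source_labels, target_domain):
    """Run the full pipeline: translate, relabel, retrain."""
    synthetic = translate_with_cut(source_images, target_domain)
    dataset = correct_labels(synthetic, source_labels)
    return retrain_detector(dataset)


model = adapt(
    ["img0.png", "img1.png"],
    [["head@(10,20)"], ["head@(5,9)"]],
    "sorghum_field_B",
)
print(model)  # {'trained_on': 2}
```

The key design point is that only the style-transfer stage needs data from the new domain, and that data can be entirely unlabelled; annotation effort is limited to correcting the carried-over labels.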
Pages: 557 - 565
Page count: 9