Self-supervised contrastive learning on agricultural images

Cited by: 39
Authors
Guldenring, Ronja [1]
Nalpantidis, Lazaros [1]
Affiliations
[1] Dept Elect Engn, DK-2800 Lyngby, Denmark
Funding
EU Horizon 2020
Keywords
Contrastive learning; Deep learning; Self-supervision; SwAV; Transfer-learning; SUGAR-BEET; CLASSIFICATION
DOI
10.1016/j.compag.2021.106510
Chinese Library Classification
S [Agricultural Sciences]
Subject Classification Code
09
Abstract
Agriculture emerges as a prominent application domain for advanced computer vision algorithms. As much as deep learning approaches can help solve problems such as plant detection, they rely on the availability of large amounts of annotated images for training. However, relevant agricultural datasets are scarce and, at the same time, generic well-established image datasets such as ImageNet do not necessarily capture the characteristics of agricultural environments. This observation has motivated us to explore the applicability of self-supervised contrastive learning on agricultural images. Our approach considers numerous non-annotated agricultural images, which are easy to obtain, and uses them to pre-train deep neural networks. We then require only a limited number of annotated images to fine-tune those networks in a supervised manner for relevant downstream tasks, such as plant classification or segmentation. To the best of our knowledge, contrastive self-supervised learning has not been explored before in the area of agricultural images. Our results reveal that it outperforms conventional deep learning approaches in classification downstream tasks, especially for small amounts of available annotated training images, where an increase of up to 14% in average top-1 classification accuracy has been observed. Furthermore, the computational cost for generating data-specific pre-trained weights is fairly low, allowing one to easily generate new pre-trained weights for any custom model architecture or task.
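To make the two-stage workflow described in the abstract concrete, the following is a minimal PyTorch sketch of the fine-tuning stage only: a ResNet-50 backbone is initialized from a checkpoint assumed to come from self-supervised (e.g. SwAV-style) pre-training on unlabelled agricultural images, its head is replaced with a task-specific classifier, and it is fine-tuned on a small annotated dataset. The checkpoint path, class count, data directory, and hyperparameters are illustrative assumptions; the authors' actual SwAV pre-training pipeline is not reproduced here.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Hypothetical checkpoint from self-supervised pre-training on unlabelled agricultural images.
PRETRAINED_CKPT = "swav_agri_pretrained.pth"   # placeholder path, not from the paper
NUM_CLASSES = 9                                # illustrative number of plant classes

# 1) Backbone initialised from the self-supervised checkpoint.
backbone = models.resnet50(weights=None)
state_dict = torch.load(PRETRAINED_CKPT, map_location="cpu")
backbone.load_state_dict(state_dict, strict=False)   # projection-head keys, if any, are ignored

# 2) Replace the final layer with a task-specific classification head.
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_CLASSES)

# 3) Supervised fine-tuning on a small annotated dataset (ImageFolder layout assumed).
transform = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/labelled_train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True, num_workers=4)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = backbone.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

model.train()
for epoch in range(10):
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

The same checkpoint could be reused with a segmentation head instead of the linear classifier, which is the sense in which the abstract notes that data-specific pre-trained weights can be generated once and reused across downstream tasks.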
Pages: 12