Extraction of soybean planting area based on feature fusion technology of multi-source low altitude unmanned aerial vehicle images

Cited by: 18
Authors
Yang, Qi [1]
She, Bao [1,2]
Huang, Linsheng [1]
Yang, Yuying [1]
Zhang, Gan [1]
Zhang, Mai [1]
Hong, Qi [1]
Zhang, Dongyan [1,3]
Affiliations
[1] Anhui Univ, Natl Engn Res Ctr Agroecol Big Data Anal & Applica, Hefei 230601, Peoples R China
[2] Anhui Univ Sci & Technol, Sch Spatial Informat & Geomat Engn, Huainan 232001, Peoples R China
[3] Henan Univ, Key Lab Geospatial Technol Middle & Lower Yellow R, Minist Educ, Kaifeng 475004, Peoples R China
Keywords
Remote sensing; UAV; U-Net; Vegetation index; Soybean; LIGHT; CLASSIFICATION; YIELD
DOI
10.1016/j.ecoinf.2022.101715
Chinese Library Classification
Q14 [Ecology (Bioecology)]
Discipline Classification Codes
071012; 0713
Abstract
Soybean is an important food and oil crop worldwide, and accurate statistics on its planting scale are of great significance for optimizing crop planting structure and ensuring world food security. Technology for accurately extracting soybean planting areas at the field scale from UAV images combined with deep learning algorithms is therefore of practical value. In this study, RGB images and multispectral (RGN) images were first acquired simultaneously by a quad-rotor UAV (DJI Phantom 4 Pro) at a flying height of 200 m, and features were extracted from both. Fusion images, RGB + VIs and RGN + VIs, were then obtained by concatenating the band reflectance of each original image with calculated vegetation indices (VIs). The soybean planting area was segmented from the feature-fusion images by U-Net, and the accuracies of the two sensors were compared. The results showed that the Kappa coefficients obtained from the RGB image, the RGN image, CME (the combination of CIVE, MExG, and ExGR), ODR (the combination of OSAVI, DVI, and RDVI), RGB + CME (the combination of RGB and CME), and RGN + ODR (the combination of RGN and ODR) were 0.8806, 0.9327, 0.8437, 0.9330, 0.9420, and 0.9238, respectively. The Kappa coefficient of the combination of original image and vegetation indices was higher than that of the original image alone, indicating that vegetation index calculation helps improve the soybean recognition accuracy of the U-Net model. Among all inputs, the soybean planting area extracted from RGB + CME was the most precise, with a Kappa coefficient of 0.9420. Finally, the soybean recognition accuracy of U-Net was compared with that of DeepLabv3+, Random Forest, and Support Vector Machine, and U-Net performed best. It can be concluded that training U-Net on fusion images combining the original UAV imagery with vegetation index features can effectively segment soybean planting areas. This work provides technical support for farms, family cooperatives, and other business entities to manage soybean planting and production finely and at low cost.
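The band/VI feature-fusion step can be illustrated with a short sketch. The following is a minimal example, not the authors' released code: it assumes the standard published formulations of the six indices (CIVE, MExG, ExGR, OSAVI, DVI, RDVI), an R-G-NIR band order for the RGN sensor, reflectance values scaled to [0, 1], and an arbitrary 256 x 256 tile size.

```python
# Minimal sketch of the band/VI feature fusion described in the abstract
# (not the authors' released code). Index formulas follow their standard
# published forms; band order, normalization, and tile size are assumptions.
import numpy as np

def cme_features(rgb):
    """CME stack: CIVE, MExG, and ExGR from chromatic r, g, b coordinates."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    s = r + g + b + 1e-8                     # guard against division by zero
    r, g, b = r / s, g / s, b / s            # chromatic coordinates
    cive = 0.441 * r - 0.811 * g + 0.385 * b + 18.78745
    mexg = 1.262 * g - 0.884 * r - 0.311 * b
    exgr = (2 * g - r - b) - (1.4 * r - g)   # ExGR = ExG - ExR
    return np.stack([cive, mexg, exgr], axis=-1)

def odr_features(rgn):
    """ODR stack: OSAVI, DVI, and RDVI; assumes bands ordered R, G, NIR."""
    red, nir = rgn[..., 0], rgn[..., 2]
    osavi = (nir - red) / (nir + red + 0.16)
    dvi = nir - red
    rdvi = (nir - red) / np.sqrt(nir + red + 1e-8)
    return np.stack([osavi, dvi, rdvi], axis=-1)

# Fusion: concatenate the original bands with the derived VI channels,
# giving the 6-channel RGB + CME and RGN + ODR inputs fed to U-Net.
rgb = np.random.rand(256, 256, 3).astype(np.float32)  # placeholder tiles
rgn = np.random.rand(256, 256, 3).astype(np.float32)
rgb_cme = np.concatenate([rgb, cme_features(rgb)], axis=-1)
rgn_odr = np.concatenate([rgn, odr_features(rgn)], axis=-1)
print(rgb_cme.shape, rgn_odr.shape)  # (256, 256, 6) (256, 256, 6)
```

Of these fused inputs, the six-channel RGB + CME stack built this way is the one the study reports as most accurate (Kappa coefficient 0.9420).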
Pages: 12