Extracting vegetation information from high dynamic range images with shadows: A comparison between deep learning and threshold methods

Cited: 11
Authors
Wang, Zhe [1,2,6]
Chen, Wei [1 ]
Xing, Jianghe [1 ]
Zhang, Xuepeng [3 ]
Tian, Haijing [4 ]
Tang, Hongzhao [5 ]
Bi, Pengshuai [1 ]
Li, Guangchao [1 ]
Zhang, Fengjiao [1 ]
Affiliations
[1] China Univ Min & Technol, Coll Geosci & Surveying Engn, Beijing 100083, Peoples R China
[2] Peking Univ, Sch Urban Planning & Design, Shenzhen Grad Sch, Shenzhen 518055, Peoples R China
[3] Res Ctr Big Data Technol, Nanhu Lab, Jiaxing 314000, Peoples R China
[4] Natl Forestry & Grassland Adm, Acad Inventory & Planning, Beijing 100714, Peoples R China
[5] Minist Nat Resources, Land Satellite Remote Sensing Applicat Ctr, Beijing 100048, Peoples R China
[6] Peking Univ, Shenzhen Grad Sch, Key Lab Earth Surface Syst & Human Earth Relat, Minist Nat Resources China, Shenzhen 518055, Peoples R China
Keywords
Shadow effects; Vegetation extraction; High dynamic range; Deep learning; SEGMENTATION; COVER; LIGHTNESS; SYSTEM
DOI
10.1016/j.compag.2023.107805
Chinese Library Classification
S [Agricultural Sciences]
Subject Classification Code
09
Abstract
Due to factors such as light angle and vegetation density, shadows can occur in ground vegetation images; the resulting highly variable natural illumination greatly limits the accuracy of fractional vegetation cover (FVC) estimation. While high dynamic range (HDR) images can reduce extreme illumination contrast, characteristic vegetation information in shadowed areas is often lost. At present, most threshold methods do not fully exploit the spatial features of vegetation, and vegetation classification under complex shadow conditions is subject to large errors. Thus, two deep learning methods (the HDR U-Net method and the HDR U-Net++ method), based on fully convolutional neural networks, are proposed in this study to extract vegetation information from HDR images under shadow conditions, and are compared with threshold methods. For full HDR images, the kappa coefficients of the HDR U-Net and HDR U-Net++ methods are 0.926 and 0.931, and the mean intersection over union (MIoU) values are 0.874 and 0.887, respectively. The FVC estimation accuracy and vegetation segmentation performance of these two deep learning methods are better than those of the three threshold methods considered. In addition, compared with normal exposure (NE) images, camera-based HDR images improve the vegetation extraction accuracy of both deep learning and threshold methods under shadow conditions. The results of this study suggest that deep learning methods are not affected by the choice of threshold determination strategy and can more completely extract vegetation feature information from HDR images. The classification of vegetation under shadow conditions is effectively improved, and the vegetation segmentation and FVC estimation results are more precise.
The combination of HDR images and deep learning methods can be applied to field scenes with complex lighting, which can improve ground-based validation of remote sensing FVC products.
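The threshold baselines and accuracy measures named in the abstract can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's exact procedure: the excess-green index (ExG = 2G − R − B) with a fixed zero threshold stands in for a generic threshold method, and the toy pixel data and reference mask are invented for demonstration. Kappa, MIoU, and FVC are the quantities the abstract reports.

```python
# Sketch of a threshold-based vegetation extraction baseline and the
# evaluation metrics reported in the abstract (kappa, MIoU, FVC).
# ExG index, threshold value, and toy data are illustrative assumptions.

def exg_mask(pixels, thresh=0.0):
    """Label each (R, G, B) pixel as vegetation (1) or background (0)
    using the excess-green index ExG = 2G - R - B."""
    return [1 if 2 * g - r - b > thresh else 0 for (r, g, b) in pixels]

def fvc(mask):
    """Fractional vegetation cover: share of pixels labelled vegetation."""
    return sum(mask) / len(mask)

def kappa(pred, ref):
    """Cohen's kappa coefficient for two binary masks."""
    n = len(pred)
    po = sum(p == r for p, r in zip(pred, ref)) / n      # observed agreement
    p1, r1 = sum(pred) / n, sum(ref) / n
    pe = p1 * r1 + (1 - p1) * (1 - r1)                   # chance agreement
    return (po - pe) / (1 - pe)

def miou(pred, ref):
    """Mean intersection over union across the two classes."""
    ious = []
    for cls in (0, 1):
        inter = sum(p == cls and r == cls for p, r in zip(pred, ref))
        union = sum(p == cls or r == cls for p, r in zip(pred, ref))
        ious.append(inter / union)
    return sum(ious) / len(ious)

# Toy "image": bright-green, shadowed-green, and soil pixels (hypothetical).
pixels = [(40, 180, 50), (20, 60, 25), (120, 100, 80),
          (30, 150, 40), (15, 40, 20), (130, 110, 90)]
reference = [1, 1, 0, 1, 1, 0]   # hypothetical ground-truth mask

pred = exg_mask(pixels)          # -> [1, 1, 0, 1, 1, 0]
cover = fvc(pred)                # -> 4/6, about 0.667
```

On this toy data the prediction matches the reference exactly, so kappa and MIoU both evaluate to 1.0; with real shadowed imagery, the paper's point is precisely that such index-plus-threshold rules misclassify shadowed vegetation, driving these scores down relative to the U-Net variants.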
Pages: 12