Deep Learning and Phenology Enhance Large-Scale Tree Species Classification in Aerial Imagery during a Biosecurity Response

Cited by: 12
Authors
Pearse, Grant D. [1 ]
Watt, Michael S. [2 ]
Soewarto, Julia [1 ]
Tan, Alan Y. S. [1 ]
Affiliations
[1] Scion, Private Bag 3020, Rotorua 3046, New Zealand
[2] Scion, 10 Kyle St, Christchurch 8011, New Zealand
Keywords
tree species; classification; deep learning; convolutional networks; biosecurity; forest pathology; myrtle rust; urban forestry; machine learning; aerial imagery; PUCCINIA-PSIDII; MYRTLE RUST; TROPICAL FORESTS; MANAGEMENT; LEAF; INDEXES;
DOI
10.3390/rs13091789
Chinese Library Classification (CLC)
X [Environmental Science, Safety Science]
Discipline Classification Codes
08; 0830
Abstract
The ability of deep convolutional neural networks (deep learning) to learn complex visual characteristics offers a new way to classify tree species using lower-cost data such as regional aerial RGB imagery. In this study, we used 10 cm resolution imagery and 4600 trees to develop a deep learning model to identify Metrosideros excelsa (pohutukawa), a culturally important New Zealand tree that displays distinctive red flowers during summer and is under threat from the invasive pathogen Austropuccinia psidii (myrtle rust). Our objectives were to compare the accuracy of deep learning models, which can learn the distinctive visual characteristics of the canopies, with that of tree-based models (XGBoost) that used spectral and textural metrics. We also tested whether the phenology of pohutukawa could be used to enhance classification by using multitemporal aerial imagery that showed the same trees with and without widespread flowering. The XGBoost model achieved an accuracy of 86.7% on the dataset with strong phenology (flowering). Without phenology, its accuracy fell to 79.4% and the model relied on the blueish hue and texture of the canopies. The deep learning model achieved 97.4% accuracy, with 96.5% sensitivity and 98.3% specificity, when leveraging phenology, even though the intensity of flowering varied substantially. Without strong phenology, the accuracy of the deep learning model remained high at 92.7%, with sensitivity of 91.2% and specificity of 94.3%, despite significant variation in the appearance of non-flowering pohutukawa. Pooling the time-series imagery did not enhance either approach: on the pooled imagery, the accuracies of the XGBoost and deep learning models were 83.2% and 95.2%, respectively, intermediate between the accuracies of the models trained on the separate datasets.
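For readers who want to reproduce the style of comparison described in the abstract, the sketch below is a minimal, illustrative outline and not the authors' code: a gradient-boosted tree classifier on hand-crafted spectral/textural features versus a fine-tuned CNN on RGB canopy crops, both scored with the accuracy, sensitivity, and specificity metrics reported above. The feature arrays, the ResNet-50 backbone, and all hyperparameters are assumptions made for illustration; canopy delineation and feature engineering are omitted.

# Minimal, illustrative sketch only (not the authors' code). It contrasts the two
# model families named in the abstract and evaluates them with the same metrics
# (accuracy, sensitivity, specificity). Inputs, backbone, and hyperparameters are
# assumptions; extraction of features from the 10 cm aerial imagery is omitted.

import numpy as np


def binary_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """Accuracy, sensitivity (recall on pohutukawa) and specificity from 0/1 labels."""
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }


def fit_xgboost(features: np.ndarray, labels: np.ndarray):
    """Tree-based baseline on hand-crafted spectral/textural metrics (assumed tabular input)."""
    from xgboost import XGBClassifier  # gradient-boosted trees, as named in the abstract

    model = XGBClassifier(n_estimators=500, max_depth=6, learning_rate=0.05)
    model.fit(features, labels)
    return model


def build_cnn(num_classes: int = 2):
    """Deep learning model for RGB canopy crops; the ResNet-50 backbone is illustrative."""
    import torch.nn as nn
    from torchvision import models

    cnn = models.resnet50(weights="IMAGENET1K_V2")        # ImageNet pretraining assumed
    cnn.fc = nn.Linear(cnn.fc.in_features, num_classes)   # pohutukawa vs. other canopy
    return cnn


if __name__ == "__main__":
    # Quick check of the metric helper on synthetic labels.
    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, size=200)
    y_pred = y_true.copy()
    y_pred[:10] ^= 1  # flip ten predictions to simulate errors
    print(binary_metrics(y_true, y_pred))

Under these assumptions the sketch mirrors the study design only at the interface level; the actual canopy delineation, feature set, and training protocol are described in the paper itself.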
Pages: 16
Related Papers
45 items in total; entries [31]-[40] shown
  • [31] Large-scale land use/land cover extraction from Landsat imagery using feature relationships matrix based deep-shallow learning
    Dou, Peng
    Shen, Huanfeng
    Huang, Chunlin
    Li, Zhiwei
    Mao, Yujun
    Li, Xinghua
    INTERNATIONAL JOURNAL OF APPLIED EARTH OBSERVATION AND GEOINFORMATION, 2024, 129
  • [32] Large-Scale Text Classification Using Scope-Based Convolutional Neural Network: A Deep Learning Approach
    Wang, Jiaying
    Li, Yaxin
    Shan, Jing
    Bao, Jinling
    Zong, Chuanyu
    Zhao, Liang
    IEEE ACCESS, 2019, 7 : 171548 - 171558
  • [33] Deep Learning for Large-Scale Real-World ACARS and ADS-B Radio Signal Classification
    Chen, Shichuan
    Zheng, Shilian
    Yang, Lifeng
    Yang, Xiaoniu
    IEEE ACCESS, 2019, 7 : 89256 - 89264
  • [34] Fine-Grained Large-Scale Vulnerable Communities Mapping via Satellite Imagery and Population Census Using Deep Learning
    Salas, Joaquin
    Vera, Pablo
    Zea-Ortiz, Marivel
    Villasenor, Elio-Atenogenes
    Pulido, Dagoberto
    Figueroa, Alejandra
    REMOTE SENSING, 2021, 13 (18)
  • [35] Rapid and large-scale mapping of flood inundation via integrating spaceborne synthetic aperture radar imagery with unsupervised deep learning
    Jiang, Xin
    Liang, Shijing
    He, Xinyue
    Ziegler, Alan D.
    Lin, Peirong
    Pan, Ming
    Wang, Dashan
    Zou, Junyu
    Hao, Dalei
    Mao, Ganquan
    Zeng, Yelu
    Yin, Jie
    Feng, Lian
    Miao, Chiyuan
    Wood, Eric F.
    Zeng, Zhenzhong
    ISPRS JOURNAL OF PHOTOGRAMMETRY AND REMOTE SENSING, 2021, 178 : 36 - 50
  • [36] Large-scale deep learning based binary and semantic change detection in ultra high resolution remote sensing imagery: From benchmark datasets to urban application
    Tian, Shiqi
    Zhong, Yanfei
    Zheng, Zhuo
    Ma, Ailong
    Tan, Xicheng
    Zhang, Liangpei
    ISPRS JOURNAL OF PHOTOGRAMMETRY AND REMOTE SENSING, 2022, 193 : 164 - 186
  • [37] Deep Learning-Based Land Cover Extraction from Very-High-Resolution Satellite Imagery for Assisting Large-Scale Topographic Map Production
    Hakim, Yofri Furqani
    Tsai, Fuan
    REMOTE SENSING, 2025, 17 (03)
  • [38] Large-Scale Mapping of Tree Species and Dead Trees in Sumava National Park and Bavarian Forest National Park Using Lidar and Multispectral Imagery
    Krzystek, Peter
    Serebryanyk, Alla
    Schnoerr, Claudius
    Cervenka, Jaroslav
    Heurich, Marco
    REMOTE SENSING, 2020, 12 (04)
  • [39] Evaluating the Effectiveness of Machine Learning and Deep Learning Models Combined Time-Series Satellite Data for Multiple Crop Types Classification over a Large-Scale Region
    Wang, Xue
    Zhang, Jiahua
    Xun, Lan
    Wang, Jingwen
    Wu, Zhenjiang
    Henchiri, Malak
    Zhang, Shichao
    Zhang, Sha
    Bai, Yun
    Yang, Shanshan
    Li, Shuaishuai
    Yu, Xiang
    REMOTE SENSING, 2022, 14 (10)
  • [40] UAV4TREE: Deep Learning-Based System for Automatic Classification of Tree Species Using RGB Optical Images Obtained by an Unmanned Aerial Vehicle
    Pierdicca, Roberto
    Nepi, Lindo
    Mancini, Adriano
    Malinverni, Eva Savina
    Balestra, Mattia
    GEOSPATIAL WEEK 2023, VOL. 10-1, 2023, : 1089 - 1096