PhenoNet: A two-stage lightweight deep learning framework for real-time wheat phenophase classification

Cited by: 8
Authors
Zhang, Ruinan [1 ]
Jin, Shichao [1 ]
Zhang, Yuanhao [1 ]
Zang, Jingrong [1 ]
Wang, Yu [1 ]
Li, Qing [1 ]
Sun, Zhuangzhuang [1 ]
Wang, Xiao [1 ]
Zhou, Qin [1 ]
Cai, Jian [1 ]
Xu, Shan [1 ]
Su, Yanjun [2 ]
Wu, Jin [3 ]
Jiang, Dong [1 ]
Affiliations
[1] Nanjing Agr Univ, Acad Adv Interdisciplinary Studies, Plant Phen Res Ctr, Collaborat Innovat Ctr Modern C, Coll Agr, State Key Lab Crop Genet & Germplasm Enha, Nanjing 210095, Peoples R China
[2] Chinese Acad Sci, Inst Bot, State Key Lab Vegetat & Environm Change, Beijing 100093, Peoples R China
[3] Univ Hong Kong, Inst Climate & Carbon Neutral, Sch Biol Sci, Pokfulam Rd, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Wheat phenology; Dataset; Image classification; Deep learning; Transfer learning; Web application; PHENOLOGY; SIMULATION;
DOI
10.1016/j.isprsjprs.2024.01.006
CLC number
P9 [Physical Geography];
Discipline code
0705; 070501;
Abstract
The real-time monitoring of wheat phenology variations among different varieties and their adaptive responses to environmental conditions is essential for advancing breeding efforts and improving cultivation management. Many remote sensing efforts have been made to address the challenges of key phenophase detection. However, existing solutions are not accurate enough to discriminate adjacent phenophases with subtle organ changes, and they are not real-time; for example, vegetation index curve-based methods rely on data spanning the entire growth period, which is only available after the experiment ends. Moreover, improving the efficiency, scalability, and availability of phenological studies remains a key challenge. This study proposes a two-stage deep learning framework called PhenoNet for the accurate, efficient, and real-time classification of key wheat phenophases. PhenoNet comprises a lightweight encoder module (PhenoViT) and a long short-term memory (LSTM) module. The performance of PhenoNet was assessed using a well-labeled, multi-variety, and large-volume dataset (WheatPheno). The results show that PhenoNet achieved an overall accuracy (OA) of 0.945, a kappa coefficient (Kappa) of 0.928, and an F1-score (F1) of 0.941. Additionally, the network parameters (Params), number of operations measured by multiply-adds (MAdds), and graphics processing unit memory required for classification (Memory) were 0.889 million (M), 0.093 giga (G), and 8.0 megabytes (MB), respectively. PhenoNet outperformed eleven state-of-the-art deep learning networks, achieving average improvements of 3.7% in OA, 5.1% in Kappa, and 4.1% in F1, while reducing average Params, MAdds, and Memory by 78.4%, 85.0%, and 75.1%, respectively. Feature visualization and ablation analysis showed that PhenoNet benefits mainly from its use of time-series information and lightweight modules. Furthermore, PhenoNet can be effectively transferred across years, achieving a high OA of 0.981 with a two-stage transfer learning strategy. Finally, an extensible web platform integrating WheatPheno and PhenoNet was developed (https://phenonet.org/), ensuring that the work presented in this study is accessible, interoperable, and reusable.
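The abstract specifies the two-stage design (a lightweight per-image encoder, PhenoViT, followed by an LSTM over the image time series) but not its implementation. Below is a minimal PyTorch sketch of that idea; the encoder is a generic ViT-style stand-in, and all dimensions, the patch size, the sequence length, and the class count (num_classes=7) are assumptions for illustration, not the paper's values.

```python
# Minimal sketch of a two-stage "encoder + LSTM" phenophase classifier.
# The encoder is a generic ViT-style stand-in for PhenoViT; all sizes
# below are illustrative assumptions, not the published architecture.
import torch
import torch.nn as nn

class LightweightEncoder(nn.Module):
    """Stage 1: patch embedding plus a small Transformer encoder."""
    def __init__(self, patch=16, dim=128, depth=2, heads=4):
        super().__init__()
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):                                  # x: (B, 3, H, W)
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)  # (B, N, dim)
        return self.encoder(tokens).mean(dim=1)                  # (B, dim)

class PhenoNetSketch(nn.Module):
    """Stage 2: LSTM over per-frame embeddings, then a linear classifier."""
    def __init__(self, dim=128, hidden=64, num_classes=7):   # 7 is assumed
        super().__init__()
        self.backbone = LightweightEncoder(dim=dim)
        self.lstm = nn.LSTM(dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, clips):                    # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.backbone(clips.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)                # (B, T, hidden)
        return self.head(out[:, -1])             # classify at the last step

model = PhenoNetSketch()
logits = model(torch.randn(2, 5, 3, 224, 224))   # 2 sequences of 5 frames
print(logits.shape)                              # torch.Size([2, 7])
```

Classifying from the last LSTM step means a prediction can be emitted as soon as the newest image arrives, which is one way a sequence model can support the real-time monitoring the abstract describes.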
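The three accuracy metrics reported in the abstract (OA, Kappa, F1) are standard classification scores. For reference, they can be computed with scikit-learn as below; macro averaging for F1 is an assumption, since the record does not state which averaging the paper used.

```python
# Standard computation of OA, Kappa, and F1 with scikit-learn.
# The labels are toy data; macro-averaged F1 is an assumption.
from sklearn.metrics import accuracy_score, cohen_kappa_score, f1_score

y_true = [0, 1, 2, 2, 1, 0, 3, 3]   # ground-truth phenophase labels (toy)
y_pred = [0, 1, 2, 1, 1, 0, 3, 2]   # model predictions (toy)

oa = accuracy_score(y_true, y_pred)              # overall accuracy (OA)
kappa = cohen_kappa_score(y_true, y_pred)        # kappa coefficient (Kappa)
f1 = f1_score(y_true, y_pred, average="macro")   # F1-score (F1)
print(f"OA={oa:.3f}  Kappa={kappa:.3f}  F1={f1:.3f}")
```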
Pages: 136-157
Page count: 22