Combining 2D image and point cloud deep learning to predict wheat above ground biomass

Cited: 0
Authors
Zhu, Shaolong [1 ,2 ]
Zhang, Weijun [1 ,2 ]
Yang, Tianle [1 ,2 ]
Wu, Fei [3 ]
Jiang, Yihan [1 ,2 ]
Yang, Guanshuo [1 ,2 ]
Zain, Muhammad [1 ,2 ]
Zhao, Yuanyuan [1 ,2 ]
Yao, Zhaosheng [1 ,2 ]
Liu, Tao [1 ,2 ]
Sun, Chengming [1 ,2 ]
Affiliations
[1] Yangzhou Univ, Coll Agr, Key Lab Crop Genet & Physiol Jiangsu Prov, Key Lab Crop Cultivat & Physiol Jiangsu Prov, Yangzhou 225009, Peoples R China
[2] Yangzhou Univ, Jiangsu Coinnovat Ctr Modern Prod Technol Grain Cr, Yangzhou 225009, Peoples R China
[3] Tech Univ Munich, Sch Life Sci, Precis Agr Lab, D-85354 Freising Weihenstephan, Germany
Funding
National Natural Science Foundation of China;
Keywords
Wheat; Biomass prediction; Unmanned aerial vehicle; Point cloud deep learning; Multimodal data fusion; VEGETATION INDEXES; CHLOROPHYLL CONTENT; WINTER-WHEAT; LEAF; RGB; CLASSIFICATION; SEGMENTATION; ALGORITHM; CANOPIES; TEXTURES;
DOI
10.1007/s11119-024-10186-1
Chinese Library Classification
S [Agricultural Sciences];
Subject Classification Code
09 ;
Abstract
Purpose: The use of unmanned aerial vehicle (UAV) data for predicting crop above-ground biomass (AGB) is becoming a feasible alternative to destructive sampling. However, canopy height, vegetation indices (VI), and other traditional features saturate during the mid to late stages of crop growth, significantly reducing the accuracy of AGB prediction.
Methods: In 2022 and 2023, UAV multispectral, RGB, and light detection and ranging (LiDAR) point cloud data of wheat populations were collected at seven growth stages across two experimental fields. Point cloud depth features were extracted using an improved PointNet++ network, and AGB was predicted by fusing them with vegetation index (VI), color index (CI), and texture index (TI) raster image features.
Results: The findings indicate that when the point cloud depth features were fused, the R² values predicted from VI, CI, TI, and canopy height model images increased by 0.05, 0.08, 0.06, and 0.07, respectively. For the combination of VI, CI, and TI, R² increased from 0.86 to a maximum of 0.90, while the root-mean-square error (RMSE) and mean absolute error were 1.80 t ha⁻¹ and 1.36 t ha⁻¹, respectively. The hybrid fusion exhibited the highest accuracy and demonstrated robust adaptability in predicting AGB across years, growth stages, crop varieties, nitrogen fertilizer applications, and planting densities.
Conclusion: This study effectively addresses the saturation in spectral and chemical information, provides valuable insights for high-precision phenotyping and advanced crop field management, and serves as a reference for studying other crops and phenotypic parameters.
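The fusion strategy the abstract describes — concatenating point-cloud-derived deep features with VI/CI/TI raster features before regression — can be illustrated with a minimal numpy sketch. This is a hypothetical toy, not the authors' pipeline: the 8-dimensional "depth features" stand in for the output of the improved PointNet++ encoder (which is not reproduced here), the feature sizes and synthetic AGB targets are invented, and a plain least-squares regressor replaces whatever model the paper uses. It only shows why early (feature-level) fusion can lift R² when the target depends on both modalities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: in the paper, depth features come from an improved
# PointNet++ encoder; here we simulate both modalities with random vectors.
n_plots = 60
pc_feats = rng.normal(size=(n_plots, 8))   # simulated point cloud depth features
vi_ci_ti = rng.normal(size=(n_plots, 6))   # simulated VI/CI/TI raster features

# Synthetic AGB (t/ha) that genuinely depends on both modalities plus noise.
w_pc = rng.normal(size=8)
w_img = rng.normal(size=6)
agb = pc_feats @ w_pc + vi_ci_ti @ w_img + rng.normal(scale=0.1, size=n_plots)

def fit_r2(X, y):
    """Ordinary least-squares fit with intercept; return R^2 on the fit data."""
    X1 = np.column_stack([X, np.ones(len(X))])
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    pred = X1 @ coef
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

r2_img = fit_r2(vi_ci_ti, agb)                           # image features only
r2_fused = fit_r2(np.hstack([vi_ci_ti, pc_feats]), agb)  # early fusion
print(f"image-only R2: {r2_img:.3f}, fused R2: {r2_fused:.3f}")
```

Because the simulated AGB carries a point-cloud component that the image-only model cannot explain, the fused model's R² is higher — the same qualitative effect the paper reports (e.g. 0.86 → 0.90 for VI+CI+TI).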
Pages: 3139 - 3166
Page count: 28
Related Papers
50 records total
  • [21] Combining 2D to 2D and 3D to 2D Point Correspondences for Stereo Visual Odometry
    Manthe, Stephan
    Carrio, Adrian
    Neuhaus, Frank
    Campoy, Pascual
    Paulus, Dietrich
    PROCEEDINGS OF THE 13TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS (VISIGRAPP 2018), VOL 5: VISAPP, 2018, : 455 - 463
  • [22] Deep Learning for Image and Point Cloud Fusion in Autonomous Driving: A Review
    Cui, Yaodong
    Chen, Ren
    Chu, Wenbo
    Chen, Long
    Tian, Daxin
    Li, Ying
    Cao, Dongpu
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23 (02) : 722 - 739
  • [23] Estimating the above ground biomass of winter wheat using the Sentinel-2 data
    Zheng Y.
    Wu B.
    Zhang M.
    Yaogan Xuebao/J. Remote Sens., 2: 318 - 328
  • [24] Matching 2D Image Patches and 3D Point Cloud Volumes by Learning Local Cross-domain Feature Descriptors
    Liu, Weiquan
    Lai, Baiqi
    Wang, Cheng
    Bian, Xuesheng
    Wen, Chenglu
    Cheng, Ming
    Zang, Yu
    Xia, Yan
    Li, Jonathan
    2021 IEEE CONFERENCE ON VIRTUAL REALITY AND 3D USER INTERFACES ABSTRACTS AND WORKSHOPS (VRW 2021), 2021, : 516 - 517
  • [25] Reinforcement learning particle swarm optimization based trajectory planning of autonomous ground vehicle using 2D LiDAR point cloud
    Ambuj
    Nagar, Harsh
    Paul, Ayan
    Machavaram, Rajendra
    Soni, Peeyush
    ROBOTICS AND AUTONOMOUS SYSTEMS, 2024, 178
  • [26] Object defect detection based on data fusion of a 3D point cloud and 2D image
    Zhang, Wanning
    Zhou, Fuqiang
    Liu, Yang
    Sun, Pengfei
    Chen, Yuanze
    Wang, Lin
    MEASUREMENT SCIENCE AND TECHNOLOGY, 2023, 34 (02)
  • [27] Flattening-Net: Deep Regular 2D Representation for 3D Point Cloud Analysis
    Zhang, Qijian
    Hou, Junhui
    Qian, Yue
    Zeng, Yiming
    Zhang, Juyong
    He, Ying
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (08) : 9726 - 9742
  • [28] Enhancement of 3D Point Cloud Contents Using 2D Image Super Resolution Network
    Park, Seonghwan
    Kim, Junsik
    Hwang, Yonghae
    Suh, Doug Young
    Kim, Kyuheon
    JOURNAL OF WEB ENGINEERING, 2022, 21 (02): : 425 - 442
  • [29] Point2Mesh-Net: Combining Point Cloud and Mesh-Based Deep Learning for Cardiac Shape Reconstruction
    Beetz, Marcel
    Banerjee, Abhirup
    Grau, Vicente
    STATISTICAL ATLASES AND COMPUTATIONAL MODELS OF THE HEART: REGULAR AND CMRXMOTION CHALLENGE PAPERS, STACOM 2022, 2022, 13593 : 280 - 290
  • [30] Real-time 2D/3D image processing with deep learning
    Kim, Soo Kyun
    Choi, Min-Hyung
    Chun, Junchul
    Jia, Xibin
    MULTIMEDIA TOOLS AND APPLICATIONS, 2021, 80 (28-29) : 35771 - 35771