Multi-task Visual Perception Method in Dragon Orchards Based on OrchardYOLOP

Cited by: 0
Authors
Zhao, Wenfeng [1 ]
Huang, Yuanjue [1 ]
Zhong, Minyue [1 ]
Li, Zhenyuan [1 ]
Luo, Zitao [1 ]
Huang, Jiajun [1 ]
Affiliation
[1] College of Electronic Engineering (College of Artificial Intelligence), South China Agricultural University, Guangzhou 510642, China
Keywords
Autonomous driving; Complex terrain; Dragon orchard; Lighting environment; Multi-task; Object detection; Semantic segmentation; Unstructured environments; Visual perception; YOLOP
DOI
10.6041/j.issn.1000-1298.2024.11.018
Abstract
In the face of challenges such as complex terrain, fluctuating lighting, and unstructured environments, modern orchard robots must efficiently process a vast amount of environmental information. Traditional pipelines that execute multiple single tasks sequentially are limited by the available computing power and cannot meet these demands. To address the real-time performance and accuracy requirements of multi-task autonomous driving robots in dragon fruit orchard environments, OrchardYOLOP was proposed: building upon YOLOP, a focus attention convolution module was introduced, C2f and SPPF modules were employed, and the loss function for the segmentation tasks was optimized. Experiments demonstrated that OrchardYOLOP achieved a precision of 84.1% on the object detection task, an mIoU of 89.7% on the drivable-area segmentation task, and an mIoU of 90.8% on the fruit-tree region segmentation task, with an inference speed of 33.33 frames per second and a parameter count of only 9.67 × 10⁶. Compared with the YOLOP algorithm, it not only met the real-time requirement in terms of speed but also significantly improved accuracy, addressing key issues in multi-task visual perception in dragon fruit orchards and providing an effective solution for multi-task autonomous driving visual perception in unstructured environments. © 2024 Chinese Society of Agricultural Machinery. All rights reserved.
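The record describes the architecture only at a high level: a YOLOP-style network in which a shared encoder (with C2f and SPPF modules and a focus attention convolution) feeds one object detection head and two segmentation heads, one for the drivable area and one for the fruit-tree region. As a rough illustration of that three-task layout, here is a minimal PyTorch-style sketch; the stand-in backbone, channel widths, and head shapes are assumptions for illustration only, not the actual OrchardYOLOP configuration, and the focus attention module is omitted. The SPPF block follows its standard YOLOv5/YOLOv8 form.

import torch
import torch.nn as nn

class SPPF(nn.Module):
    # Spatial Pyramid Pooling - Fast, in its standard YOLOv5/YOLOv8 form:
    # three chained max-pools over a channel-halved feature map, concatenated.
    def __init__(self, c, k=5):
        super().__init__()
        self.cv1 = nn.Conv2d(c, c // 2, 1)
        self.cv2 = nn.Conv2d((c // 2) * 4, c, 1)
        self.pool = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)

    def forward(self, x):
        x = self.cv1(x)
        y1 = self.pool(x)
        y2 = self.pool(y1)
        y3 = self.pool(y2)
        return self.cv2(torch.cat([x, y1, y2, y3], dim=1))

class MultiTaskPerception(nn.Module):
    # Shared encoder feeding one detection head and two segmentation heads,
    # mirroring the three tasks in the abstract. The two plain strided convs
    # are a stand-in for the C2f backbone; all sizes are illustrative.
    def __init__(self, num_classes=1):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.SiLU(),
            SPPF(64),
        )
        # Simplified single-scale detection head: 4 box + 1 objectness + classes.
        self.detect = nn.Conv2d(64, num_classes + 5, 1)
        self.seg_drivable = nn.Conv2d(64, 2, 1)  # drivable area vs. background
        self.seg_trees = nn.Conv2d(64, 2, 1)     # fruit-tree region vs. background

    def forward(self, x):
        f = self.backbone(x)  # shared features reused by all three heads
        return self.detect(f), self.seg_drivable(f), self.seg_trees(f)

if __name__ == "__main__":
    det, drivable, trees = MultiTaskPerception()(torch.randn(1, 3, 256, 256))
    print(det.shape, drivable.shape, trees.shape)  # all at 1/4 input resolution

Sharing one backbone across the three heads is what lets a multi-task network of this kind run a single forward pass where three separate single-task models would need three, which is the speed advantage the abstract's 33.33 frames per second figure reflects.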
Pages: 160-170
Related papers (50 in total)
  • [41] Multiple object tracking method based on multi-task joint learning
    Qu Y.
    Li W.-H.
Jilin Daxue Xuebao (Gongxueban)/Journal of Jilin University (Engineering and Technology Edition), 2023, 53(10): 2932-2941
  • [42] A Speech Enhancement Method Based on Multi-Task Bayesian Compressive Sensing
    You, Hanxu
    Ma, Zhixian
    Li, Wei
    Zhu, Jie
IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2017, E100D(03): 556-563
  • [43] An Analogical Reasoning Method Based on Multi-task Learning with Relational Clustering
    Li, Shuyi
    Wu, Shaojuan
    Zhang, Xiaowang
    Feng, Zhiyong
COMPANION OF THE WORLD WIDE WEB CONFERENCE, WWW 2023, 2023: 144-147
  • [44] Nuclear mass based on the multi-task learning neural network method
    Xing-Chen Ming
    Hong-Fei Zhang
    Rui-Rui Xu
    Xiao-Dong Sun
    Yuan Tian
    Zhi-Gang Ge
Nuclear Science and Techniques, 2022, 33(04): 95-102
  • [45] A Multi-Task Network Based on Dual-Neck Structure for Autonomous Driving Perception
    Tan, Guopeng
    Wang, Chao
    Li, Zhihua
    Zhang, Yuanbiao
    Li, Ruikai
SENSORS, 2024, 24(05)
  • [46] A multi-task deep learning based vulnerability severity prediction method
    Shan, Chun
    Zhang, Ziyi
    Zhou, Siyi
2023 IEEE 12TH INTERNATIONAL CONFERENCE ON CLOUD NETWORKING, CLOUDNET, 2023: 307-315
  • [47] Unsupervised domain adaptation: A multi-task learning-based method
    Zhang, Jing
    Li, Wanqing
    Ogunbona, Philip
    KNOWLEDGE-BASED SYSTEMS, 2019, 186
  • [48] Multi-Task Learning Tracking Method Based on the Similarity of Dynamic Samples
    Shi Zaifeng
    Sun Cheng
    Cao Qingjie
    Wang Zhe
    Fan Qiangqiang
LASER & OPTOELECTRONICS PROGRESS, 2021, 58(16)
  • [49] Few-Shot KBQA Method Based on Multi-Task Learning
    Ren, Yuan
    Li, Xutong
    Liu, Xudong
    Zhang, Richong
2024 IEEE INTERNATIONAL CONFERENCE ON BIG DATA AND SMART COMPUTING, IEEE BIGCOMP 2024, 2024: 226-233