RETRACTED: Deep Learning-Based Leaf Region Segmentation Using High-Resolution Super HAD CCD and ISOCELL GW1 Sensors (Retracted Article)

Times Cited: 2
Authors
Talasila, Srinivas [1 ,2 ]
Rawal, Kirti [1 ]
Sethi, Gaurav [1 ]
Affiliations
[1] Lovely Profess Univ, Sch Elect & Elect Engn, Phagwara, Punjab, India
[2] VNR Vignana Jyothi Inst Engn & Technol, Hyderabad, Telangana, India
Keywords
IMAGES; CLASSIFICATION
DOI
10.1155/2023/1085735
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology]
Discipline Classification Codes
0808; 0809
Abstract
Super HAD CCD and ISOCELL GW1 imaging sensors are widely used in today's high-resolution cameras. In this work, these high-resolution camera sensors were used to acquire images of diseased black gram plant leaves in natural cultivation fields. Segmenting the plant leaf regions from black gram cultivation field images is a preliminary step for disease identification and classification; it also helps farmers assess plant health and identify diseases at an early stage. Although plant leaf region segmentation has been addressed in many contributions, no universally applicable solution exists. Therefore, this article presents an approach for extracting the leaf region from black gram plant leaf images. The novelty of the proposed method is the use of MobileNetV2 as the backbone network for the DeepLabv3+ layers that segment the plant leaf regions. With data augmentation, the DeepLabv3+ model with the MobileNetV2 backbone outperformed the other evaluated models (SegNet, U-Net, and DeepLabv3+ with ResNet18, ResNet50, Xception, and InceptionResNetV2 backbones), achieving an accuracy of 99.71%, a Dice coefficient of 98.72%, and a Jaccard index (IoU) of 97.47%. The algorithms were developed and trained using MATLAB software. Each of the experimental trials reported in this article surpasses prior findings.
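Because the abstract states that the models were developed and trained in MATLAB, the following is a minimal sketch of how a DeepLabv3+ network with a MobileNetV2 backbone and the reported metrics (accuracy, Dice, Jaccard/IoU) can be set up with MATLAB's Computer Vision and Image Processing Toolboxes. It is an illustrative assumption, not the authors' released code: the image size, class names, hyperparameters, and the placeholder masks are invented for the example.

% Hedged sketch (not the authors' code): DeepLabv3+ with a MobileNetV2
% backbone plus the reported evaluation metrics. Sizes, class names, and
% training settings below are illustrative assumptions.
imageSize  = [256 256 3];                 % assumed network input size
classNames = ["leaf" "background"];       % binary leaf-region labels
numClasses = numel(classNames);

% Computer Vision Toolbox: DeepLabv3+ layers over a chosen backbone
% ("mobilenetv2", "resnet18", "resnet50", "xception", or
% "inceptionresnetv2", matching the backbones compared in the abstract).
lgraph = deeplabv3plusLayers(imageSize, numClasses, "mobilenetv2");

opts = trainingOptions("adam", ...        % assumed training settings
    "InitialLearnRate", 1e-3, ...
    "MaxEpochs", 30, ...
    "MiniBatchSize", 8, ...
    "Shuffle", "every-epoch");

% trainingData would pair the (augmented) field images with their
% leaf-region masks, e.g. a combined imageDatastore/pixelLabelDatastore.
% net = trainNetwork(trainingData, lgraph, opts);

% Placeholder masks so the metric lines run; in practice predMask would
% come from semanticseg(image, net) and gtMask from the labeled data.
predMask = false(256); predMask(60:200, 60:200) = true;
gtMask   = false(256); gtMask(64:204, 64:204)   = true;

diceScore    = dice(predMask, gtMask);      % 2|A∩B| / (|A| + |B|)
jaccardScore = jaccard(predMask, gtMask);   % |A∩B| / |A∪B|
pixAccuracy  = nnz(predMask == gtMask) / numel(gtMask);

In practice, these per-image scores would be averaged over a held-out test set to obtain figures comparable to the percentages quoted in the abstract.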
Pages: 20
References
39 references in total
  • [1] An ensemble architecture of deep convolutional Segnet and Unet networks for building semantic segmentation from high-resolution aerial images
    Abdollahi, Abolfazl
    Pradhan, Biswajeet
    Alamri, Abdullah M.
    [J]. GEOCARTO INTERNATIONAL, 2022, 37 (12) : 3355 - 3370
  • [2] SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation
    Badrinarayanan, Vijay
    Kendall, Alex
    Cipolla, Roberto
    [J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2017, 39 (12) : 2481 - 2495
  • [3] Buoncompagni S., 2015, Proceedings of the British Machine Vision Conference (BMVC), P1331, DOI 10.5244/C.29.133
  • [4] Cerutti Guillaume, 2011, Advances in Visual Computing. Proceedings 7th International Symposium, ISVC 2011, P202
  • [5] Identifying crop water stress using deep learning models
    Chandel, Narendra Singh
    Chakraborty, Subir Kumar
    Rajwade, Yogesh Anand
    Dubey, Kumkum
    Tiwari, Mukesh K.
    Jat, Dilip
    [J]. NEURAL COMPUTING & APPLICATIONS, 2021, 33 (10) : 5353 - 5367
  • [6] Chen LC, 2017, arXiv, DOI arXiv:1706.05587
  • [7] Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation
    Chen, Liang-Chieh
    Zhu, Yukun
    Papandreou, George
    Schroff, Florian
    Adam, Hartwig
    [J]. COMPUTER VISION - ECCV 2018, PT VII, 2018, 11211 : 833 - 851
  • [8] Automatic and Reliable Leaf Disease Detection Using Deep Learning Techniques
    Chowdhury, Muhammad E. H.
    Rahman, Tawsifur
    Khandakar, Amith
    Ayari, Mohamed Arselene
    Khan, Aftab Ullah
    Khan, Muhammad Salman
    Al-Emadi, Nasser
    Reaz, Mamun Bin Ibne
    Islam, Mohammad Tariqul
    Ali, Sawal Hamid Md
    [J]. AGRIENGINEERING, 2021, 3 (02): : 294 - 312
  • [9] Segmentation of Multiple Tree Leaves Pictures with Natural Backgrounds using Deep Learning for Image-Based Agriculture Applications
    Gimenez-Gallego, Jaime
    Gonzalez-Teruel, Juan D.
    Jimenez-Buendia, Manuel
    Toledo-Moreo, Ana B.
    Soto-Valles, Fulgencio
    Torres-Sanchez, Roque
    [J]. APPLIED SCIENCES-BASEL, 2020, 10 (01):
  • [10] A survey of deep learning techniques for autonomous driving
    Grigorescu, Sorin
    Trasnea, Bogdan
    Cocias, Tiberiu
    Macesanu, Gigel
    [J]. JOURNAL OF FIELD ROBOTICS, 2020, 37 (03) : 362 - 386