Robust weed recognition through color based image segmentation and convolution neural network based classification

Cited by: 0
|
Authors
Khan, M. Nazmuzzaman [1 ]
Anwar, Sohel [1 ]
Affiliations
[1] Indiana Univ Purdue Univ Indianapolis, Mech & Energy Engn Dept, Indianapolis, IN 46202 USA
Keywords
Image-segmentation; image-classification; precision-farming
DOI
None
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Current image classification techniques for weed detection (classic vision techniques and deep neural networks) provide encouraging results under controlled environments, but most of these algorithms are not robust enough for real-world application. Different lighting conditions and shadows directly affect vegetation color: varying outdoor lighting produces differences in color, noise level, contrast, and brightness, and a high illumination component saturates the sensor (industrial camera). As a result, threshold-based classification algorithms usually fail. To overcome this shortfall, we use visible spectral-index based segmentation to separate weeds from the background. The mean, variance, kurtosis, and skewness of each input image are calculated to determine whether the image quality is good or bad; bad-quality images are enhanced with contrast-limited adaptive histogram equalization (CLAHE) before segmentation. A convolutional neural network (CNN) classifier is then trained to distinguish three weed species common in corn fields (ragweed, pigweed, and cocklebur). The main objective of this work is to construct a robust classifier capable of classifying these three weed species in the presence of occlusion, noise, illumination variation, and motion blur. The proposed histogram-statistics-based image enhancement process resolves weed mis-segmentation under extreme lighting conditions, and the CNN-based classifier achieves accurate, robust classification under low-to-mid-level motion blur and various noise levels.
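The pre-processing pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the specific quality thresholds and the choice of Excess Green (ExG) as the visible spectral index are assumptions, since the record does not specify them, and the CLAHE and CNN stages are omitted.

```python
import numpy as np


def histogram_stats(gray):
    """Mean, variance, skewness, and excess kurtosis of pixel intensities."""
    x = gray.astype(np.float64).ravel()
    mu = x.mean()
    var = x.var()
    sd = np.sqrt(var) + 1e-12  # guard against division by zero
    skew = np.mean(((x - mu) / sd) ** 3)
    kurt = np.mean(((x - mu) / sd) ** 4) - 3.0
    return mu, var, skew, kurt


def is_bad_quality(gray, mean_lo=60.0, mean_hi=190.0, var_lo=400.0):
    """Flag over/under-exposed or low-contrast frames (assumed thresholds).

    A frame judged 'bad' would be enhanced (e.g. with CLAHE) before
    segmentation, per the pipeline in the abstract.
    """
    mu, var, _skew, _kurt = histogram_stats(gray)
    return bool(mu < mean_lo or mu > mean_hi or var < var_lo)


def exg_segment(rgb, thresh=20.0):
    """Excess Green index: ExG = 2g - r - b; vegetation where ExG > thresh."""
    r, g, b = (rgb[..., i].astype(np.float64) for i in range(3))
    exg = 2.0 * g - r - b
    return exg > thresh


# Toy image: a green "vegetation" patch on a grey "soil" background.
img = np.full((4, 4, 3), 120, dtype=np.uint8)
img[:2, :2] = (30, 200, 30)
mask = exg_segment(img)  # True only on the green patch
```

With thresholded ExG, strong illumination shifts still move the index, which is why the quality check and enhancement step precede segmentation in the described pipeline.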
Pages: 7
Related Papers
50 records in total
  • [31] Hyperspectral Image Classification Based on Hyperpixel Segmentation and Convolutional Neural Network
    Chen, Rujun
    Pu, Yunwei
    Wu, Fengzhen
    Liu, Yuceng
    Qi, Li
    LASER & OPTOELECTRONICS PROGRESS, 2023, 60 (16)
  • [32] Semantic Segmentation Based on Deep Convolution Neural Network
    Shan, Jichao
    Li, Xiuzhi
    Jia, Songmin
    Zhang, Xiangyin
    3RD ANNUAL INTERNATIONAL CONFERENCE ON INFORMATION SYSTEM AND ARTIFICIAL INTELLIGENCE (ISAI2018), 2018, 1069
  • [33] Convolutional Neural Network Image Classification Based on Different Color Spaces
    Xian, Zixiang
    Huang, Rubing
    Towey, Dave
    Yue, Chuan
    TSINGHUA SCIENCE AND TECHNOLOGY, 2025, 30 (01): : 402 - 417
  • [34] Oil Spill Segmentation of SAR Image Based on Improved Deep Convolution Neural Network
    Luo, Dan
    Gu, Chunliang
    Chen, Peng
    Yang, Jingsong
    Yuan, Yeping
    Zheng, Gang
    Ren, Lin
    2021 PHOTONICS & ELECTROMAGNETICS RESEARCH SYMPOSIUM (PIERS 2021), 2021, : 2232 - 2237
  • [35] Noise-robust speech recognition in mobile network based on convolution neural networks
    Lallouani Bouchakour
    Mohamed Debyeche
    International Journal of Speech Technology, 2022, 25 : 269 - 277
  • [36] Noise-robust speech recognition in mobile network based on convolution neural networks
    Bouchakour, Lallouani
    Debyeche, Mohamed
    INTERNATIONAL JOURNAL OF SPEECH TECHNOLOGY, 2022, 25 (01) : 269 - 277
  • [37] Color image hybrid noise filtering algorithm based on deep convolution neural network
    Yu, Yongfei
    Yan, Yuanjian
    SYSTEMS AND SOFT COMPUTING, 2024, 6
  • [38] A neural network based color document segmentation
    Han, HY
    IS&T'S NIP19: INTERNATIONAL CONFERENCE ON DIGITAL PRINTING TECHNOLOGIES, 2003, : 859 - 864
  • [39] Deep convolution neural network for image recognition
    Traore, Boukaye Boubacar
    Kamsu-Foguem, Bernard
    Tangara, Fana
    ECOLOGICAL INFORMATICS, 2018, 48 : 257 - 268
  • [40] A novel image segmentation combined color recognition algorithm through boundary detection and deep neural network
    Liu Y.
    Yang J.
    Guo B.
    Yang J.
    Zhang X.
    International Journal of Multimedia and Ubiquitous Engineering, 2016, 11 (02): : 331 - 342