Wild Mushroom Classification Based on Improved MobileViT Deep Learning

Times Cited: 4
Authors
Peng, Youju [1 ]
Xu, Yang [1 ,2 ]
Shi, Jin [1 ]
Jiang, Shiyi [1 ]
Affiliations
[1] Guizhou Univ, Coll Big Data & Informat Engn, Guiyang 550025, Peoples R China
[2] Guiyang Aluminum & Magnesium Design & Res Inst Co, Guiyang 550009, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2023, Vol. 13, Issue 8
Keywords
attention mechanism; fine-grained; feature fusion; MobileViT;
DOI
10.3390/app13084680
CLC Classification Number
O6 [Chemistry];
Discipline Code
0703;
Abstract
Wild mushrooms are not only tasty but also rich in nutritional value, yet it is difficult for non-specialists to distinguish poisonous wild mushrooms accurately. Given the frequent occurrence of wild mushroom poisoning, we propose a new multidimensional feature fusion attention network (M-ViT) that combines convolutional networks (ConvNets) and attention networks to compensate for the shortcomings of pure ConvNets and pure attention networks. First, we introduced a Squeeze-and-Excitation (SE) attention module into the MobileNetV2 (MV2) blocks of the network to enhance the channel-wise feature representation. Then, we designed a Multidimensional Attention (MDA) module that guides the network, through short connections, to thoroughly learn and exploit both local and global features. Moreover, using an Atrous Spatial Pyramid Pooling (ASPP) module to capture longer-range relations, we fused features from different layers of the model and used the resulting joint features for wild mushroom classification. We validated the model on two datasets, mushroom and MO106, and M-ViT performed best on both test sets, with accuracies of 96.21% and 91.83%, respectively. We also compared our method with advanced ConvNets and attention networks (Transformers), and it achieved good results.
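As a rough illustration of the abstract's first modification (inserting a Squeeze-and-Excitation block into a MobileNetV2 inverted-residual block), the sketch below is a minimal PyTorch example. It is not the authors' implementation; the layer widths, activation choice (SiLU), and reduction ratio are assumptions made here for illustration only.

```python
# Minimal sketch (not the paper's code): an SE channel-attention block inserted
# into a MobileNetV2-style inverted-residual (MV2) block. Sizes are illustrative.
import torch
import torch.nn as nn


class SEBlock(nn.Module):
    """Squeeze-and-Excitation: reweight channels using global context."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # squeeze: global average pool
            nn.Conv2d(channels, channels // reduction, 1),
            nn.SiLU(),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                  # excitation: per-channel weights
        )

    def forward(self, x):
        return x * self.gate(x)


class MV2SE(nn.Module):
    """MobileNetV2 inverted residual with an SE block (illustrative assumption)."""
    def __init__(self, in_ch: int, out_ch: int, expand: int = 4, stride: int = 1):
        super().__init__()
        hidden = in_ch * expand
        self.use_res = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 1, bias=False),                 # pointwise expand
            nn.BatchNorm2d(hidden), nn.SiLU(),
            nn.Conv2d(hidden, hidden, 3, stride, 1,
                      groups=hidden, bias=False),                    # depthwise conv
            nn.BatchNorm2d(hidden), nn.SiLU(),
            SEBlock(hidden),                                         # channel attention
            nn.Conv2d(hidden, out_ch, 1, bias=False),                # pointwise project
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        y = self.block(x)
        return x + y if self.use_res else y


if __name__ == "__main__":
    x = torch.randn(1, 32, 56, 56)
    print(MV2SE(32, 32)(x).shape)  # torch.Size([1, 32, 56, 56])
```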
Pages: 18