STEFF: Spatio-temporal EfficientNet for dynamic texture classification in outdoor scenes

Cited: 2
Authors
Mouhcine, Kaoutar [1 ]
Zrira, Nabila [2 ]
Elafi, Issam [3 ]
Benmiloud, Ibtissam [1 ]
Khan, Haris Ahmad [4 ,5 ]
Affiliations
[1] Natl Super Sch Mines Rabat, CPS2E Lab, MECAtron Team, Rabat 10080, Morocco
[2] Natl Super Sch Mines Rabat, LISTD Lab, ADOS Team, Rabat 10080, Morocco
[3] Mohammed V Univ, Fac Sci, Lab Concept & Syst Elect Signals & Informat, Rabat 10102, Morocco
[4] Wageningen Univ & Res, Agr Biosyst Engn Grp, Wageningen, Netherlands
[5] Syngenta, Crop Protect Dev, Data Sci, Enkhuizen, Netherlands
Keywords
STEFF; Dynamic texture; Outdoor scene classification; Deep learning; CNN; EfficientNet; Spatio-temporal features; FEATURE-EXTRACTION; FACIAL EXPRESSION; RECOGNITION; VIDEO; PATTERNS; FEATURES; DESCRIPTORS; OBJECT; MODEL;
DOI
10.1016/j.heliyon.2024.e25360
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Discipline codes
07; 0710; 09;
Abstract
In recent years, dynamic texture classification has become an important task in computer vision. It is challenging because the spatial and temporal structure of dynamic textures is not known in advance. To address this challenge, we investigate the potential of deep learning approaches and propose a novel spatio-temporal approach (STEFF) for dynamic texture classification that combines the representational power of motion and appearance using difference and average operators over video sequences. In this work, we extract deep texture features from outdoor scenes and feed both spatial and temporal features into a pre-trained Convolutional Neural Network, namely EfficientNet, with fine-tuning and regularization. The robustness of the proposed approach is reflected in its promising results compared with alternative architectures and other existing models. Experimental results on three datasets demonstrate the effectiveness and efficiency of the proposed approach, with accuracies of 95.95%, 94.09%, and 98.01% on the outdoor scenes of the Yupenn, DynTex++, and Yupenn++ datasets, respectively.
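The fusion idea in the abstract can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' implementation: it assumes the difference and average operators act on consecutive frames of a grayscale video, with the two cues stacked as channels for a CNN backbone such as EfficientNet; the function name and tensor layout are our own assumptions.

```python
import numpy as np

def spatiotemporal_features(frames):
    """Hypothetical sketch of difference/average fusion for dynamic textures.

    For each pair of consecutive frames, the temporal difference captures
    motion and the average captures appearance.  The exact operators and
    layout used by STEFF may differ.
    """
    frames = np.asarray(frames, dtype=np.float32)   # (T, H, W) grayscale video
    diff = frames[1:] - frames[:-1]                 # motion cue
    avg = 0.5 * (frames[1:] + frames[:-1])          # appearance cue
    # Stack motion and appearance as channels, e.g. as CNN backbone input.
    return np.stack([diff, avg], axis=-1)           # (T - 1, H, W, 2)

video = np.random.rand(8, 64, 64)                   # toy 8-frame clip
feats = spatiotemporal_features(video)
print(feats.shape)  # (7, 64, 64, 2)
```

In this sketch, a static (unchanging) video yields an all-zero motion channel while the appearance channel reproduces the frame content, which is the intuition behind separating the two cues.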
Pages: 23
Related papers: 119 in total