iELMNet: Integrating Novel Improved Extreme Learning Machine and Convolutional Neural Network Model for Traffic Sign Detection

Times Cited: 6
Authors
Batool, Aisha [1 ]
Nisar, Muhammad Wasif [1 ]
Shah, Jamal Hussain [1 ]
Khan, Muhammad Attique [2 ]
El-Latif, Ahmed A. Abd [3 ]
Affiliations
[1] COMSATS Univ Islamabad, Dept Comp Sci, Wah Campus, Islamabad 45550, Punjab, Pakistan
[2] HITEC Univ Taxila, Dept Comp Sci, Taxila, Pakistan
[3] Menoufia Univ, Fac Sci, Dept Math & Comp Sci, Shibin Al Kawm, Egypt
Keywords
convolutional neural network; iELMNet; improved extreme learning machine; scale transformation; traffic sign detection; CLASSIFICATION; SELECTION; FRAMEWORK; STRATEGY; FEATURES; ENTROPY; SYSTEM;
DOI
10.1089/big.2021.0279
Chinese Library Classification (CLC)
TP39 [Computer Applications];
Subject Classification Codes
081203; 0835;
Abstract
Traffic sign detection (TSD) in real-time environments is critical for applications such as autonomous vehicles. The large variety of traffic signs, their differing appearances, and their spatial representations cause substantial intraclass variation. In this article, an extreme learning machine (ELM), convolutional neural network (CNN), and scale transformation (ST)-based model, called the improved extreme learning machine network (iELMNet), is proposed to detect traffic signs in real-time environments. The proposed model comprises a custom DenseNet-based CNN architecture, an improved version of the region proposal network called the accurate anchor prediction model (A2PM), an ST module, and an ELM module. The CNN architecture uses handcrafted features, such as the scale-invariant feature transform (SIFT) and Gabor filters, to refine the edges of traffic signs. The A2PM minimizes redundancy among the extracted features to improve efficiency, and the ST module enables the model to detect traffic signs of different sizes. The ELM module further improves efficiency by reshaping the features. The proposed model is evaluated on three publicly available data sets, Challenging Unreal and Real Environments for Traffic Sign Recognition (CURE-TSR), Tsinghua-Tencent 100K, and the German Traffic Sign Detection Benchmark (GTSDB), achieving average precisions of 93.31%, 95.22%, and 99.45%, respectively. These results show that the proposed model is more efficient than state-of-the-art sign detection techniques.
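The ELM module named in the abstract follows the standard extreme learning machine formulation: hidden-layer input weights are set at random and only the output weights are solved in closed form via a pseudoinverse. The sketch below is a minimal, generic ELM classifier in Python (NumPy) operating on pre-extracted CNN feature vectors; it is an illustrative assumption, not the authors' iELMNet implementation, and all names and shapes are hypothetical.

```python
# Minimal sketch of a standard Extreme Learning Machine (ELM) classifier,
# assuming feature vectors have already been extracted by a CNN backbone.
# Illustrative only; not the iELMNet code from the paper.
import numpy as np

class SimpleELM:
    def __init__(self, n_hidden=512, random_state=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(random_state)

    def fit(self, X, y):
        # X: (n_samples, n_features) CNN feature vectors
        # y: (n_samples,) integer class labels in {0, ..., n_classes-1}
        n_features = X.shape[1]
        n_classes = int(y.max()) + 1
        # Input weights and biases are drawn at random and never trained.
        self.W = self.rng.standard_normal((n_features, self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)   # hidden-layer activations
        T = np.eye(n_classes)[y]           # one-hot target matrix
        # Output weights solved in closed form with the Moore-Penrose pseudoinverse.
        self.beta = np.linalg.pinv(H) @ T
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return (H @ self.beta).argmax(axis=1)

# Example usage with random stand-in features (illustration only):
# X_train = np.random.rand(100, 256)
# y_train = np.random.randint(0, 4, 100)
# preds = SimpleELM(n_hidden=128).fit(X_train, y_train).predict(X_train)
```

Because the output weights are obtained analytically rather than by gradient descent, this kind of classifier head trains quickly, which is consistent with the efficiency role the abstract attributes to the ELM module.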
Pages: 323-338
Page count: 16