Neural Architecture Search Net-Based Feature Extraction With Modular Neural Network for Image Classification of Copper/Cobalt Raw Minerals

Cited by: 10
Authors
Dahouda, Mwamba Kasongo [1 ]
Joe, Inwhee [1 ]
Affiliation
[1] Hanyang Univ, Dept Comp Sci, Seoul 04763, South Korea
Keywords
Feature extraction; Neural networks; Image classification; Minerals; Deep learning; Convolutional neural networks; Training; Image preprocessing; machine learning
DOI
10.1109/ACCESS.2022.3187420
CLC Classification Code
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Image processing is one of today's most rapidly evolving technologies: it applies operations to an image to improve it or to extract relevant information from it, and it is a critical research field in engineering and computer science. However, analyzing a large number of variables demands considerable memory and processing resources, which can cause a classification algorithm to overfit the training samples and generalize poorly to the test samples. As a result, strategies such as feature extraction can be used to reduce the number of features in a dataset by deriving new features from the existing ones. In this paper, we first propose a deep learning-based feature extraction approach with a modular neural network, in which we employ a pre-trained neural architecture search network (NASNet) as a feature extractor on a custom dataset of raw copper and cobalt images. The input image is fed forward through the network, which performs feature learning and feature-map generation; the forward pass stops at a pooling layer before the fully connected (FC) layer of the NASNet, and the outputs of that layer are extracted and saved to dumped files. Second, the extracted features are used as training data to build a deep neural network and machine learning algorithms for the image classification of copper and cobalt raw minerals. The experimental results show that the NASNet extracts the features efficiently and that the proposed modular neural network performs well with a boosted decision tree as the classifier, which achieves a higher accuracy of 91% versus 90% for the deep neural network; moreover, its precision is 1.00 compared with 0.98 for the deep neural network.
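The second stage described in the abstract (training a classifier on the dumped feature vectors) can be sketched as follows. This is a minimal illustration, not the authors' code: scikit-learn's `GradientBoostingClassifier` stands in for the boosted decision tree, and random vectors stand in for the feature files saved from NASNet's pooling layer; the dataset size, feature dimension, and class-dependent shift are all assumptions made for the sketch.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for the feature vectors dumped from NASNet's pooling
# layer: 400 "images", 256-dimensional features, two classes
# (0 = copper, 1 = cobalt) made separable by a small class-dependent shift.
X = rng.normal(size=(400, 256))
y = rng.integers(0, 2, size=400)
X[y == 1] += 0.5

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# Boosted decision trees: the classifier family the paper reports as best.
clf = GradientBoostingClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
acc = clf.score(X_test, y_test)
print(f"test accuracy: {acc:.2f}")
```

Decoupling extraction from classification in this way means the expensive convolutional forward pass runs once per image, after which many lightweight classifiers can be trained and compared on the saved features.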
Pages: 72253-72262
Number of pages: 10