DeepShip: An underwater acoustic benchmark dataset and a separable convolution based autoencoder for classification

Cited by: 168
Authors
Irfan, Muhammad [1 ]
Zheng, Jiangbin [1]
Ali, Shahid [2 ]
Iqbal, Muhammad [3 ]
Masood, Zafar [1 ]
Hamid, Umar [4 ]
Affiliations
[1] Northwestern Polytech Univ, Sch Software, Xian, Peoples R China
[2] CESAT, Islamabad, Pakistan
[3] Higher Coll Technol, Fac Comp & Informat Sci, Fujairah, U Arab Emirates
[4] Comsats Univ, Islamabad, Pakistan
Keywords
Underwater acoustics; Ship classification; Underwater dataset; Deep convolutional network; Target classification; Radiated noise; Extraction; Features
DOI
10.1016/j.eswa.2021.115270
CLC number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Underwater acoustic classification is a challenging problem because of the high background noise and complex sound propagation patterns in the sea environment. Various algorithms proposed in recent years were designed and validated on privately collected datasets that are not publicly available, so research in this field urgently needs a public dataset. To bridge this gap, we construct and present an underwater acoustic dataset, named DeepShip, which consists of 47 h and 4 min of real-world underwater recordings of 265 different ships belonging to four classes. The dataset includes recordings collected throughout the year under different sea states and noise levels; it will not only help evaluate the performance of existing algorithms but also benefit the research community in the future. Using the proposed dataset, we also conducted a comprehensive study of various machine learning and deep learning algorithms on six time-frequency features. In addition, we propose a novel separable convolution based autoencoder network for better classification accuracy. Experimental results, compared in terms of classification accuracy, precision, recall, and F1-score and analyzed with a paired-sample t-test, show that the proposed network achieves a classification accuracy of 77.53% using the CQT feature, which is better than that achieved by the other methods.
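A minimal sketch of why the separable convolution at the heart of the proposed autoencoder is cheaper than a standard convolution: a k x k kernel spanning C_in channels for each of C_out outputs is factored into a depthwise k x k stage (one kernel per input channel) followed by a pointwise 1 x 1 stage that mixes channels. The layer sizes below are illustrative, not taken from the paper.

```python
# Compare trainable-parameter counts (biases ignored) for a standard
# 2-D convolution vs. its depthwise separable factorization.

def standard_conv_params(k: int, c_in: int, c_out: int) -> int:
    # One k x k kernel spanning all input channels, per output channel.
    return k * k * c_in * c_out

def separable_conv_params(k: int, c_in: int, c_out: int) -> int:
    # Depthwise: one k x k kernel per input channel.
    # Pointwise: 1 x 1 convolution mixing c_in channels into c_out.
    return k * k * c_in + c_in * c_out

std = standard_conv_params(3, 64, 128)   # 73728 parameters
sep = separable_conv_params(3, 64, 128)  # 8768 parameters
print(std, sep, round(std / sep, 1))     # roughly an 8x reduction here
```

The parameter (and multiply-accumulate) savings are what make stacking such layers in an encoder-decoder practical for spectrogram-sized inputs.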
Pages: 12