Steganalysis of AMR Speech Stream Based on Multi-Domain Information Fusion

Cited by: 1
Authors
Guo, Chuanpeng [1 ]
Yang, Wei [1 ]
Huang, Liusheng [1 ]
Affiliations
[1] Univ Sci & Technol China, Sch Comp Sci & Technol, Hefei 230026, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Feature extraction; Steganography; Speech coding; Speech processing; Correlation; Random variables; Redundancy; AMR Steganalysis; Markov Chain; Bayesian Network; Feature Selection; Compressed Speech; STEGANOGRAPHY; NETWORKS; SCHEME;
DOI
10.1109/TASLP.2024.3408033
Chinese Library Classification (CLC)
O42 [Acoustics];
Discipline classification code
070206; 082403;
Abstract
Traditional machine learning-based steganalysis of compressed speech in VoIP applications has achieved great success. These methods, however, face a trade-off between modeling the steganographic carrier effectively and keeping the dimensionality of the extracted features manageable, and most of them do not perform well on small-sized, low-embedding-rate samples. To address this issue, we present MDoIF, an Adaptive Multi-Rate (AMR) compressed-speech steganalysis method based on multi-domain information fusion. To fully capture the change in carrier correlation caused by VoIP steganography, we construct a Bayesian network whose vertices are the fixed codebook (FCB) parameters of the compressed speech and quantify the link strength between codebook parameters. On this basis, we design a multi-domain feature extraction algorithm, supplemented by a feature selection algorithm based on an information-theoretic measure for dimensionality reduction, which significantly improves the performance of MDoIF. To evaluate our method, we conduct comprehensive experiments on MDoIF and existing models. Experimental results show that MDoIF performs effectively on various AMR steganalysis tasks with excellent detection accuracy, and for small-sized, low-embedding-rate samples in particular it surpasses state-of-the-art methods.
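As a rough illustration of the kind of pipeline the abstract describes, and not the authors' MDoIF implementation, the sketch below quantifies pairwise "link strength" between hypothetical FCB parameter tracks via mutual information and then ranks fused features with an information-theoretic score. The synthetic data, the discretization, and all variable names are illustrative assumptions.

```python
# Hedged sketch: mutual-information "link strength" between hypothetical FCB
# parameter tracks, plus information-theoretic feature ranking.
# All data and names here are illustrative; this is not the MDoIF algorithm itself.
import numpy as np
from sklearn.metrics import mutual_info_score
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)

# Synthetic stand-in for decoded AMR fixed-codebook (FCB) pulse positions:
# n_frames frames, n_tracks pulse-position tracks per frame, values in 0..7.
n_frames, n_tracks = 2000, 4
fcb = rng.integers(0, 8, size=(n_frames, n_tracks))

# "Link strength" between two codebook parameters, estimated here as the
# mutual information of their discrete value sequences (an assumption that
# stands in for the Bayesian-network edge weighting described in the abstract).
link_strength = np.zeros((n_tracks, n_tracks))
for i in range(n_tracks):
    for j in range(n_tracks):
        link_strength[i, j] = mutual_info_score(fcb[:, i], fcb[:, j])

# Toy multi-domain feature matrix (e.g., flattened intra-frame and inter-frame
# transition statistics per sample) with binary cover/stego labels.
n_samples, n_features = 400, 64
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, 2, size=n_samples)

# Information-theoretic feature selection: keep the k features sharing the
# most estimated mutual information with the class label.
k = 16
scores = mutual_info_classif(X, y, random_state=0)
selected = np.argsort(scores)[::-1][:k]

print("link strength matrix:\n", np.round(link_strength, 3))
print("indices of selected features:", selected)
```

On real data, the feature matrix would come from the decoded AMR bitstream rather than random draws, and the selected low-dimensional feature set would feed a conventional classifier such as an SVM.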
Pages: 4077-4090
Page count: 14
Related papers
50 records in total
[31]   Multi-Stream Convolution-Recurrent Neural Networks Based on Attention Mechanism Fusion for Speech Emotion Recognition [J].
Tao, Huawei ;
Geng, Lei ;
Shan, Shuai ;
Mai, Jingchao ;
Fu, Hongliang .
ENTROPY, 2022, 24 (08)
[32]   Speech emotion recognition based on multi-feature and multi-lingual fusion [J].
Wang, Chunyi ;
Ren, Ying ;
Zhang, Na ;
Cui, Fuwei ;
Luo, Shiying .
MULTIMEDIA TOOLS AND APPLICATIONS, 2022, 81 :4897-4907
[33]   Intelligent Bearing Fault Diagnosis Based on Feature Fusion of One-Dimensional Dilated CNN and Multi-Domain Signal Processing [J].
Dong, Kaitai ;
Lotfipoor, Ashkan .
SENSORS, 2023, 23 (12)
[34]   Blockchain-based cross-domain authentication in a multi-domain Internet of drones environment [J].
Karmegam, Arivarasan ;
Tomar, Ashish ;
Tripathi, Sachin .
JOURNAL OF SUPERCOMPUTING, 2024, 80 (19) :27095-27122
[35]   Differentiated Embedded Pilot Assisted Automatic Modulation Classification for OTFS System: A Multi-Domain Fusion Approach [J].
Liu, Zhenkai ;
Zhang, Bibo ;
Luo, Hao ;
He, Hao .
SENSORS, 2025, 25 (14)
[36]   Underwater target material classification method based on multi-domain feature extraction [J].
Han, N. ;
Wang, Y. .
Dongnan Daxue Xuebao (Ziran Kexue Ban)/Journal of Southeast University (Natural Science Edition), 2024, 54 (03) :781-788
[37]   EEG multi-domain feature transfer based on sparse regularized Tucker decomposition [J].
Gao, Yunyuan ;
Zhang, Congrui ;
Huang, Jincheng ;
Meng, Ming .
COGNITIVE NEURODYNAMICS, 2024, 18 (01) :185-197
[38]   Path Protection with Hierarchical PCE in GMPLS-Based Multi-domain WSONs [J].
Giorgetti, A. ;
Fazel, S. ;
Paolucci, F. ;
Cugini, F. ;
Castoldi, P. .
IEEE COMMUNICATIONS LETTERS, 2013, 17 (06) :1268-1271
[39]   Classification of Speech Signal based on Feature Fusion in Time and Frequency Domain [J].
Kristomo, Domy ;
Nugroho, Fx Henry .
2021 4TH INTERNATIONAL SEMINAR ON RESEARCH OF INFORMATION TECHNOLOGY AND INTELLIGENT SYSTEMS (ISRITI 2021), 2020,
[40]   Audio steganalysis using multi-scale feature fusion-based attention neural network [J].
Peng, Jinghui ;
Liao, Yi ;
Tang, Shanyu .
IET COMMUNICATIONS, 2025, 19 (01)