Classification-Driven Discrete Neural Representation Learning for Semantic Communications

Citations: 0
Authors
Hua, Wenhui [1 ]
Xiong, Longhui [2 ]
Liu, Sicong [2 ,3 ,4 ]
Chen, Lingyu [5 ]
Hong, Xuemin [2 ]
Mota, Joao F. C. [4 ]
Cheng, Xiang [5 ]
Affiliations
[1] Xiamen Univ, Sch Elect Sci & Engn & Natl Local Joint Engn Res C, Xiamen 361000, Peoples R China
[2] Xiamen Univ, Sch Informat & Natl Local Joint Engn Res Ctr Nav &, Xiamen 361000, Peoples R China
[3] Southeast Univ, Natl Mobile Commun Res Lab, Nanjing 210096, Peoples R China
[4] Heriot Watt Univ, Sch Engn & Phys Sci, Edinburgh EH14 4AS, Scotland
[5] Peking Univ, Sch Elect, State Key Lab Adv Opt Commun Syst & Networks, Beijing 100871, Peoples R China
Source
IEEE INTERNET OF THINGS JOURNAL | 2024, Vol. 11, Issue 9
Funding
National Natural Science Foundation of China
Keywords
Data compression; distributed detection; image classification; image representations; neural networks; quantization; ARTIFICIAL-INTELLIGENCE; INTERNET;
DOI
10.1109/JIOT.2024.3354312
CLC Number
TP [Automation and Computer Technology]
Discipline Code
0812
Abstract
Semantic communications is a key enabler of the Internet of Things (IoT). By focusing on the semantic meaning of data rather than bit-level recovery, it allows intelligent agents to communicate the necessary information at much lower rates. A promising technique for semantic communications is discrete neural representation learning (DNRL), whose main idea is to learn discrete symbols from low-level, high-dimensional sensory data such that each symbol is grounded in a meaningful pattern in the sensory domain. This article proposes a DNRL scheme that integrates three mechanisms into a coherent framework: 1) contrastive learning; 2) sparse coding; and 3) neural index quantization. The proposed scheme is applied to public image data sets for lossy image compression with a downstream classification task. Results show that the proposed approach produces a highly compact continuous latent representation and a semantic discrete representation, with only marginal degradation in classification accuracy. The interpretability and consistency of the learned subsymbolic discrete representations are validated by experiments on neural-network dissection, neural-network visualization, and the MaxAmp-K classification test, a concept we propose to evaluate the classification performance of extremely compressed signals. Finally, the discrete representations are shown to be useful in rate-adaptive distributed sensing applications at low-to-medium signal-to-noise ratios (SNRs).
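The core operation behind index-based quantization of neural latents can be illustrated with a minimal sketch. This is a generic vector-quantization example, not the paper's actual method: the codebook size, latent dimension, and variable names are illustrative assumptions. Each continuous latent vector is mapped to the index of its nearest codebook entry, so the discrete representation transmitted over the channel is a short sequence of symbol indices.

```python
import numpy as np

# Hypothetical minimal sketch of index quantization (VQ-style lookup).
# Codebook size (16) and latent dimension (4) are illustrative only.
rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 4))   # 16 discrete symbols, each a 4-dim vector
latents = rng.normal(size=(5, 4))     # 5 continuous latent vectors from an encoder

# Squared Euclidean distance from every latent to every codebook entry
d2 = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
indices = d2.argmin(axis=1)           # discrete representation: one index per latent
quantized = codebook[indices]         # continuous approximation recovered at the receiver

print(indices.shape, quantized.shape)  # (5,) (5, 4)
```

Only `indices` (here, 5 symbols drawn from an alphabet of 16, i.e., 4 bits each) would need to be transmitted; the receiver looks up the shared codebook to recover `quantized`.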
Pages: 16061-16073 (13 pages)