High-Precision Reversible Data Hiding Predictor: UCANet

Cited by: 0
Authors
Rao, Haiyang [1 ]
Weng, Shaowei [2 ]
Yu, Lifang [3 ]
Li, Li [4 ]
Cao, Gang [5 ]
Affiliations
[1] Fujian Univ Technol, Sch Elect Elect Engn & Phys, Fuzhou 350118, Peoples R China
[2] Fujian Univ Technol, Sch Comp Sci & Math, Fuzhou 350118, Peoples R China
[3] Beijing Inst Graph Commun, Dept Informat Engn, Beijing 100026, Peoples R China
[4] Hangzhou Dianzi Univ, Sch Comp Sci & Technol, Hangzhou 310018, Peoples R China
[5] Commun Univ China, Sch Comp & Cyber Sci, Beijing 100024, Peoples R China
Keywords
Convolution; Feature extraction; Standards; Training; Image reconstruction; Fuses; Accuracy; Reversible data hiding; deep learning; U-Net; predictor; WATERMARKING; EXPANSION;
DOI
10.1109/LSP.2024.3447215
CLC Classification
TM [Electrical Engineering]; TN [Electronics & Communication Technology];
Subject Classification
0808 ; 0809 ;
Abstract
Existing convolutional neural network-based reversible data hiding (RDH) predictors typically stack standard convolution blocks with stride 1 for feature extraction, keeping the sizes of the input and output feature maps unchanged through padding. As a result, only a limited range of contextual spatial information is captured. To remedy this problem, a U-Net-like RDH predictor named UCANet is proposed in this paper to capture rich multi-scale contextual information by gradually downsampling feature maps. To fuse two feature maps at different levels along the channel dimension, we put forward a channel adaptive attention (CAA) module. By combining only cheap pointwise convolution operations, CAA integrates non-linear and linear features and implicitly enhances channel dimensionality at low computational cost, thereby effectively enriching the expression of channel information. The design of UCANet considers the characteristics of RDH in two respects. On the one hand, instead of the max pooling or average pooling commonly used for downsampling, a stride-2 convolution block that can adaptively adjust the weights of its convolution kernels and select useful information is used to downsample feature maps. On the other hand, UCANet removes the batch normalization layers to avoid their influence on the distribution of feature maps, which helps to strengthen the network's prediction capability. Extensive experiments demonstrate that the proposed UCANet achieves better prediction performance than several state-of-the-art methods.
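The abstract's building blocks — pointwise (1x1) convolution, learned stride-2 convolutional downsampling in place of pooling, and channel-wise fusion of two feature maps — can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the function names, shapes, and the toy gating scheme in `caa_fuse` are illustrative assumptions, not the actual CAA design.

```python
import numpy as np


def pointwise_conv(x, w):
    """1x1 (pointwise) convolution: a per-pixel linear map over channels.

    x: feature map of shape (C_in, H, W); w: weights of shape (C_out, C_in).
    """
    return np.tensordot(w, x, axes=([1], [0]))  # -> (C_out, H, W)


def strided_conv_downsample(x, w, stride=2):
    """Stride-2 convolution used in place of pooling (valid padding).

    x: (C_in, H, W); w: (C_out, C_in, k, k). Unlike max/average pooling,
    the learned kernel weights decide what information survives downsampling.
    """
    c_out, c_in, k, _ = w.shape
    _, h, wd = x.shape
    oh, ow = (h - k) // stride + 1, (wd - k) // stride + 1
    out = np.empty((c_out, oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = x[:, i * stride:i * stride + k, j * stride:j * stride + k]
            out[:, i, j] = np.tensordot(w, patch, axes=([1, 2, 3], [0, 1, 2]))
    return out


def caa_fuse(low, high, w_gate, w_mix):
    """Hypothetical channel-fusion gate in the spirit of CAA: concatenate two
    same-resolution feature maps along channels, then mix them with two
    pointwise convolutions -- one passed through a sigmoid (non-linear branch)
    and one kept linear -- so only cheap 1x1 operations are involved.
    """
    cat = np.concatenate([low, high], axis=0)                  # (2C, H, W)
    gate = 1.0 / (1.0 + np.exp(-pointwise_conv(cat, w_gate)))  # non-linear branch
    return pointwise_conv(cat, w_mix) * gate                   # gated linear branch
```

The stride-2 path halves spatial resolution while its kernel weights are trainable, which is the abstract's stated reason for preferring it over fixed pooling.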
Pages: 2155-2159 (5 pages)