A Novel Two-stage Separable Deep Learning Framework for Practical Blind Watermarking

Cited by: 102
Authors
Liu, Yang [1]
Guo, Mengxi [1]
Zhang, Jian [1,3]
Zhu, Yuesheng [1]
Xie, Xiaodong [2]
Affiliations
[1] Peking Univ, Sch Elect & Comp Engn, Shenzhen, Peoples R China
[2] Peking Univ, Sch Elect Engn & Comp Sci, Beijing, Peoples R China
[3] Peng Cheng Lab, Shenzhen, Peoples R China
Source
PROCEEDINGS OF THE 27TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA (MM'19) | 2019
Keywords
robust blind watermarking; neural networks; black-box noise
DOI
10.1145/3343031.3351025
Chinese Library Classification (CLC)
TP39 [Computer applications]
Subject classification codes
081203; 0835
Abstract
As a vital copyright protection technology, blind watermarking based on deep learning with an end-to-end encoder-decoder architecture has recently been proposed. Although one-stage end-to-end training (OET) facilitates the joint learning of the encoder and decoder, the noise attack must be simulated in a differentiable way, which is not always feasible in practice. In addition, OET often converges slowly and tends to degrade the quality of watermarked images under noise attack. To address these problems and improve the practicality and robustness of such algorithms, this paper proposes a novel two-stage separable deep learning (TSDL) framework for practical blind watermarking. Specifically, the TSDL framework consists of noise-free end-to-end adversary training (FEAT) and noise-aware decoder-only training (ADOT). A redundant multi-layer feature encoding network is developed in FEAT to obtain the encoder, while ADOT is used to obtain a decoder that is robust and practical enough to accept any type of noise. Extensive experiments demonstrate that the proposed framework not only exhibits better stability, higher performance, and faster convergence than current state-of-the-art OET methods, but can also resist high-intensity noise that has not been tested in previous works.
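The two-stage split described in the abstract can be illustrated with a minimal PyTorch sketch. This is not the paper's actual method: the `Encoder`, `Decoder`, `black_box_attack`, and dummy `loader` below are hypothetical stand-ins, the real redundant multi-layer feature encoding network is far richer, and the adversarial discriminator used in FEAT is omitted. The sketch only shows the training schedule: stage 1 (FEAT) trains encoder and decoder jointly on noise-free images; stage 2 (ADOT) freezes the encoder and retrains only the decoder on attacked images.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Hypothetical embedder: cover image + message bits -> watermarked image."""
    def __init__(self, msg_len=30):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + msg_len, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, img, msg):
        # Tile the message bits over the spatial dimensions and concatenate.
        m = msg[:, :, None, None].expand(-1, -1, img.size(2), img.size(3))
        return img + self.net(torch.cat([img, m], dim=1))

class Decoder(nn.Module):
    """Hypothetical extractor: image -> message-bit logits."""
    def __init__(self, msg_len=30):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, msg_len),
        )

    def forward(self, img):
        return self.net(img)

def black_box_attack(img):
    # Placeholder for any attack, including non-differentiable ones such as
    # real JPEG compression; no gradient ever flows through it in stage 2.
    return (img + 0.05 * torch.randn_like(img)).clamp(0, 1)

# Dummy data: batches of (cover images, 0/1 message bits).
loader = [(torch.rand(4, 3, 64, 64), torch.randint(0, 2, (4, 30)).float())
          for _ in range(10)]

enc, dec = Encoder(), Decoder()
mse, bce = nn.MSELoss(), nn.BCEWithLogitsLoss()

# Stage 1 (FEAT): joint, noise-free end-to-end training of encoder + decoder.
opt1 = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
for img, msg in loader:
    stego = enc(img, msg)
    loss = mse(stego, img) + bce(dec(stego), msg)
    opt1.zero_grad(); loss.backward(); opt1.step()

# Stage 2 (ADOT): freeze the encoder; train only the decoder on attacked images.
for p in enc.parameters():
    p.requires_grad_(False)
opt2 = torch.optim.Adam(dec.parameters(), lr=1e-3)
for img, msg in loader:
    with torch.no_grad():  # the attack may be non-differentiable
        attacked = black_box_attack(enc(img, msg))
    loss = bce(dec(attacked), msg)
    opt2.zero_grad(); loss.backward(); opt2.step()
```

Because gradients only need to reach the decoder in stage 2, the attack can be an arbitrary black-box operation, which is what lets ADOT handle noise that a differentiable simulation layer in OET cannot.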
Pages: 1509-1517
Number of pages: 9