Steganographic visual story with mutual-perceived joint attention

Cited by: 4
Authors
Guo, Yanyang [1 ]
Wu, Hanzhou [1 ,2 ]
Zhang, Xinpeng [1 ,2 ]
Affiliations
[1] Shanghai Univ, Sch Commun & Informat Engn, Shanghai 200444, Peoples R China
[2] Shanghai Inst Adv Commun & Data Sci, Shanghai 200444, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Steganography; Social networks; Recurrent neural network; Visual story; IMAGE DESCRIPTION;
DOI
10.1186/s13640-020-00543-1
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Subject Classification Code
0808; 0809;
Abstract
Social media plays an increasingly important role in providing information and social support to users. Because content spreads easily and is difficult to track on social networks, we are motivated to study how to conceal sensitive messages in this channel with high confidentiality. In this paper, we design a steganographic visual story generation model that automatically posts stego statuses on social media without direct user intervention, and we use mutual-perceived joint attention (MPJA) to maintain the imperceptibility of the stego text. We demonstrate our approach on the visual storytelling (VIST) dataset and show that it yields high-quality steganographic texts. Since the proposed work realizes steganography by auto-generating visual stories with deep learning, it enables steganography to move onto real-world online social networks through intelligent steganographic bots.
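As a rough intuition for how generative text steganography of this kind typically embeds data, the minimal Python sketch below hides secret bits by choosing among ranked candidate next words at each decoding step. It is not the paper's MPJA model: the toy candidate lists and the helper names toy_next_words, embed_bits, and extract_bits are hypothetical stand-ins for a trained visual-story decoder conditioned on an image sequence.

# Minimal sketch (assumption: generic candidate-selection steganography,
# not the published MPJA model). Secret bits steer which of several plausible
# next words the generator emits, so the stego story is produced, never modified.
from typing import List

def toy_next_words(prefix: List[str]) -> List[str]:
    # Hypothetical stand-in for a trained story decoder: returns candidate next
    # words ranked by (pretend) probability for the current step. A real system
    # would condition on the photo stream and its attention state instead.
    step_candidates = [
        ["the", "a"],
        ["family", "kids"],
        ["spent", "enjoyed"],
        ["the", "that"],
        ["afternoon", "day"],
        ["at the beach.", "in the park."],
    ]
    i = len(prefix)
    return step_candidates[i] if i < len(step_candidates) else ["and", "then"]

def embed_bits(bits: str, length: int = 6) -> List[str]:
    # Each step hides one bit: '0' -> top-ranked candidate, '1' -> runner-up.
    # Steps beyond the payload default to the top-ranked word.
    story = []
    for step in range(length):
        candidates = toy_next_words(story)
        choice = int(bits[step]) if step < len(bits) else 0
        story.append(candidates[choice])
    return story

def extract_bits(story: List[str], n_bits: int) -> str:
    # The receiver replays the same deterministic decoder and recovers which
    # candidate was picked at each step.
    bits, prefix = [], []
    for word in story[:n_bits]:
        candidates = toy_next_words(prefix)
        bits.append(str(candidates.index(word)))
        prefix.append(word)
    return "".join(bits)

if __name__ == "__main__":
    secret = "1011"
    stego = embed_bits(secret)
    print("stego story:", " ".join(stego))   # "a family enjoyed that afternoon at the beach."
    print("recovered  :", extract_bits(stego, len(secret)))  # "1011"

In a real deployment the candidate set would come from the model's output distribution, and imperceptibility hinges on keeping the chosen words close to what the unconstrained generator would have produced, which is the role the abstract assigns to MPJA.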
Pages: 14
Related Papers
3 records
  • [1] Steganographic visual story with mutual-perceived joint attention
    Guo, Yanyang
    Wu, Hanzhou
    Zhang, Xinpeng
    EURASIP Journal on Image and Video Processing, 2021
  • [2] A JOINT MODEL FOR ACTION LOCALIZATION AND CLASSIFICATION IN UNTRIMMED VIDEO WITH VISUAL ATTENTION
    Li, Weimian
    Wang, Wenmin
    Chen, Xiongtao
    Wang, Jinzhuo
    Li, Ge
    2017 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), 2017, : 619 - 624
  • [3] Joint Visual-Textual Sentiment Analysis Based on Cross-Modality Attention Mechanism
    Zhu, Xuelin
    Cao, Biwei
    Xu, Shuai
    Liu, Bo
    Cao, Jiuxin
    MULTIMEDIA MODELING (MMM 2019), PT I, 2019, 11295 : 264 - 276