Learning Adaptive Patch Generators for Mask-Robust Image Inpainting

Cited by: 9
Authors
Sun, Hongyi [1 ,2 ]
Li, Wanhua [1 ,2 ]
Duan, Yueqi [1 ,2 ]
Zhou, Jie [1 ,2 ]
Lu, Jiwen [1 ,2 ]
Affiliations
[1] Tsinghua Univ, Beijing Natl Res Ctr Informat Sci & Technol BNRist, Beijing 100084, Peoples R China
[2] Tsinghua Univ, Dept Automation, Beijing 100084, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Image inpainting; mask-robust agent; adaptive patch generators; FILLING-IN;
DOI
10.1109/TMM.2022.3174413
CLC classification
TP [Automation and computer technology];
Discipline code
0812 ;
Abstract
In this paper, we propose a Mask-Robust Inpainting Network (MRIN) approach to recover the masked areas of an image. Most existing methods learn a single model for image inpainting, under the basic assumption that all masks are of the same type. However, we observe that masks are usually complex, exhibiting various shapes and sizes at different locations of an image, and that a single model cannot fully capture the large domain gap across different masks. To address this, we learn to decompose a complex mask area into several basic types and recover the damaged image in a patch-wise manner with type-specific generators. More specifically, our MRIN consists of a mask-robust agent and an adaptive patch generative network. The mask-robust agent contains a mask selector and a patch locator, which generate mask attention maps to select a patch at each step. Based on the predicted mask attention maps, the adaptive patch generative network inpaints the selected patch with a generator drawn from a generator bank, so that each patch is sequentially inpainted by a different patch generator according to its mask type. Extensive experiments demonstrate that our approach outperforms most state-of-the-art approaches on the Places2, CelebA, and Paris Street View datasets.
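The patch-wise routing idea from the abstract can be illustrated with a minimal sketch: classify each patch's mask into a coarse type, then dispatch the patch to the corresponding generator in a bank. This is not the authors' code; the function names, the three mask types, and the hole-ratio heuristic below are all hypothetical stand-ins for the learned mask-robust agent and generator bank described in the paper.

```python
# Illustrative sketch (hypothetical, not the MRIN implementation):
# route each masked patch to a type-specific generator from a "bank".

def classify_mask_type(patch_mask):
    """Coarsely bucket a binary patch mask (1 = missing pixel) by hole ratio.
    In MRIN this decision is made by a learned mask-robust agent; a fixed
    threshold rule is used here purely for illustration."""
    total = len(patch_mask) * len(patch_mask[0])
    ratio = sum(sum(row) for row in patch_mask) / total
    if ratio < 0.25:
        return "thin"      # e.g. scratches or free-form strokes
    if ratio < 0.75:
        return "partial"   # mixed, mid-sized holes
    return "block"         # large block-shaped holes

# Hypothetical generator bank: one generator per basic mask type.
# Each generator is a stub that tags the patch with the type that handled it.
GENERATOR_BANK = {
    "thin":    lambda patch, mask: ("thin", patch),
    "partial": lambda patch, mask: ("partial", patch),
    "block":   lambda patch, mask: ("block", patch),
}

def inpaint_patchwise(patches, masks):
    """Sequentially inpaint each patch with the generator matching its mask type."""
    results = []
    for patch, mask in zip(patches, masks):
        generator = GENERATOR_BANK[classify_mask_type(mask)]
        results.append(generator(patch, mask))
    return results
```

The key design point the sketch mirrors is that inpainting quality hinges on matching each patch to a generator specialized for its mask type, rather than forcing one model to cover the full range of mask shapes and sizes.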
Pages: 4240 - 4252
Page count: 13
Related papers
50 in total
  • [21] Parallel adaptive guidance network for image inpainting
    Jiang, Jinyang
    Dong, Xiucheng
    Li, Tao
    Zhang, Fan
    Qian, Hongjiang
    Chen, Guifang
    APPLIED INTELLIGENCE, 2023, 53 (01) : 1162 - 1179
  • [22] Deep learning for image inpainting: A survey
    Xiang, Hanyu
    Zou, Qin
    Nawaz, Muhammad Ali
    Huang, Xianfeng
    Zhang, Fan
    Yu, Hongkai
    PATTERN RECOGNITION, 2023, 134
  • [23] Study Of Image Inpainting Based On Learning
    Liu, Huaming
    Wang, Weilan
    Bi, Xuehui
    INTERNATIONAL MULTICONFERENCE OF ENGINEERS AND COMPUTER SCIENTISTS (IMECS 2010), VOLS I-III, 2010, : 1442 - 1445
  • [25] Patch-guided facial image inpainting by shape propagation
    Zhuang, Yue-ting
    Wang, Yu-shun
    Shih, Timothy K.
    Tang, Nick C.
    JOURNAL OF ZHEJIANG UNIVERSITY-SCIENCE A, 2009, 10 (02) : 232 - 238
  • [26] RECONSTRUCTION OF IMAGES WITH EXEMPLAR BASED IMAGE INPAINTING AND PATCH PROPAGATION
    Ishi, Manoj S.
    Singh, Lokesh
    Agrawal, Manish
    2014 INTERNATIONAL CONFERENCE ON INFORMATION COMMUNICATION AND EMBEDDED SYSTEMS (ICICES), 2014,
  • [27] Image Inpainting with Group Based Sparse Representation using Self Adaptive Dictionary Learning
    Rao, T. J. V. Subrahmanyeswara
    Rao, M. Venu Gopala
    Aswini, T. V. N. L.
    2015 INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING AND COMMUNICATION ENGINEERING SYSTEMS (SPACES), 2015, : 301 - 305
  • [28] AFTLNet: An efficient adaptive forgery traces learning network for deep image inpainting localization
    Ding, Xiangling
    Deng, Yingqian
    Zhao, Yulin
    Zhu, Wenyi
    JOURNAL OF INFORMATION SECURITY AND APPLICATIONS, 2024, 84
  • [29] Pixel and Patch Reordering for Fast Patch Selection in Exemplar-Based Image Inpainting
    Kim, Baeksop
    Kim, Jiseong
    So, Jungmin
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2013, E96D (12) : 2892 - 2895