Efficient Halftoning via Deep Reinforcement Learning

Cited by: 2
Authors
Jiang, Haitian [1 ]
Xiong, Dongliang [1 ]
Jiang, Xiaowen [1 ]
Ding, Li [2 ]
Chen, Liang [2 ]
Huang, Kai [1 ]
Affiliations
[1] Zhejiang Univ, Inst VLSI Design, Hangzhou 310058, People's Republic of China
[2] Apex Microelect Co Ltd, Zhuhai 519075, People's Republic of China
Keywords
Measurement; Convolutional neural networks; Training; Reinforcement learning; Deep learning; Visualization; Extensibility; Halftoning; Dithering; Blue noise; Error diffusion; Visibility
DOI
10.1109/TIP.2023.3318937
CLC number
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Halftoning aims to reproduce a continuous-tone image with pixels whose intensities are constrained to two discrete levels. This technique is deployed on every printer, and most printers adopt fast methods (e.g., ordered dithering, error diffusion) that fail to render the structural details on which halftone quality depends. Prior methods that pursue visual quality by searching for an optimal halftone, on the other hand, suffer from high computational cost. In this paper, we propose a fast, structure-aware halftoning method via a data-driven approach. Specifically, we formulate halftoning as a reinforcement learning problem in which each binary pixel's value is regarded as an action chosen by a virtual agent with a shared fully convolutional neural network (CNN) policy. In the offline phase, an effective gradient estimator is utilized to train the agents to produce high-quality halftones in one action step. Halftones can then be generated online by one fast CNN inference. In addition, we propose a novel anisotropy-suppressing loss function, which yields the desirable blue-noise property. Finally, we find that optimizing SSIM can produce holes in flat areas, which can be avoided by weighting the metric with the continuous-tone image's contrast map. Experiments show that our framework can effectively train a lightweight CNN, which is 15x faster than previous structure-aware methods, to generate blue-noise halftones with satisfactory visual quality. We also present a prototype of deep multitoning to demonstrate the extensibility of our method.
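The fast baselines the abstract mentions, ordered dithering and error diffusion, scan the image once and quantize each pixel locally. As a point of reference for what the learned method improves on, here is a minimal pure-Python sketch of Floyd-Steinberg error diffusion, the classic error-diffusion scheme; the function name and the list-of-rows image representation are illustrative choices, not from the paper.

```python
def floyd_steinberg(img):
    """Classic Floyd-Steinberg error diffusion.

    img: list of rows of floats in [0, 1] (continuous-tone image).
    Returns a same-sized list of rows of 0/1 values (binary halftone).
    """
    h, w = len(img), len(img[0])
    work = [row[:] for row in img]        # mutable working copy
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = work[y][x]
            new = 1 if old >= 0.5 else 0  # quantize to two levels
            out[y][x] = new
            err = old - new
            # Push the quantization error onto unprocessed neighbours,
            # with the standard 7/16, 3/16, 5/16, 1/16 weights.
            if x + 1 < w:
                work[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    work[y + 1][x - 1] += err * 3 / 16
                work[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    work[y + 1][x + 1] += err * 1 / 16
    return out
```

Because the diffused error is conserved (except at the image border), the average halftone intensity tracks the input tone; the sequential scan, however, is exactly what makes such methods blind to larger structures, which is the gap the paper's one-shot CNN policy targets.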
Pages: 5494-5508 (15 pages)