Mish-DCTGAN based combined image super-resolution and deblurring approach for blurry license plates

Cited by: 0
Authors
Pattanaik A. [1 ]
Balabantaray R.C. [1 ]
Affiliations
[1] International Institute of Information Technology Bhubaneswar, Bhubaneswar
Keywords
Deblurring; Generative adversarial network; License plate; Mish activation function
DOI
10.1007/s41870-023-01322-7
Abstract
There is a growing demand for high-definition images with fine textures, yet images captured in natural settings frequently suffer from complex blur artifacts. Because these artifacts significantly degrade the visual quality of images, deblurring methods have been developed from a variety of perspectives. Blind motion deblurring, which attempts to restore a sharp image from a blurred one with no knowledge of the blurring process, is a fundamental and difficult problem in image processing and computer vision. Numerous existing methods address such problems, but they cannot handle the high-frequency characteristics of natural images, since real-world images are often low resolution and blurred in various ways. This article presents a technique for recognizing vehicle license plates captured by surveillance cameras under natural conditions, an important task in intelligent transportation systems. The observed plate images are frequently of low resolution and suffer from considerable edge loss, posing a significant barrier to existing blind deblurring algorithms. We present a discrete cosine transform (DCT) generative adversarial network (DCTGAN) with a Mish activation function, called Mish-DCTGAN, that jointly performs image super-resolution and non-uniform deblurring. We evaluated the proposed approach on license plate (LP) datasets and compared the results with existing methodologies. Our experiments show that Mish-DCTGAN achieves the best performance in terms of PSNR and SSIM. © 2023, The Author(s), under exclusive licence to Bharati Vidyapeeth's Institute of Computer Applications and Management.
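The full Mish-DCTGAN architecture is not reproduced in this abstract, but two of its named ingredients are standard and can be sketched. Below is a minimal PyTorch illustration (the class `Mish` and helper `psnr` are illustrative names, not code from the paper) of the Mish activation, Mish(x) = x · tanh(softplus(x)), and the PSNR metric used in the evaluation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Mish(nn.Module):
    """Mish activation (Misra, 2019): x * tanh(softplus(x)).

    Smooth and non-monotonic; the paper substitutes it into a
    DCT-based GAN (architecture details are in the paper, not
    this sketch).
    """
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * torch.tanh(F.softplus(x))

def psnr(reference: torch.Tensor, restored: torch.Tensor,
         max_val: float = 1.0) -> torch.Tensor:
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = torch.mean((reference - restored) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)
```

Note that PyTorch 1.9+ ships an equivalent built-in, torch.nn.Mish, with the same definition.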
Pages: 2767-2775
Page count: 8