OFPF-MEF: An Optical Flow Guided Dynamic Multi-Exposure Image Fusion Network With Progressive Frequencies Learning

Cited by: 1
Authors
Hong, Wenhui [1 ]
Zhang, Hao [1 ]
Ma, Jiayi [1 ]
Affiliations
[1] Wuhan Univ, Elect Informat Sch, Wuhan 430072, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Image color analysis; Optical imaging; Feature extraction; Optical distortion; Optical fiber networks; Image fusion; Adaptive optics; multi-exposure; optical flow; dynamic; Swin Transformer; dense SIFT;
DOI
10.1109/TMM.2024.3379883
Chinese Library Classification
TP [automation technology; computer technology];
Discipline Classification Code
0812;
Abstract
In this paper, we propose a novel optical flow guided network with progressive frequencies learning, achieving promising dynamic multi-exposure image fusion. Specifically, the proposed method consists of an optical flow alignment block and a progressive frequencies fusion block: the former alleviates ghosting caused by camera and object motion, while the latter is dedicated to synthesizing the desired color and details. First, in the optical flow alignment block, we estimate the optical flow between the two source images and use a deformable convolutional network, guided by the estimated flow, to achieve spatial alignment. Second, in the progressive frequencies fusion block, color correction and detail preservation are carried out in two gradual phases, i.e., low and high frequencies. For the low-frequency fusion phase, we combine a convolutional neural network and a Swin Transformer to capture local and global features, so that color correction is considered from a complete perspective. For the high-frequency fusion phase, an attention gate is designed to evaluate the important details from the source images, producing fewer artifacts and halos. Finally, the low- and high-frequency fusion phases are connected through a residual mapping strategy to generate a desired image with reasonable colors and rich details. Extensive experiments on publicly available datasets show that our method outperforms the state of the art in both static and dynamic scenarios. Moreover, our method runs more efficiently than most state-of-the-art methods.
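The two-phase frequency scheme described in the abstract can be sketched with simple image-processing stand-ins: a low-pass filter splits each exposure into a low-frequency (color/illumination) band and a high-frequency (detail) residual, the two bands are fused separately, and a residual mapping adds the detail back onto the fused base. This is a minimal illustrative sketch only: the weighted average and per-pixel selection below are placeholders for the paper's learned CNN + Swin Transformer and attention-gate modules, and the names `box_blur` and `fuse_progressive` are hypothetical, not from the paper.

```python
import numpy as np

def box_blur(img, k=5):
    # Simple box filter as a stand-in low-pass; the paper's network learns
    # its low-frequency representation rather than using a fixed filter.
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def fuse_progressive(under, over, w_low=0.5):
    # Split each (already aligned) exposure into low and high frequency bands.
    low_u, low_o = box_blur(under), box_blur(over)
    high_u, high_o = under - low_u, over - low_o
    # Low-frequency phase: placeholder weighted average
    # (the paper uses a CNN + Swin Transformer for color correction).
    low_fused = w_low * low_u + (1 - w_low) * low_o
    # High-frequency phase: keep the stronger detail per pixel
    # (the paper uses an attention gate to weigh details).
    high_fused = np.where(np.abs(high_u) >= np.abs(high_o), high_u, high_o)
    # Residual mapping: add the fused detail band back onto the fused base.
    return low_fused + high_fused
```

One useful property of this decomposition: fusing two identical exposures reproduces the input exactly, since the low and high bands sum back to the original image.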
Pages: 8581-8595 (15 pages)