OFPF-MEF: An Optical Flow Guided Dynamic Multi-Exposure Image Fusion Network With Progressive Frequencies Learning

Cited by: 1
Authors
Hong, Wenhui [1 ]
Zhang, Hao [1 ]
Ma, Jiayi [1 ]
Affiliations
[1] Wuhan University, Electronic Information School, Wuhan 430072, China
Funding
National Natural Science Foundation of China
Keywords
Image color analysis; optical imaging; feature extraction; optical distortion; optical fiber networks; image fusion; adaptive optics; multi-exposure; optical flow; dynamic; Swin Transformer; dense SIFT
DOI
10.1109/TMM.2024.3379883
CLC Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
In this paper, we propose a novel optical flow guided network with progressive frequencies learning, achieving promising dynamic multi-exposure image fusion. Specifically, the proposed method consists of an optical flow alignment block and a progressive frequencies fusion block, where the former alleviates the ghosting caused by camera and object motion, and the latter is dedicated to synthesizing the desired color and details. First, in the optical flow alignment block, we estimate the optical flow between the two source images and employ a deformable convolutional network to achieve spatial alignment guided by the estimated flow. Second, in the progressive frequencies fusion block, color correction and detail preservation are carried out in two gradual phases, i.e., low and high frequencies. In the low frequency fusion phase, we combine a convolutional neural network and a Swin Transformer to capture local and global features, so that color correction is considered from a complete perspective. In the high frequency fusion phase, an attention gate is designed to evaluate the important details from the source images, producing fewer artifacts and halos. Finally, the low and high frequency fusion phases are connected through a residual mapping strategy to generate a desired image with reasonable colors and rich details. Extensive experiments on publicly available datasets reveal that our method outperforms the state of the art in both static and dynamic scenarios. Moreover, our method surpasses most state-of-the-art methods in running efficiency.
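The two-phase low/high frequency scheme with residual reconstruction described in the abstract can be illustrated with a minimal NumPy toy. This is not the paper's network: a box filter stands in for the learned low-frequency branch, a simple average for the CNN/Swin Transformer color fusion, and a max-magnitude mask for the attention gate. The function names (`box_blur`, `fuse_frequencies`) are hypothetical.

```python
import numpy as np

def box_blur(img, k=5):
    # Separable-equivalent box filter: a cheap low-pass proxy for the
    # learned low-frequency branch in the actual network.
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    h, w = img.shape
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

def fuse_frequencies(a, b, k=5):
    """Toy two-phase fusion of two aligned grayscale exposures:
    average the low frequencies, keep the stronger high-frequency
    detail per pixel, then add them back (residual mapping)."""
    low_a, low_b = box_blur(a, k), box_blur(b, k)
    high_a, high_b = a - low_a, b - low_b
    low_fused = 0.5 * (low_a + low_b)            # low-frequency (color) phase
    mask = np.abs(high_a) >= np.abs(high_b)      # crude stand-in for the attention gate
    high_fused = np.where(mask, high_a, high_b)  # high-frequency (detail) phase
    return low_fused + high_fused                # residual reconstruction
```

Note the sanity property: fusing an image with itself returns the image, since decomposition followed by residual addition is lossless.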
Pages: 8581-8595 (15 pages)