Optical Flow Estimation in Dense Foggy Scenes with Domain-Adaptive Networks

Cited by: 0
Authors
Yan W. [1]
Sharma A. [1,2]
Tan R.T. [1]
Affiliations
[1] National University of Singapore, Electrical and Computer Engineering Department, Singapore
[2] Agency for Science, Technology and Research, Singapore
Source
IEEE Transactions on Artificial Intelligence | 2023 / Vol. 4 / No. 6
Keywords
Fog degraded videos; optical flow; semisupervised deep learning
DOI
10.1109/TAI.2022.3221064
Abstract
Estimating optical flow in dense foggy scenes is a challenging task: the basic assumptions for computing flow, such as brightness and gradient constancy, become invalid. To address this problem, we introduce a semisupervised deep learning method that can learn from real fog images without requiring the corresponding optical flow ground truths. Our method is a multitask network that integrates a domain transformation network and an optical flow network in one framework, where the domain transformation is performed between the foggy and clean domains. Under our semisupervised training strategy, we first train the network in a supervised manner on a pair of synthetic fog images, their corresponding clean images, and the optical flow ground truths. Then, in the next training batch, given a pair of real fog images and a pair of clean images that do not correspond to each other (unpaired), we train the network in an unsupervised manner. The supervised and unsupervised training processes are alternated iteratively. Because our method relies on unsupervised learning for real data, it can also be used for test-time training, and our experiments show that test-time training further improves the results on our test data. Extensive experiments show that our method outperforms state-of-the-art methods in estimating optical flow in dense foggy scenes. © 2020 IEEE.
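For concreteness, the alternating supervised/unsupervised training described in the abstract can be sketched roughly as follows. This is not the authors' implementation: the network stubs (DefogStub, FlowStub), the warping-based photometric loss, the L1 losses, and the random tensors standing in for data loaders are all illustrative assumptions; the paper's actual architectures and losses are given in the full text.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DefogStub(nn.Module):
    """Placeholder for the fog-to-clean domain transformation network."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 3, kernel_size=3, padding=1)

    def forward(self, foggy):
        return self.conv(foggy)

class FlowStub(nn.Module):
    """Placeholder for the optical flow network (predicts a 2-channel flow field)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(6, 2, kernel_size=3, padding=1)

    def forward(self, img1, img2):
        return self.conv(torch.cat([img1, img2], dim=1))

def warp(img, flow):
    """Backward-warp img with the predicted flow using a normalized sampling grid."""
    n, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack([xs, ys], dim=0).float().unsqueeze(0).expand(n, -1, -1, -1)
    coords = base + flow
    gx = 2.0 * coords[:, 0] / (w - 1) - 1.0
    gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
    return F.grid_sample(img, torch.stack([gx, gy], dim=-1), align_corners=True)

defog, flow_net = DefogStub(), FlowStub()
opt = torch.optim.Adam(list(defog.parameters()) + list(flow_net.parameters()), lr=1e-4)

for step in range(4):  # toy loop; random tensors stand in for real data loaders
    opt.zero_grad()
    a, b = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
    if step % 2 == 0:
        # Supervised batch: synthetic fog pair with paired clean images and flow ground truth.
        clean_a, clean_b = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
        flow_gt = torch.rand(1, 2, 64, 64)
        pa, pb = defog(a), defog(b)
        loss = (F.l1_loss(flow_net(pa, pb), flow_gt)
                + F.l1_loss(pa, clean_a) + F.l1_loss(pb, clean_b))
    else:
        # Unsupervised batch: real fog pair, no ground truth; photometric loss after warping.
        ca, cb = defog(a), defog(b)
        pred_flow = flow_net(ca, cb)
        loss = F.l1_loss(warp(cb, pred_flow), ca)
    loss.backward()
    opt.step()

In practice, the supervised (synthetic) and unsupervised (real) batches would come from separate data loaders, and because the unsupervised objective needs no ground truth it can be reused on the target fog sequence at inference, which is what enables the test-time training mentioned in the abstract.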
Pages: 1777-1788
Page count: 11