Self-Supervised Model Adaptation for Multimodal Semantic Segmentation

Authors
Abhinav Valada
Rohit Mohan
Wolfram Burgard
Affiliations
[1] University of Freiburg
[2] Toyota Research Institute
Source
International Journal of Computer Vision | 2020, Vol. 128
Keywords
Semantic segmentation; Multimodal fusion; Scene understanding; Model adaptation; Deep learning
DOI
Not available
Abstract
Learning to reliably perceive and understand the scene is an integral enabler for robots to operate in the real world. This problem is inherently challenging due to the multitude of object types as well as appearance changes caused by varying illumination and weather conditions. Leveraging complementary modalities can enable learning of semantically richer representations that are resilient to such perturbations. Despite the tremendous progress in recent years, most multimodal convolutional neural network approaches directly concatenate feature maps from individual modality streams, rendering the model incapable of focusing only on the relevant complementary information for fusion. To address this limitation, we propose a multimodal semantic segmentation framework that dynamically adapts the fusion of modality-specific features while being sensitive to the object category, spatial location and scene context in a self-supervised manner. Specifically, we propose an architecture consisting of two modality-specific encoder streams that fuse intermediate encoder representations into a single decoder using our proposed self-supervised model adaptation (SSMA) fusion mechanism which optimally combines complementary features. As intermediate representations are not aligned across modalities, we introduce an attention scheme for better correlation. In addition, we propose a computationally efficient unimodal segmentation architecture termed AdapNet++ that incorporates a new encoder with multiscale residual units and an efficient atrous spatial pyramid pooling that has a larger effective receptive field with more than 10× fewer parameters, complemented with a strong decoder with a multi-resolution supervision scheme that recovers high-resolution details. Comprehensive empirical evaluations on Cityscapes, Synthia, SUN RGB-D, ScanNet and Freiburg Forest benchmarks demonstrate that both our unimodal and multimodal architectures achieve state-of-the-art performance while simultaneously being efficient in terms of parameters and inference time, as well as demonstrating substantial robustness in adverse perceptual conditions.
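
For intuition, the fusion idea sketched in the abstract can be read as a gating operation: feature maps from the two modality encoders are concatenated, squeezed through a bottleneck that emits per-element weights, reweighted, and projected back to a single stream for the decoder. Below is a minimal PyTorch sketch of that idea; the reduction ratio eta, kernel sizes and exact layer layout are illustrative assumptions rather than the authors' published configuration.

import torch
import torch.nn as nn

class SSMAFusion(nn.Module):
    """Sketch of an SSMA-style fusion block (illustrative, not the paper's exact design)."""

    def __init__(self, channels: int, eta: int = 16):
        super().__init__()
        bottleneck = max(2 * channels // eta, 1)
        # Bottleneck that learns where each modality carries the more informative signal.
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, bottleneck, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(bottleneck, 2 * channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )
        # Projects the reweighted concatenation back to a single decoder stream.
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x_a: torch.Tensor, x_b: torch.Tensor) -> torch.Tensor:
        cat = torch.cat([x_a, x_b], dim=1)   # (N, 2C, H, W)
        weighted = cat * self.gate(cat)      # dynamic, input-dependent gating
        return self.fuse(weighted)           # (N, C, H, W)

# Example: fusing hypothetical RGB and depth encoder features.
rgb = torch.randn(1, 256, 48, 96)
depth = torch.randn(1, 256, 48, 96)
print(SSMAFusion(256)(rgb, depth).shape)  # torch.Size([1, 256, 48, 96])

Because the gating weights are computed from the inputs themselves, the reweighting adapts per spatial location and channel, which is what allows such a fusion block to respond to object category and scene context without additional supervision.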
Pages: 1239–1285 (46 pages)