Reliability in Semantic Segmentation: Can We Use Synthetic Data?

Times Cited: 0
Authors
Loiseau, Thibaut [1 ,2 ]
Vu, Tuan-Hung [1 ]
Chen, Mickael [1 ]
Perez, Patrick [3 ]
Cord, Matthieu [1 ,4 ]
Affiliations
[1] Valeo.ai, Paris, France
[2] Univ Gustave Eiffel, LIGM, CNRS, Ecole des Ponts, Marne-la-Vallee, France
[3] Kyutai, Paris, France
[4] Sorbonne Univ, Paris, France
Source
COMPUTER VISION - ECCV 2024, PT XXIII | 2025, Vol. 15081
Keywords
DOI
10.1007/978-3-031-73337-6_25
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Assessing the robustness of perception models to covariate shifts and their ability to detect out-of-distribution (OOD) inputs is crucial for safety-critical applications such as autonomous vehicles. By the nature of such applications, however, the relevant data are difficult to collect and annotate. In this paper, we show for the first time how synthetic data can be specifically generated to comprehensively assess the real-world reliability of semantic segmentation models. By fine-tuning Stable Diffusion [31] with only in-domain data, we perform zero-shot generation of visual scenes in OOD domains or inpainted with OOD objects. This synthetic data is used to evaluate the robustness of pretrained segmenters, offering insights into their performance when confronted with real edge cases. Through extensive experiments, we demonstrate a strong correlation between the performance of models evaluated on our synthetic OOD data and their performance on real OOD inputs, showing the relevance of such virtual testing. Furthermore, we show how our approach can be used to improve the calibration and OOD detection capabilities of segmenters. Code and data are made publicly available.
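The abstract describes two steps that can be sketched in code: generating OOD test images by inpainting unusual objects into in-domain scenes with a diffusion model, and checking that segmenter scores on the synthetic OOD data track their scores on real OOD benchmarks. The following is a minimal Python sketch, not the authors' pipeline: it assumes the off-the-shelf stabilityai/stable-diffusion-2-inpainting checkpoint from the diffusers library in place of the paper's fine-tuned Stable Diffusion model, and the file names, prompt, and mIoU values are placeholders chosen purely for illustration.

import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline
from scipy.stats import pearsonr

# Off-the-shelf inpainting checkpoint as a stand-in for the paper's
# fine-tuned Stable Diffusion model (assumption, not the authors' weights).
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
).to("cuda")

scene = Image.open("scene.png").convert("RGB").resize((512, 512))  # in-domain street scene (placeholder file)
mask = Image.open("mask.png").convert("L").resize((512, 512))      # white = region to replace (placeholder file)

# Inpaint an out-of-distribution object into the in-domain scene.
ood_scene = pipe(
    prompt="a photo of a deer standing on the road",
    image=scene,
    mask_image=mask,
).images[0]
ood_scene.save("ood_scene.png")

# "Virtual testing" sanity check: correlate each segmenter's score on the
# synthetic OOD set with its score on a real OOD benchmark.
# The mIoU values below are illustrative numbers, not results from the paper.
miou_synthetic_ood = [62.1, 58.4, 70.3, 66.7]
miou_real_ood = [60.5, 55.9, 68.8, 64.2]
r, p = pearsonr(miou_synthetic_ood, miou_real_ood)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")

A high Pearson correlation across a pool of pretrained segmenters is what the abstract refers to as evidence that evaluation on synthetic OOD data is a useful proxy for evaluation on real OOD inputs.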
Pages: 442-459
Number of pages: 18
Related papers
50 records in total
[21] Roberto Alcover-Couso; Juan C. SanMiguel; Marcos Escudero-Viñolo; Alvaro Garcia-Martin. On exploring weakly supervised domain adaptation strategies for semantic segmentation using synthetic data. Multimedia Tools and Applications, 2023, 82: 35879-35911.
[22] Halupka, Kerry; Garnavi, Rahil; Moore, Stephen. Deep Semantic Instance Segmentation of Tree-like Structures Using Synthetic Data. 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), 2019: 1713-1722.
[23] Archibald, Taylor; Martinez, Tony. DELINE8K: A Synthetic Data Pipeline for the Semantic Segmentation of Historical Documents. Document Analysis and Recognition - ICDAR 2024, Pt III, 2024, 14806: 289-304.
[24] Kieffer, L. J. Reliability of Property Data, or, Whose Guess Shall We Use. Journal of Chemical Documentation, 1969, 9(03): 167-&.
[25] Noichl, Florian; Collins, Fiona C.; Braun, Alexander; Borrmann, Andre. Enhancing point cloud semantic segmentation in the data-scarce domain of industrial plants through synthetic data. Computer-Aided Civil and Infrastructure Engineering, 2024, 39(10): 1530-1549.
[26] Cong, Dechun; Zhou, Quan; Cheng, Jie; Wu, Xiaofu; Zhang, Suofei; Ou, Weihua; Lu, Huimin. CAN: Contextual Aggregating Network for Semantic Segmentation. 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019: 1892-1896.
[27] He, Wenbin; Zou, Lincan; Shekar, Arvind Kumar; Gou, Liang; Ren, Liu. Where Can We Help? A Visual Analytics Approach to Diagnosing and Improving Semantic Segmentation of Movable Objects. IEEE Transactions on Visualization and Computer Graphics, 2022, 28(01): 1040-1050.
[28] Montalvo, Javier; Carballeira, Pablo; Garcia-Martin, Alvaro. SynthmanticLiDAR: A Synthetic Dataset for Semantic Segmentation on LiDAR Imaging. 2024 IEEE International Conference on Image Processing (ICIP), 2024: 137-143.
[29] Xu, Yuanyou; Wang, Kaiwei; Yang, Kailun; Sun, Dongming; Fu, Jia. Semantic Segmentation of Panoramic Images Using a Synthetic Dataset. Artificial Intelligence and Machine Learning in Defense Applications, 2019, 11169.
[30] Nuhman Ul Haq; Zia ur Rehman; Ahmad Khan; Ahmad Din; Sajid Shah; Abrar Ullah; Fawad Qayum. Impact of data smoothing on semantic segmentation. Neural Computing and Applications, 2022, 34: 8345-8354.