A lightweight and style-robust neural network for autonomous driving in end side devices

Cited by: 4
Authors
Han, Sheng [1 ]
Lin, Youfang [1 ]
Guo, Zhihui [1 ]
Lv, Kai [1 ]
Affiliations
[1] Beijing Jiaotong Univ, Sch Comp & Informat Technol, Beijing 100044, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Autonomous driving; imitation learning; image translation; StarGAN-V2;
DOI
10.1080/09540091.2022.2155613
Chinese Library Classification
TP18 [Artificial intelligence theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
The autonomous driving algorithm studied in this paper makes a ground vehicle capable of sensing its environment via visual images and moving safely with little or no human input. Because of the limited computing power of end side devices, an autonomous driving algorithm should adopt a lightweight model while maintaining high performance. Conditional imitation learning has proven to be an efficient and promising policy for autonomous driving and other applications on end side devices, owing to its high performance and offline characteristics. In driving scenarios, images captured under different weather conditions have different styles, shaped by interference factors such as illumination and raindrops. These interference factors challenge the perception ability of deep models and thus affect the decision-making process in autonomous driving. The first contribution of this paper is an investigation of the performance gap of driving models under different weather conditions. Following the investigation, we utilise StarGAN-V2 to translate images from source domains into the target clear-sunset domain. Based on the images translated by StarGAN-V2, we propose a conditional imitation learning model with a ResNet backbone, named Star-CILRS. The proposed method can convert an image to multiple styles using only a single model, making it easier to deploy on end side devices. Visualisation results show that Star-CILRS can eliminate some environmental interference factors. Our method outperforms other methods, with success rates of 98%, 74%, and 22% in different tasks, respectively.
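The two-stage pipeline the abstract describes — translate each camera frame into the clear-sunset style with a StarGAN-V2-style generator, then feed the translated frame to a command-conditioned imitation policy — can be sketched roughly as below. This is a minimal toy illustration, not the authors' implementation: all module names, layer sizes, and the simplified generator and policy are hypothetical stand-ins (the real Star-CILRS uses a full StarGAN-V2 generator and a ResNet backbone).

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a pretrained StarGAN-V2 generator: maps a
# source-weather image plus a target-domain style code to a same-sized
# image in the "clear sunset" style.
class StyleTranslator(nn.Module):
    def __init__(self, style_dim=64):
        super().__init__()
        self.conv = nn.Conv2d(3, 3, kernel_size=3, padding=1)
        self.style_proj = nn.Linear(style_dim, 3)

    def forward(self, img, style_code):
        # Modulate channels by the style code (a toy stand-in for AdaIN).
        gain = self.style_proj(style_code).unsqueeze(-1).unsqueeze(-1)
        return torch.tanh(self.conv(img) * (1 + gain))

# Hypothetical CILRS-style policy: an image encoder followed by one
# control head per high-level navigation command.
class CILRSPolicy(nn.Module):
    def __init__(self, n_commands=4, n_controls=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.heads = nn.ModuleList(
            [nn.Linear(16, n_controls) for _ in range(n_commands)]
        )

    def forward(self, img, command):
        feat = self.encoder(img)
        # Select the branch matching each sample's navigation command.
        return torch.stack(
            [self.heads[c](feat[i]) for i, c in enumerate(command.tolist())]
        )  # per-sample (steer, throttle, brake)

translator, policy = StyleTranslator(), CILRSPolicy()
img = torch.rand(2, 3, 88, 200)     # batch of camera frames
style = torch.randn(2, 64)          # target "clear sunset" style codes
command = torch.tensor([0, 2])      # per-sample navigation commands
controls = policy(translator(img, style), command)
print(controls.shape)               # torch.Size([2, 3])
```

Because only one generator handles every source weather, the deployed model carries a single set of translation weights, which is what makes the approach attractive for memory-constrained end side devices.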
Pages: 18
Cited References
31 in total
[1]   Reliable Estimation for Health Index of Transformer Oil Based on Novel Combined Predictive Maintenance Techniques [J].
Badawi, Mohamed ;
Ibrahim, Shimaa A. ;
Mansour, Diaa-Eldin A. ;
El-Faraskoury, Adel A. ;
Ward, Sayed A. ;
Mahmoud, Karar ;
Lehtonen, Matti ;
Darwish, Mohamed M. F. .
IEEE ACCESS, 2022, 10 :25954-25972
[2]  
Bai X., 2022, 3D DATA COMPUTATION
[3]   Explainable deep learning for efficient and robust pattern recognition: A survey of recent developments [J].
Bai, Xiao ;
Wang, Xiang ;
Liu, Xianglong ;
Liu, Qiang ;
Song, Jingkuan ;
Sebe, Nicu ;
Kim, Been .
PATTERN RECOGNITION, 2021, 120
[4]   Anomaly Detection in Autonomous Driving: A Survey [J].
Bogdoll, Daniel ;
Nitsche, Maximilian ;
Zoellner, J. Marius .
2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2022, 2022, :4487-4498
[5]   StarGAN v2: Diverse Image Synthesis for Multiple Domains [J].
Choi, Yunjey ;
Uh, Youngjung ;
Yoo, Jaejun ;
Ha, Jung-Woo .
2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2020), 2020, :8185-8194
[6]   StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation [J].
Choi, Yunjey ;
Choi, Minje ;
Kim, Munyoung ;
Ha, Jung-Woo ;
Kim, Sunghun ;
Choo, Jaegul .
2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, :8789-8797
[7]   Exploring the Limitations of Behavior Cloning for Autonomous Driving [J].
Codevilla, Felipe ;
Santana, Eder ;
Lopez, Antonio M. ;
Gaidon, Adrien .
2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, :9328-9337
[8]  
Codevilla F, 2018, IEEE INT CONF ROBOT, P4693
[9]  
Dosovitskiy A, 2017, PR MACH LEARN RES, V78
[10]   Improved grey wolf optimizer based on opposition and quasi learning approaches for optimization: case study autonomous vehicle including vision system [J].
Elsisi, M. .
ARTIFICIAL INTELLIGENCE REVIEW, 2022, 55 (07) :5597-5620