Research on a multi-task convolutional neural network for AR-HUD

Citations: 0
Authors
Feng M. [1 ]
Bu C. [1 ]
Xiao H. [1 ]
Affiliations
[1] College of Advanced Manufacturing Engineering, Chongqing University of Posts and Telecommunications, Chongqing
Source
Yi Qi Yi Biao Xue Bao / Chinese Journal of Scientific Instrument | 2021 / Vol. 42 / No. 3
Keywords
Augmented reality-head up display(AR-HUD); Multi-task convolutional neural network; Semantic segmentation; Target detection;
DOI
10.19650/j.cnki.cjsi.J2107395
Abstract
AR-HUD has been widely used in automobiles. Its environment perception module must perform target detection, lane segmentation, and other tasks, but running multiple deep neural networks simultaneously consumes too many computing resources. To solve this problem, this paper proposes DYPNet, a lightweight multi-task convolutional neural network for AR-HUD environment perception. DYPNet is built on the YOLOv3-tiny framework and fuses the pyramid pooling module, the DenseNet dense connection structure, and the CSPNet network model, greatly reducing computing resource consumption without sacrificing accuracy. To address the difficulty of training the network, a linear weighted sum loss function based on dynamic loss weights is proposed, which makes the losses of the sub-networks decline and converge synchronously. After training and testing on the open dataset BDD100K, the network achieves a detection mAP of 30% and a segmentation mIoU of 77.14%; after acceleration with TensorRT, it reaches about 15 FPS on the Jetson TX2, which meets the application requirements of AR-HUD. It has been successfully applied to an in-vehicle AR-HUD. © 2021, Science Press. All rights reserved.
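The abstract's "linear weighted sum loss function based on dynamic loss weights" can be sketched in a few lines. The record does not give the exact update rule, so the sketch below is an assumption: the hypothetical helpers `dynamic_weights` and `weighted_total_loss` use one common heuristic, weighting each sub-network's loss by its remaining loss ratio so that slower-converging tasks receive larger weights and the losses tend to decline in sync.

```python
def dynamic_weights(current_losses, initial_losses):
    """Return normalized weights proportional to each task's
    remaining loss ratio (current / initial).

    Tasks whose loss has dropped less since training began get
    a larger share of the total weight.
    """
    ratios = [c / i for c, i in zip(current_losses, initial_losses)]
    total = sum(ratios)
    return [r / total for r in ratios]

def weighted_total_loss(current_losses, initial_losses):
    """Linear weighted sum of the sub-network losses,
    with weights recomputed dynamically each step."""
    weights = dynamic_weights(current_losses, initial_losses)
    return sum(w * loss for w, loss in zip(weights, current_losses))

# Example: the detection loss has halved while the segmentation
# loss has not moved, so segmentation gets twice the weight.
w = dynamic_weights([1.0, 2.0], [2.0, 2.0])  # -> [1/3, 2/3]
```

In a real training loop the weights would be recomputed from running averages of the sub-network losses before each backward pass; this sketch only illustrates the weighting idea, not the paper's specific rule.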
Pages: 241-250
Number of pages: 9