A Block Object Detection Method Based on Feature Fusion Networks for Autonomous Vehicles

Cited by: 9
Authors
Meng, Qiao [1 ,2 ]
Song, Huansheng [1 ]
Li, Gang [1 ]
Zhang, Yu'an [2 ]
Zhang, Xiangqing [1 ]
Affiliations
[1] Changan Univ, Sch Informat Engn, Xian 710064, Shaanxi, Peoples R China
[2] Qinghai Univ, Comp Technol & Applicat Dept, Xining 810016, Qinghai, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
DOI
10.1155/2019/4042624
Chinese Library Classification
O1 [Mathematics];
Subject Classification
0701; 070101;
Abstract
Automatic multi-object detection remains a challenging problem for autonomous vehicle technology. Over the past decades, deep learning has proven successful for multi-object detection, for example with the Single Shot MultiBox Detector (SSD) model. The current trend is to train deep Convolutional Neural Networks (CNNs) on publicly available autonomous-vehicle datasets. However, network performance usually degrades when detecting small objects. Moreover, existing autonomous-vehicle datasets do not adequately cover the domestic traffic environment. To improve small-object detection performance and ensure the validity of the dataset, we propose a new method. Specifically, the original images are divided into blocks that serve as input to a VGG-16 network, to which feature-map fusion is added after the convolutional layers. In addition, an image pyramid is built to project all block-level detection results back to the original object scale as closely as possible. Beyond the improved detection method, a new autonomous-driving dataset is created, in which object categories and labeling criteria are defined, and a data-augmentation method is proposed. Experimental results on the new dataset show that the performance of the proposed method is greatly improved, especially for detecting small objects in large images. Moreover, the proposed method adapts to complex weather conditions and contributes substantially to autonomous-vehicle perception and planning.
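The block-based pipeline described in the abstract (splitting each image into blocks, detecting within each block, then projecting detections back into original-image coordinates) can be sketched as follows. This is a minimal illustration only; the function names, the overlap parameter, and the coordinate convention are assumptions for this sketch and are not taken from the paper, which additionally applies feature-map fusion and an image pyramid.

```python
def split_into_blocks(width, height, block_size, overlap=0):
    """Cover a width x height image with (possibly overlapping) block windows.

    Returns a list of (x0, y0, x1, y1) windows in original-image coordinates.
    """
    step = block_size - overlap
    blocks = []
    for y in range(0, height, step):
        for x in range(0, width, step):
            # Clip each block to the image boundary.
            blocks.append((x, y, min(x + block_size, width), min(y + block_size, height)))
    return blocks


def project_to_original(block_origin, box):
    """Map a detection box from block-local coordinates back to the original image.

    block_origin is the (x, y) top-left corner of the block; box is
    (x0, y0, x1, y1) in block-local coordinates.
    """
    bx, by = block_origin
    x0, y0, x1, y1 = box
    return (x0 + bx, y0 + by, x1 + bx, y1 + by)
```

For example, a 4x4 image split with block_size=2 yields four blocks, and a box detected at (0, 0, 1, 1) inside the block whose origin is (2, 0) projects to (2, 0, 3, 1) in the full image. In practice, overlapping blocks and a non-maximum-suppression step over the merged detections would be needed, which this sketch omits.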
Pages: 14
Related Papers
50 records in total
[31]   Feature Fusion in Part-Based Object Detection [J].
Koyuncu, Murat ;
Cetinkaya, Basar .
2015 23RD SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS CONFERENCE (SIU), 2015, :565-568
[32]   An infrared object detection algorithm based on feature fusion [J].
Meng, Ying ;
Ma, Chao ;
Zeng, Yaoyuan ;
An, Wei .
SECOND IYSF ACADEMIC SYMPOSIUM ON ARTIFICIAL INTELLIGENCE AND COMPUTER ENGINEERING, 2021, 12079
[33]   Autonomous Aerial Vehicle Object Detection Based on Spatial Perception and Multiscale Semantic and Detail Feature Fusion [J].
Rao, Wei ;
Chen, Siyuan ;
Li, Dan .
IEEE ACCESS, 2025, 13 :42897-42909
[34]   Small Object Detection Method Based on Weighted Feature Fusion and CSMA Attention Module [J].
Peng, Chao ;
Zhu, Meng ;
Ren, Honge ;
Emam, Mahmoud .
ELECTRONICS, 2022, 11 (16)
[35]   Detection of Obstacles Based on Information Fusion for Autonomous Agricultural Vehicles [J].
Xue, Jinlin ;
Dong, Shuxian ;
Fan, Bowen .
Nongye Jixie Xuebao/Transactions of the Chinese Society for Agricultural Machinery, 2018, 49 :29-34
[36]   Feature cross-fusion block net for accurate and efficient object detection [J].
Zhang, Xiuling ;
Li, Jinxiang ;
Zhou, Kaixuan ;
Ma, Kai .
JOURNAL OF ELECTRONIC IMAGING, 2021, 30 (01)
[37]   Object Detection in Aerial Images Using Feature Fusion Deep Networks [J].
Long, Hao ;
Chung, Yinung ;
Liu, Zhenbao ;
Bu, Shuhui .
IEEE ACCESS, 2019, 7 :30980-30990
[38]   Enhanced Object Detection in Autonomous Vehicles through LiDAR-Camera Sensor Fusion [J].
Dai, Zhongmou ;
Guan, Zhiwei ;
Chen, Qiang ;
Xu, Yi ;
Sun, Fengyi .
WORLD ELECTRIC VEHICLE JOURNAL, 2024, 15 (07)
[39]   Procuring cooperative intelligence in autonomous vehicles for object detection through data fusion approach [J].
Daniel, Alfred ;
Subburathinam, Karthik ;
Anand Muthu, Bala ;
Rajkumar, Newlin ;
Kadry, Seifedine ;
Mahendran, Rakesh Kumar ;
Pandian, Sanjeevi .
IET INTELLIGENT TRANSPORT SYSTEMS, 2020, 14 (11) :1410-1417
[40]   Improving Radar-Camera Fusion-based 3D Object Detection for Autonomous Vehicles [J].
Kurniawan, Irfan Tito ;
Trilaksono, Bambang Riyanto .
2022 12TH INTERNATIONAL CONFERENCE ON SYSTEM ENGINEERING AND TECHNOLOGY (ICSET 2022), 2022, :42-47