Visual Perception for Autonomous Driving inspired by Convergence-Divergence Zones

Cited by: 0
Authors
Plebe, Alice [1 ]
Da Lio, Mauro [2 ]
Affiliations
[1] Univ Trento, Dept Informat Engn & Comp Sci, Trento, Italy
[2] Univ Trento, Dept Ind Engn, Trento, Italy
Source
PROCEEDINGS OF THE 2019 11TH INTERNATIONAL SYMPOSIUM ON IMAGE AND SIGNAL PROCESSING AND ANALYSIS (ISPA 2019) | 2019
Keywords
mental imagery; deep learning; autonomous driving; variational autoencoder; SIMULATION;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Visual perception is, by far, the main source of information used by humans when driving. It is therefore natural and appropriate to rely heavily on vision analysis for autonomous driving, as most projects do. However, there is a significant difference between the common treatment of vision in autonomous driving and visual perception in humans when driving. Essentially, image analysis is often regarded as an isolated, autonomous module whose high-level output drives the control modules of the vehicle. The direction presented here is different: we take inspiration from the brain architecture that makes humans so effective at learning tasks as complex as driving. Two key theories of biological perception ground our development. The first is the view of thinking as a simulation of perception and action, as theorized by Hesslow. The second is the Convergence-Divergence Zones (CDZs) mechanism of mental simulation, which connects the process of extracting features from a visual scene to the inverse process of imagining scene content by decoding features stored in memory. We show how our model, based on a semi-supervised variational autoencoder, is a rather faithful implementation of these two neurocognitive theories.
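The abstract frames the encoder of a variational autoencoder as a convergence zone (compressing a visual scene into latent features) and the decoder as a divergence zone (re-imagining scene content from those features). The sketch below illustrates that encode/decode loop only; it is not the authors' code, the layer sizes and the use of PyTorch are illustrative assumptions, and the paper's semi-supervised extension (which injects label information into the latent space) is omitted here.

# Minimal, illustrative VAE sketch of the convergence/divergence idea.
# Assumptions: PyTorch, flattened RGB frames of size 64x64 normalized to [0, 1].
import torch
import torch.nn as nn
import torch.nn.functional as F

class CDZAutoencoder(nn.Module):
    def __init__(self, input_dim=64 * 64 * 3, latent_dim=32):
        super().__init__()
        # Convergence zone: scene -> latent mean and log-variance.
        self.enc = nn.Linear(input_dim, 256)
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        # Divergence zone: latent features -> re-imagined scene.
        self.dec1 = nn.Linear(latent_dim, 256)
        self.dec2 = nn.Linear(256, input_dim)

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        # Sample latent features while keeping the graph differentiable.
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def decode(self, z):
        return torch.sigmoid(self.dec2(F.relu(self.dec1(z))))

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction term plus KL divergence to a unit Gaussian prior.
    bce = F.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld

A typical forward pass would be: recon, mu, logvar = CDZAutoencoder()(x) followed by vae_loss(recon, x, mu, logvar); "imagining" a scene corresponds to calling decode on latent features alone, without an input image.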
Pages: 204-208
Number of pages: 5