Deep multi-modal data analysis and fusion for robust scene understanding in CAVs

Cited by: 1
Authors:
Papandreou, Andreas [1 ]
Kloukiniotis, Andreas [1 ]
Lalos, Aris [2 ]
Moustakas, Konstantinos [1 ]
Affiliations:
[1] Univ Patras, Dept Elect & Comp Engn, Univ Campus, Rion 26504, Greece
[2] ISI Ind Syst Inst, Patras Sci Pk Bldg, Patras, Greece
Keywords:
autonomous vehicles; multi-modal scene analysis; adversarial attacks;
DOI:
10.1109/MMSP53017.2021.9733604
CLC classification:
TP31 [Computer Software];
Subject classification code:
081202 ; 0835 ;
Abstract
Deep learning (DL) is becoming an integral part of Autonomous Vehicles (AVs). The development of scene analysis modules that are robust to vulnerabilities such as adversarial inputs or cyber-attacks is therefore an imperative need for future AV perception systems. In this paper, we address this issue by exploiting recent progress in Artificial Intelligence (AI) and Machine Learning (ML) to provide holistic situational awareness and mitigate the effect of such attacks on the scene analysis modules. We propose novel multi-modal approaches that achieve robustness to adversarial attacks by appropriately modifying the analysis neural networks and by utilizing late fusion methods. More specifically, we first add new layers to a 2D segmentation DL model to enhance its robustness to adversarial noise. We then apply a novel late fusion technique that extracts features directly from the 3D space and projects them onto the 2D segmented space to identify inconsistencies. Extensive evaluation studies on the KITTI odometry dataset show promising performance under various types of noise.
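The abstract only sketches how the 3D features are projected onto the 2D segmented space for the consistency check. The short Python sketch below illustrates one possible form of such a late-fusion check, assuming KITTI-style calibration matrices (Tr_velo_to_cam, P_rect); the function names, the road-versus-height rule, and every threshold are illustrative assumptions rather than the authors' implementation.

# Hypothetical late-fusion consistency check: project LiDAR points into the
# camera image and compare simple 3D geometry against the labels of the
# (possibly attacked) 2D segmentation map. KITTI-style calibration assumed.
import numpy as np

def project_lidar_to_image(points_velo, Tr_velo_to_cam, P_rect):
    """Project Nx3 LiDAR points into pixel coordinates.

    Tr_velo_to_cam (3x4) maps LiDAR to camera coordinates, P_rect (3x4) is
    the rectified projection matrix. Returns (u, v) pixels and the indices
    of the points that lie in front of the camera.
    """
    n = points_velo.shape[0]
    pts_h = np.hstack([points_velo, np.ones((n, 1))])        # Nx4 homogeneous
    pts_cam = pts_h @ Tr_velo_to_cam.T                       # Nx3 camera frame
    front = pts_cam[:, 2] > 0.1                              # keep points ahead of the camera
    pts_cam_h = np.hstack([pts_cam[front], np.ones((int(front.sum()), 1))])
    proj = pts_cam_h @ P_rect.T                              # image-plane homogeneous coords
    uv = proj[:, :2] / proj[:, 2:3]
    return uv, np.flatnonzero(front)

def flag_inconsistencies(seg_map, uv, point_heights, road_label=0, max_road_height=0.3):
    """Flag projected points whose 3D geometry contradicts the 2D labels.

    A pixel labelled "road" that is backed by a LiDAR point well above the
    ground plane is reported as a cross-modal inconsistency. The rule and
    the 0.3 m threshold are purely illustrative.
    """
    h, w = seg_map.shape
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    labels = np.full(uv.shape[0], -1, dtype=int)
    labels[valid] = seg_map[v[valid], u[valid]]
    return (labels == road_label) & (point_heights > max_road_height)

# Toy usage with random data standing in for a real KITTI frame.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    points = rng.uniform([-10.0, -10.0, -2.0], [30.0, 10.0, 2.0], size=(5000, 3))
    # Axis swap from LiDAR (x fwd, y left, z up) to camera (x right, y down, z fwd).
    R = np.array([[0.0, -1.0, 0.0], [0.0, 0.0, -1.0], [1.0, 0.0, 0.0]])
    Tr = np.hstack([R, np.zeros((3, 1))])
    P = np.array([[700.0, 0.0, 620.0, 0.0], [0.0, 700.0, 190.0, 0.0], [0.0, 0.0, 1.0, 0.0]])
    seg = rng.integers(0, 5, size=(375, 1242))               # stand-in segmentation output
    uv, idx = project_lidar_to_image(points, Tr, P)
    heights = points[idx, 2] + 1.73                          # LiDAR mounted ~1.73 m above ground
    flags = flag_inconsistencies(seg, uv, heights)
    print(f"{flags.sum()} of {len(flags)} projected points flagged as inconsistent")

In practice the geometric cue would come from the 3D branch of the fused model rather than a fixed height threshold; the sketch only makes the idea of cross-modal inconsistency flagging concrete.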
Pages: 6
Related Papers (50 in total)
  • [31] Scene-Aware Prompt for Multi-modal Dialogue Understanding and Generation. Li, Bin; Weng, Yixuan; Ma, Ziyu; Sun, Bin; Li, Shutao. NATURAL LANGUAGE PROCESSING AND CHINESE COMPUTING, NLPCC 2022, PT II, 2022, 13552: 179-191.
  • [32] A Novel Multi-Modal Network-Based Dynamic Scene Understanding. Uddin, Md Azher; Joolee, Joolekha Bibi; Lee, Young-Koo; Sohn, Kyung-Ah. ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2022, 18 (01).
  • [33] Efficient Multi-Modal Fusion with Diversity Analysis. Qu, Shuhui; Kang, Yan; Lee, Janghwan. PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021: 2663-2670.
  • [34] Robust multi-modal biometric fusion via multiple SVMs. Dinerstein, Sabra; Dinerstein, Jonathan; Ventura, Dan. 2007 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN AND CYBERNETICS, VOLS 1-8, 2007: 1029-+.
  • [35] Dynamic Brightness Adaptation for Robust Multi-modal Image Fusion. Sun, Yiming; Cao, Bing; Zhu, Pengfei; Hu, Qinghua. PROCEEDINGS OF THE THIRTY-THIRD INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2024, 2024: 1317-1325.
  • [36] FUSION OF MULTI-MODAL NEUROIMAGING DATA AND ASSOCIATION WITH COGNITIVE DATA. LoPresto, Mark D.; Akhonda, M. A. B. S.; Calhoun, Vince D.; Adali, Tülay. 2023 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING WORKSHOPS, ICASSPW, 2023.
  • [37] ESCAPE Data Collection for Multi-Modal Data Fusion Research. Zulch, Peter; Distasio, Marcello; Cushman, Todd; Wilson, Brian; Hart, Ben; Blasch, Erik. 2019 IEEE AEROSPACE CONFERENCE, 2019.
  • [38] DFMM-Precip: Deep Fusion of Multi-Modal Data for Accurate Precipitation Forecasting. Li, Jinwen; Wu, Li; Liu, Jiarui; Wang, Xiaoying; Xue, Wei. Water (Switzerland), 2024, 16 (24).
  • [39] An Abnormal Behavior Detection Method Leveraging Multi-modal Data Fusion and Deep Mining. Tian, Xinyu; Zheng, Qinghe; Jiang, Nan. IAENG International Journal of Applied Mathematics, 2021, 51 (01).
  • [40] MULTI-MODAL REMOTE SENSING DATA FUSION FRAMEWORK. Ghaffar, M. A. A.; Vu, T. T.; Maul, T. H. FOSS4G-EUROPE 2017 - ACADEMIC TRACK, 2017, 42-4 (W2): 85-89.