Combining Manifold Learning and Neural Field Dynamics for Multimodal Fusion

Cited by: 1
Authors
Forest, Simon [1 ,2 ]
Quinton, Jean-Charles [1 ]
Lefort, Mathieu [2 ]
Affiliations
[1] Univ Grenoble Alpes, LJK, Grenoble INP, CNRS, UMR 5224, F-38000 Grenoble, France
[2] Univ Lyon, LIRIS, INSA Lyon, CNRS, UCBL, UMR 5205, F-69622 Villeurbanne, France
Source
2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022
Keywords
multimodal fusion; growing neural gas; manifold learning; dynamic neural field; selective attention; MODEL; REPRESENTATION; ARCHITECTURE; NETWORK;
DOI
10.1109/IJCNN55064.2022.9892614
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
For interactivity and cost-efficiency purposes, both biological and artificial agents (e.g., robots) usually rely on sets of complementary sensors. Each sensor samples information from only a subset of the environment, with both the subset and the precision of signals varying through time depending on the agent-environment configuration. Agents must therefore perform multimodal fusion to select and filter relevant information by contrasting the shortcomings and redundancies of different modalities. For that purpose, we propose to combine a classical off-the-shelf manifold learning algorithm with dynamic neural fields (DNF), a training-free bio-inspired model of competition amid topologically-encoded information. Through the adaptation of DNF to irregular multimodal topologies, this coupling exhibits interesting properties, promising reliable localizations enhanced by the selection and attentional capabilities of DNF. In particular, the application of our method to audiovisual datasets (with direct ties to either psychophysics or robotics) shows merged perceptions relying on the spatially-dependent precision of each modality, and robustness to irrelevant features.
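The coupling described in the abstract, a dynamic neural field relaxing over an irregular set of nodes learned by a manifold-learning algorithm such as a growing neural gas, can be illustrated compactly. The following Python snippet is a minimal sketch, not the authors' implementation: the random node layout stands in for learned GNG prototypes, and the kernel widths, gains, time constants, and Gaussian stimuli are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: an Amari-style dynamic neural field (DNF) integrated on an
# irregular set of nodes. The random layout stands in for prototypes learned
# by a growing neural gas; all parameters below are illustrative, untuned.
rng = np.random.default_rng(0)
nodes = rng.uniform(0.0, 1.0, size=(200, 2))   # stand-in for GNG prototypes

# Difference-of-Gaussians lateral kernel from pairwise node distances:
# short-range excitation, broader inhibition, coarsely normalized by N.
d = np.linalg.norm(nodes[:, None, :] - nodes[None, :, :], axis=-1)
w = (2.0 * np.exp(-(d / 0.08) ** 2) - 1.0 * np.exp(-(d / 0.30) ** 2)) / len(nodes)

def f(u):
    return 1.0 / (1.0 + np.exp(-10.0 * u))     # sigmoidal firing rate

def dnf_step(u, stimulus, dt=0.05, tau=1.0, h=-0.2):
    """One Euler step of tau * du/dt = -u + W f(u) + I + h on the node set."""
    du = -u + w @ f(u) + stimulus + h
    return u + (dt / tau) * du

# Two modal likelihoods of a source at each node (e.g. a broad, imprecise
# auditory cue and a sharp visual one) are summed as the field's input.
target = np.array([0.3, 0.7])
dist = np.linalg.norm(nodes - target, axis=1)
audio = 0.8 * np.exp(-dist ** 2 / 0.08)        # broad, imprecise modality
video = 1.2 * np.exp(-dist ** 2 / 0.01)        # sharp, precise modality

u = np.full(len(nodes), -0.2)                  # start at the resting level h
for _ in range(400):
    u = dnf_step(u, audio + video)

print("selected location:", nodes[np.argmax(u)])  # lands near the target
```

The design choice this sketch highlights is that the field equation itself needs no training: only the node positions (and hence the distance-based kernel) come from the learned manifold, while the excitation/inhibition dynamics provide the selection and attentional behavior described in the abstract.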
Pages: 8