Continual learning-based trajectory prediction with memory augmented networks

Cited by: 24
Authors
Yang, Biao [1 ,3 ]
Fan, Fucheng [2 ]
Ni, Rongrong [3 ]
Li, Jie [3 ]
Kiong, Loochu [4 ]
Liu, Xiaofeng [3 ]
Affiliations
[1] Changzhou Univ, Sch Microelect & Control Engn, Changzhou 213000, Peoples R China
[2] Changzhou Univ, Sch Comp Sci & Artificial Intelligence, Changzhou 213000, Peoples R China
[3] Hohai Univ, Coll Internet Things Engn, Changzhou 213000, Peoples R China
[4] Univ Malaya, Dept Artificial Intelligence, Kuala Lumpur 50603, Malaysia
Keywords
Trajectory prediction; Multi-hop attention; Memory augmented neural networks; Continual learning; Catastrophic forgetting
DOI
10.1016/j.knosys.2022.110022
CLC number
TP18 [Theory of artificial intelligence]
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Forecasting pedestrian trajectories is widely used by mobile agents such as self-driving vehicles and social robots. Deep neural network-based trajectory prediction models predict pedestrian trajectories precisely after training. However, these models cannot avoid catastrophic forgetting when the data distribution shifts during continual learning, making it impractical to deploy them on agents in real environments. A continual trajectory prediction method with memory augmented networks, CLTP-MAN, is proposed to handle the catastrophic forgetting issue by introducing a memory augmented network with sparse experience replay. CLTP-MAN comprises an external memory module, a memory extraction module, and a trajectory prediction module. The external memory module contains prior knowledge useful for trajectory prediction. The memory extraction module reads and writes key-value memories with a trainable controller. Finally, the trajectory prediction module performs long-horizon trajectory prediction by introducing a multi-hop attention mechanism to extract pivotal information from the external memory. Meanwhile, the catastrophic forgetting issue is handled through sparse experience replay. The two benchmark datasets, ETH/UCY and SDD, are reorganized to meet the needs of continual learning and used for quantitative and qualitative evaluations. The results verify that, benefiting from the external memory and the multi-hop attention mechanism, CLTP-MAN generalizes better than several mainstream methods. Sparse experience replay effectively reduces catastrophic forgetting, leading to reliable deployments on mobile agents. (c) 2022 Elsevier B.V. All rights reserved.
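The multi-hop attention read described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the memory sizes, the additive query refinement, and the function names (`multihop_read`, `softmax`) are assumptions chosen for clarity.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def multihop_read(query, keys, values, hops=2):
    """Multi-hop attention read over a key-value external memory.

    query:  (d,) encoded trajectory feature (hypothetical encoding)
    keys:   (n, d) memory keys
    values: (n, d) memory values
    Each hop attends over all memory slots, reads a weighted value,
    and refines the query so later hops can focus on other slots.
    """
    q = query
    for _ in range(hops):
        attn = softmax(keys @ q)   # (n,) attention weights over slots
        read = attn @ values       # (d,) attention-weighted memory read
        q = q + read               # refine the query for the next hop
    return q

# Toy usage with random memory contents.
rng = np.random.default_rng(0)
keys = rng.normal(size=(8, 4))
values = rng.normal(size=(8, 4))
query = rng.normal(size=4)
out = multihop_read(query, keys, values, hops=3)
print(out.shape)
```

With `hops=1` this reduces to a single attention lookup; stacking hops is what lets the predictor aggregate information spread across several memory slots.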
Pages: 13