GeoGail: A Model-Based Imitation Learning Framework for Human Trajectory Synthesizing

Cited by: 0
Authors
Wu, Yuchen [1 ]
Wang, Huandong [2 ]
Gao, Changzheng [2 ]
Jin, Depeng [2 ]
Li, Yong [2 ]
Affiliations
[1] Carnegie Mellon Univ, Pittsburgh, PA USA
[2] Tsinghua Univ, Beijing Natl Res Ctr Informat Sci & Technol (BNRist), Dept Elect Engn, Beijing, Peoples R China
Keywords
Mobility Trajectory; Imitation Learning; Generative Models
DOI
10.1145/3699961
CLC Classification Number
TP [Automation Technology; Computer Technology]
Discipline Classification Code
0812
Abstract
Synthesized human trajectories are crucial for a large number of applications. Existing solutions are mainly based on the generative adversarial network (GAN), which is limited because it does not model the human decision-making process. In this article, we propose a novel imitation learning-based method to synthesize human trajectories. The model uses a semantics-based interaction mechanism between the decision-making strategy and visitations to diverse geographical locations, modeling both in the semantic domain in a uniform manner. To better capture real-world human decision-making policies, we propose a feature extraction model that extracts the internal latent factors of variation across individuals, together with a self-attention-based policy net that captures the long-term correlation of mobility and decision-making patterns. To better reward users' mobility behavior, we further propose a multi-scale reward net combined with mutual information that models the instant reward, long-term reward, and individual characteristics in a cohesive manner. Extensive experiments on two real-world trajectory datasets show that our model synthesizes higher-quality trajectory data than six state-of-the-art baselines in terms of a number of key usability metrics and can well support practical applications based on trajectory data, demonstrating its effectiveness. Furthermore, the proposed method automatically learns explainable knowledge from data, including explainable statistical features of trajectories and statistical relations between decision-making policy and features.
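The adversarial imitation loop the abstract outlines can be sketched in miniature: a policy proposes next-location visits, a reward (discriminator) net scores them against expert trajectories, and the policy is updated toward expert-like behavior. Everything below (the tabular `Policy`, `RewardNet`, toy cyclic expert, and all parameter choices) is an illustrative assumption for exposition, not the paper's actual architecture, which uses self-attention and a multi-scale reward.

```python
# Minimal sketch of a GAIL-style loop over discretized locations.
# Policy and reward net are tabular stand-ins for the paper's neural models.
import math
import random

random.seed(0)
N_LOCS = 5  # toy discretized geographic locations

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

class Policy:
    """Tabular policy: logits[s][a] = preference for moving from location s to a."""
    def __init__(self):
        self.logits = [[0.0] * N_LOCS for _ in range(N_LOCS)]

    def act(self, s):
        probs = softmax(self.logits[s])
        return random.choices(range(N_LOCS), weights=probs)[0]

    def update(self, s, a, reward, lr=0.5):
        # REINFORCE-style step: push up log-prob of rewarded transitions.
        probs = softmax(self.logits[s])
        for a2 in range(N_LOCS):
            grad = (1.0 if a2 == a else 0.0) - probs[a2]
            self.logits[s][a2] += lr * reward * grad

class RewardNet:
    """Tabular discriminator: estimates how 'expert-like' a transition (s, a) is."""
    def __init__(self):
        self.score = [[0.0] * N_LOCS for _ in range(N_LOCS)]

    def reward(self, s, a):
        return 1.0 / (1.0 + math.exp(-self.score[s][a]))  # sigmoid, in (0, 1)

    def update(self, expert_sa, fake_sa, lr=0.5):
        # Raise scores on expert transitions, lower them on generated ones.
        for (s, a) in expert_sa:
            self.score[s][a] += lr * (1.0 - self.reward(s, a))
        for (s, a) in fake_sa:
            self.score[s][a] -= lr * self.reward(s, a)

# Toy "expert" mobility pattern: always cycle location 0 -> 1 -> 2 -> 0 ...
expert = [(s, (s + 1) % 3) for s in [0, 1, 2] * 20]

policy, rnet = Policy(), RewardNet()
for _ in range(300):
    s = random.randrange(3)
    a = policy.act(s)                       # generator proposes a transition
    rnet.update(expert, [(s, a)])           # discriminator step
    policy.update(s, a, rnet.reward(s, a) - 0.5)  # policy step, centered reward

probs = softmax(policy.logits[0])
print(round(probs[1], 2))  # policy should come to prefer 0 -> 1, like the expert
```

The centered reward (`reward - 0.5`) makes non-expert transitions carry a negative signal, so the policy both reinforces expert-like moves and suppresses others; the paper's multi-scale reward plays an analogous role at instant and long-term scales.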
Pages: 23
Related Papers
50 records in total
  • [1] Model-based Imitation Learning by Probabilistic Trajectory Matching
    Englert, Peter
    Paraschos, Alexandros
    Peters, Jan
    Deisenroth, Marc Peter
    2013 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2013, : 1922 - 1927
  • [2] A Probabilistic Framework for Model-Based Imitation Learning
    Shon, Aaron P.
    Grimes, David B.
    Baker, Chris L.
    Rao, Rajesh P. N.
    PROCEEDINGS OF THE TWENTY-SIXTH ANNUAL CONFERENCE OF THE COGNITIVE SCIENCE SOCIETY, 2004, : 1237 - 1242
  • [3] Probabilistic model-based imitation learning
    Englert, Peter
    Paraschos, Alexandros
    Deisenroth, Marc Peter
    Peters, Jan
    ADAPTIVE BEHAVIOR, 2013, 21 (05) : 388 - 403
  • [4] Model-based Adversarial Imitation Learning from Demonstrations and Human Reward
    Huang, Jie
    Hao, Jiangshan
    Juan, Rongshun
    Gomez, Randy
    Nakamura, Keisuke
    Li, Guangliang
    2023 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, IROS, 2023, : 1683 - 1690
  • [5] Model-Based Imitation Learning for Urban Driving
    Hu, Anthony
    Corrado, Gianluca
    Griffiths, Nicolas
    Murez, Zak
    Gurau, Corina
    Yeo, Hudson
    Kendall, Alex
    Cipolla, Roberto
    Shotton, Jamie
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [6] Imitation Game: A Model-based and Imitation Learning Deep Reinforcement Learning Hybrid
    Veith, Eric Msp
    Logemann, Torben
    Berezin, Aleksandr
    Wellssow, Arlena
    Balduin, Stephan
    2024 12TH WORKSHOP ON MODELING AND SIMULATION OF CYBER-PHYSICAL ENERGY SYSTEMS, MSCPES, 2024,
  • [7] Imitation Learning in Industrial Robots: A Kinematics based Trajectory Generation Framework
    Jha, Abhishek
    Chiddarwar, Shital S.
    Bhute, Rohini Y.
    Alakshendra, Veer
    Nikhade, Gajanan
    Khandekar, Priya M.
    PROCEEDINGS OF THE ADVANCES IN ROBOTICS (AIR'17), 2017,
  • [8] Model-Based Imitation Learning Using Entropy Regularization of Model and Policy
    Uchibe, Eiji
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, 7 (04) : 10922 - 10929
  • [9] MobILE: Model-Based Imitation Learning From Observation Alone
    Kidambi, Rahul
    Chang, Jonathan D.
    Sun, Wen
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [10] Hierarchical Model-Based Imitation Learning for Planning in Autonomous Driving
    Bronstein, Eli
    Palatucci, Mark
    Notz, Dominik
    White, Brandyn
    Kuefler, Alex
    Lu, Yiren
    Paul, Supratik
    Nikdel, Payam
    Mougin, Paul
    Chen, Hongge
    Fu, Justin
    Abrams, Austin
    Shah, Punit
    Racah, Evan
    Frenkel, Benjamin
    Whiteson, Shimon
    Anguelov, Dragomir
    2022 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2022, : 8652 - 8659