Personalized origin-destination travel time estimation with active adversarial inverse reinforcement learning and Transformer

Cited by: 1
Authors
Liu, Shan [1 ]
Zhang, Ya [1 ]
Wang, Zhengli [2 ]
Liu, Xiang [3 ]
Yang, Hai [4 ]
Affiliations
[1] Southeast Univ, Sch Automat, Nanjing 210096, Peoples R China
[2] Nanjing Univ, Sch Management & Engn, Nanjing 210093, Peoples R China
[3] Jiangnan Univ, Sch Internet Things Engn, Wuxi 214122, Peoples R China
[4] Hong Kong Univ Sci & Technol, Dept Civil & Environm Engn, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Travel time estimation; Inverse reinforcement learning; Personalized route preference; Active learning; Transformer; PREDICTION; PATH;
DOI
10.1016/j.tre.2024.103839
Chinese Library Classification
F [Economics];
Subject Classification Code
02;
Abstract
Travel time estimation is important for instant delivery, vehicle routing, and ride-hailing. Most studies estimate the travel time of a specified route, and only a few address origin-destination travel time estimation (OD-TTE), where no route is specified. Moreover, most studies on OD-TTE ignore personalized route preferences and the cost of data annotation. To fill this research gap, we analyze individual route preferences and propose a personalized origin-destination travel time estimation method based on active adversarial inverse reinforcement learning (AA-IRL) and a Transformer. To analyze personalized route preferences, we integrate adversarial inverse reinforcement learning with active learning, which effectively reduces the cost of sample annotation. After inferring the possible routes, we propose an AdaBoost multi-fusion graph convolutional Transformer network (AMGC-Transformer) for travel time estimation. Numerical experiments on ride-hailing and online food delivery trajectories in China validate the advantage of our method. Compared with relevant studies, our approach improves the F1-score of route inference by 2.50-3.35% and reduces the mean absolute error of OD-TTE by 7.44-11.66%.
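The abstract describes a two-stage pipeline: an IRL-based stage that infers a traveler's preferred route between an origin and a destination (with active learning choosing which samples to annotate), followed by a Transformer-based stage that estimates travel time along the inferred route. The sketch below is a minimal illustration of that structure, not the authors' AA-IRL or AMGC-Transformer implementation: it assumes PyTorch, and the module names, feature shapes, margin-based query rule, and mean-pooled Transformer regressor are illustrative assumptions.

```python
# Illustrative sketch of the two-stage idea (NOT the paper's implementation).
# Assumes PyTorch; all names, shapes, and the query rule are hypothetical.
import torch
import torch.nn as nn


class RouteRewardNet(nn.Module):
    """IRL-style reward network: scores a candidate route by summing
    per-link rewards (stands in for the adversarial IRL reward)."""
    def __init__(self, link_dim: int, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(link_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, route: torch.Tensor) -> torch.Tensor:
        # route: (num_links, link_dim) -> scalar route score
        return self.mlp(route).sum()


def select_sample_to_label(score_sets):
    """Active-learning stand-in: among unlabeled OD pairs, query the one
    whose top two candidate-route scores are closest (most uncertain)."""
    margins = []
    for scores in score_sets:
        top2 = torch.topk(scores, k=2).values   # best and second-best route
        margins.append(top2[0] - top2[1])       # small margin = uncertain
    return int(torch.argmin(torch.stack(margins)))


class TravelTimeTransformer(nn.Module):
    """Transformer encoder over the inferred route's link features,
    mean-pooled into a scalar travel-time estimate."""
    def __init__(self, link_dim: int, d_model: int = 32):
        super().__init__()
        self.proj = nn.Linear(link_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)

    def forward(self, routes: torch.Tensor) -> torch.Tensor:
        # routes: (batch, num_links, link_dim)
        h = self.encoder(self.proj(routes))
        return self.head(h.mean(dim=1)).squeeze(-1)


if __name__ == "__main__":
    link_dim, num_links = 8, 12
    # Three candidate routes for one OD pair (random placeholder features).
    candidates = [torch.randn(num_links, link_dim) for _ in range(3)]
    reward_net = RouteRewardNet(link_dim)
    scores = torch.stack([reward_net(r) for r in candidates])
    best_route = candidates[int(torch.argmax(scores))]
    tte_model = TravelTimeTransformer(link_dim)
    print("estimated travel time:", float(tte_model(best_route.unsqueeze(0))))
```

In the paper, the reward model is trained adversarially against observed trajectories and the estimator additionally uses graph convolution and AdaBoost-style fusion; those components are omitted from this sketch.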
Pages: 17