End-to-End Entity Linking with Hierarchical Reinforcement Learning

Times Cited: 0
Authors
Chen, Lihan [1]
Zhu, Tinghui [1]
Liu, Jingping [2]
Liang, Jiaqing [3]
Xiao, Yanghua [1,4]
Affiliations
[1] Fudan Univ, Shanghai Key Lab Data Sci, Sch Comp Sci, Shanghai, Peoples R China
[2] East China Univ Sci & Technol, Shanghai, Peoples R China
[3] Fudan Univ, Sch Data Sci, Shanghai, Peoples R China
[4] Fudan Aishu Cognit Intelligence Joint Res Ctr, Shanghai, Peoples R China
Source
THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 4 | 2023
Funding
National Natural Science Foundation of China
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Entity linking (EL) is the task of linking text segments to the entities they refer to in a knowledge graph; it is typically decomposed into mention detection and entity disambiguation. Compared with traditional methods that treat the two subtasks separately, recent end-to-end entity linking methods exploit the mutual dependency between mentions and entities to achieve better performance. However, existing end-to-end EL methods still fail to make full use of this dependency. To this end, we model the EL task as a hierarchical decision-making process and design a hierarchical reinforcement learning algorithm to solve it. Extensive experiments show that the proposed method achieves state-of-the-art performance on several EL benchmark datasets. Our code is publicly available at https://github.com/lhlclhl/he2eel.
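The abstract only sketches the approach at a high level. As a rough illustration of what a hierarchical decision process for end-to-end EL can look like (a minimal sketch, not the authors' implementation), the following Python toy has a high-level policy that scans the text and commits to mention spans, then hands each detected mention to a low-level policy that selects an entity from a candidate set. The scoring functions, the toy candidate generator, and all names below are hypothetical stand-ins for components the paper learns with reinforcement learning.

# Illustrative sketch only (not the authors' released implementation): a toy
# hierarchical decision process for end-to-end entity linking. A high-level
# policy scans token positions and commits to mention spans; a low-level
# policy then disambiguates each detected mention against a candidate set.
# All scoring functions and the toy candidate generator are hypothetical
# stand-ins for the learned policies described in the paper.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class LinkedMention:
    span: Tuple[int, int]      # token offsets [start, end)
    entity: Optional[str]      # chosen KB entity, or None for NIL

def high_level_score(tokens: List[str], start: int, end: int) -> float:
    """Stand-in for the mention-detection policy: score a candidate span."""
    span = " ".join(tokens[start:end])
    return 1.0 if span[:1].isupper() else 0.0

def low_level_score(mention: str, candidate: str) -> float:
    """Stand-in for the disambiguation policy: score a mention-entity pair."""
    return 1.0 if candidate.lower().startswith(mention.lower()) else 0.0

def candidate_entities(mention: str) -> List[str]:
    """Toy candidate generator; a real system would query the knowledge graph."""
    toy_kb = {"paris": ["Paris_(France)", "Paris_(Texas)"], "obama": ["Barack_Obama"]}
    return toy_kb.get(mention.lower(), [])

def link(tokens: List[str], max_span: int = 3) -> List[LinkedMention]:
    results, i = [], 0
    while i < len(tokens):
        # High-level decision: pick the best-scoring span starting at i, or skip.
        spans = [(j, high_level_score(tokens, i, j))
                 for j in range(i + 1, min(i + max_span, len(tokens)) + 1)]
        end, score = max(spans, key=lambda x: x[1])
        if score <= 0.0:
            i += 1
            continue
        mention = " ".join(tokens[i:end])
        # Low-level decision: disambiguate the detected mention (NIL if no candidates).
        cands = candidate_entities(mention)
        entity = max(cands, key=lambda c: low_level_score(mention, c)) if cands else None
        results.append(LinkedMention((i, end), entity))
        i = end  # control returns to the high-level policy after the linked span
    return results

if __name__ == "__main__":
    print(link("Obama visited Paris last week".split()))

In the paper itself, both decision levels are trained jointly with hierarchical reinforcement learning rather than scored by the hand-written heuristics used above.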
Pages: 4173-4181
Page count: 9
Related Papers
50 records in total
[41] MBJELEL: An End-to-End Knowledge Graph Entity Linking Method Applied to Civil Aviation Emergencies [J]. Qu, Jiayi; Wang, Jintao; Zhao, Zuyi; Chen, Xingguo. INTERNATIONAL JOURNAL OF COMPUTATIONAL INTELLIGENCE SYSTEMS, 2024, 17(01)
[42] End-to-end sensorimotor control problems of AUVs with deep reinforcement learning [J]. Wu, Hui; Song, Shiji; Hsu, Yachu; You, Keyou; Wu, Cheng. 2019 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2019: 5869-5874
[43] End-to-end RPA-like testing using reinforcement learning [J]. Paduraru, Ciprian; Cristea, Rares; Stefanescu, Alin. 2024 IEEE CONFERENCE ON SOFTWARE TESTING, VERIFICATION AND VALIDATION, ICST 2024, 2024: 419-429
[44] End-to-End Reinforcement Learning of Curative Curtailment with Partial Measurement Availability [J]. Wolf, Hinrikus; Boetcher, Luis; Bouchkati, Sarra; Lutat, Philipp; Breitung, Jens; Jung, Bastian; Moellemann, Tina; Todosijevic, Viktor; Schiefelbein-Lach, Jan; Pohl, Oliver; Ulbig, Andreas; Grohe, Martin. 2024 IEEE PES INNOVATIVE SMART GRID TECHNOLOGIES EUROPE, ISGT EUROPE, 2024
[45] Deep Reinforcement Learning for End-to-End Network Slicing: Challenges and Solutions [J]. Liu, Qiang; Choi, Nakjung; Han, Tao. IEEE NETWORK, 2023, 37(02): 222-228
[46] Improvement of End-to-end Automatic Driving Algorithm Based on Reinforcement Learning [J]. Tang, Jianlin; Li, Lingyun; Ai, Yunfeng; Zhao, Bin; Ren, Liangcai; Tian, Bin. 2019 CHINESE AUTOMATION CONGRESS (CAC2019), 2019: 5086-5091
[47] ShrinkML: End-to-End ASR Model Compression Using Reinforcement Learning [J]. Dudziak, Lukasz; Abdelfattah, Mohamed S.; Vipperla, Ravichander; Laskaridis, Stefanos; Lane, Nicholas D. INTERSPEECH 2019, 2019: 2235-2239
[48] End-to-End Streaming Video Temporal Action Segmentation With Reinforcement Learning [J]. Zhang, Jin-Rong; Wen, Wu-Jun; Liu, Sheng-Lan; Huang, Gao; Li, Yun-Heng; Li, Qi-Feng; Feng, Lin. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2025
[49] End-to-end Reinforcement Learning for Time-Optimal Quadcopter Flight [J]. Ferede, Robin; De Wagter, Christophe; Izzo, Dario; de Croon, Guido C. H. E. 2024 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, ICRA 2024, 2024: 6172-6177
[50] End-to-End Reinforcement Learning for Torque Based Variable Height Hopping [J]. Soni, Raghav; Harnack, Daniel; Isermann, Hauke; Fushimi, Sotaro; Kumar, Shivesh; Kirchner, Frank. 2023 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2023: 7531-7538