Towards Effective Author Name Disambiguation by Hybrid Attention

Cited: 1
Authors
Zhou, Qian [1 ]
Chen, Wei [1 ]
Zhao, Peng-Peng [1 ]
Liu, An [1 ]
Xu, Jia-Jie [1 ]
Qu, Jian-Feng [1 ]
Zhao, Lei [1 ]
Affiliation
[1] Soochow Univ, Sch Comp Sci & Technol, Suzhou 215006, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
author name disambiguation; multiple-feature information; hybrid attention; pruning strategy; structural information loss of vector space;
DOI
10.1007/s11390-023-2070-z
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812;
Abstract
Author name disambiguation (AND) is a central task in academic search, and it has received increasing attention recently with the growth in the number of authors and academic publications. To tackle the AND problem, existing studies have proposed various approaches based on different types of information, such as raw document features (e.g., co-authors, titles, and keywords), the fusion feature (e.g., a hybrid publication embedding built from multiple raw document features), local structural information (e.g., a publication's neighborhood on a graph), and global structural information (e.g., interactions between a node and the other nodes of a graph). However, no existing work takes all of the above information into account while fully exploiting the contribution of each raw document feature to the AND problem. To fill this gap, we propose a novel framework named EAND (Towards Effective Author Name Disambiguation by Hybrid Attention). Specifically, we design a novel feature extraction model, consisting of three hybrid attention mechanism layers, to extract key information from the global and local structural information generated from six similarity graphs, which are constructed based on different similarity coefficients, raw document features, and the fusion feature. Each hybrid attention mechanism layer contains three key modules: a local structural perception, a global structural perception, and a feature extractor. In addition, a mean absolute error term in the joint loss function is used to capture the structural information loss of the vector space. Experimental results on two real-world datasets demonstrate that EAND achieves superior performance, outperforming state-of-the-art methods by at least 2.74% in micro-F1 score and 3.31% in macro-F1 score.
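To make the abstract's architecture concrete, the sketch below illustrates one possible reading of a "hybrid attention mechanism layer" (local structural perception + global structural perception + feature extractor) and of a joint loss with a mean-absolute-error term for vector-space structural information loss. This is not the authors' released EAND implementation; the class and function names (HybridAttentionLayer, joint_loss), the use of masked self-attention for the local view, and the loss composition are all assumptions made for illustration only.

```python
# Minimal PyTorch sketch of a hybrid attention layer and joint loss,
# loosely following the abstract. All names and design choices here are
# hypothetical, not the EAND authors' actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class HybridAttentionLayer(nn.Module):
    """One layer: local structural perception, global structural perception,
    and a feature extractor that fuses the two views."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        # Local perception: attention restricted to neighbors on a similarity graph.
        self.local_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Global perception: unrestricted self-attention over all publications.
        self.global_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Feature extractor: fuses local and global views.
        self.extract = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (1, N, dim) publication embeddings from one similarity graph
        # adj: (N, N) boolean adjacency matrix of that graph
        allow = adj | torch.eye(adj.size(0), dtype=torch.bool, device=adj.device)
        local, _ = self.local_attn(x, x, x, attn_mask=~allow)  # neighbors + self only
        global_, _ = self.global_attn(x, x, x)                 # all pairs
        return self.extract(torch.cat([local, global_], dim=-1))


def joint_loss(pred_sim, true_sim, emb, target_emb, alpha: float = 0.5):
    """Hypothetical joint objective: a pairwise similarity term plus an MAE term
    penalizing structural information loss in the vector space."""
    sim_loss = F.binary_cross_entropy_with_logits(pred_sim, true_sim)
    structure_loss = F.l1_loss(emb, target_emb)  # mean absolute error
    return sim_loss + alpha * structure_loss


# Toy usage: 10 publications with 64-dimensional embeddings.
layer = HybridAttentionLayer(dim=64)
x = torch.randn(1, 10, 64)
adj = torch.rand(10, 10) > 0.7
out = layer(x, adj)  # (1, 10, 64)
```

In the paper's setting, one such layer would presumably be applied per similarity graph (six graphs are mentioned), with the outputs combined downstream; that combination step is not shown here.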
Pages: 929-950
Page count: 22