Towards Effective Author Name Disambiguation by Hybrid Attention

Cited by: 1
Authors
Zhou, Qian [1 ]
Chen, Wei [1 ]
Zhao, Peng-Peng [1 ]
Liu, An [1 ]
Xu, Jia-Jie [1 ]
Qu, Jian-Feng [1 ]
Zhao, Lei [1 ]
Affiliations
[1] Soochow Univ, Sch Comp Sci & Technol, Suzhou 215006, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
author name disambiguation; multiple-feature information; hybrid attention; pruning strategy; structural information loss of vector space;
DOI
10.1007/s11390-023-2070-z
Chinese Library Classification
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812;
Abstract
Author name disambiguation (AND) is a central task in academic search, and it has received increasing attention recently as the numbers of authors and academic publications grow. To tackle the AND problem, existing studies have proposed various approaches based on different types of information, such as raw document features (e.g., co-authors, titles, and keywords), the fusion feature (e.g., a hybrid publication embedding built from multiple raw document features), local structural information (e.g., a publication's neighborhood on a graph), and global structural information (e.g., interactions between a node and the others on a graph). However, no work so far has taken all of the above information into account while fully exploiting the contribution of each raw document feature to the AND problem. To fill this gap, we propose a novel framework named EAND (Towards Effective Author Name Disambiguation by Hybrid Attention). Specifically, we design a novel feature extraction model, consisting of three hybrid attention mechanism layers, to extract key information from the global and local structural information generated from six similarity graphs, which are constructed with different similarity coefficients, raw document features, and the fusion feature. Each hybrid attention mechanism layer contains three key modules: a local structural perception, a global structural perception, and a feature extractor. Additionally, a mean absolute error (MAE) term in the joint loss function introduces the structural information loss of the vector space. Experimental results on two real-world datasets demonstrate that EAND achieves superior performance, outperforming state-of-the-art methods by at least 2.74% in micro-F1 score and 3.31% in macro-F1 score.
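The abstract states that an MAE term in the joint loss models the structural information loss of the vector space, without giving its exact form. A minimal sketch of one plausible reading, assuming the MAE term measures the element-wise gap between learned publication embeddings and graph-derived structural representations (the function names, the weighting factor `lam`, and the pairing of embeddings are all hypothetical, not taken from the paper):

```python
import numpy as np

def mae_structural_loss(emb, graph_emb):
    """Hypothetical MAE term: mean absolute deviation between learned
    embeddings and structural representations derived from the
    similarity graphs."""
    return float(np.mean(np.abs(emb - graph_emb)))

def joint_loss(task_loss, emb, graph_emb, lam=0.5):
    # Joint objective: the task loss plus a weighted MAE structural term.
    return task_loss + lam * mae_structural_loss(emb, graph_emb)

# Toy example: two publications with 2-dimensional embeddings.
emb = np.array([[0.1, 0.9], [0.8, 0.2]])
graph_emb = np.array([[0.0, 1.0], [1.0, 0.0]])
print(joint_loss(0.3, emb, graph_emb, lam=0.5))
```

The weight `lam` trades off the main disambiguation objective against preserving the structure captured by the six similarity graphs; its value here is an illustrative placeholder.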
Pages: 929-950
Page count: 22