GraTO: Graph Neural Network Framework Tackling Over-smoothing with Neural Architecture Search

Cited by: 3
Authors
Feng, Xinshun [1 ]
Wan, Herun [1 ]
Feng, Shangbin [2 ]
Wang, Hongrui [1 ]
Zheng, Qinghua [1 ]
Zhou, Jun [3 ]
Luo, Minnan [1 ]
Affiliations
[1] Xi'an Jiaotong University, Xi'an, Shaanxi, China
[2] University of Washington, Seattle, WA, USA
[3] Ant Group, Xi'an, Shaanxi, China
Source
Proceedings of the 31st ACM International Conference on Information and Knowledge Management (CIKM 2022) | 2022
Funding
National Natural Science Foundation of China
Keywords
Neural Architecture Search; Over-smoothing; Neural Network;
DOI
10.1145/3511808.3557337
CLC classification number
TP [Automation Technology, Computer Technology]
Discipline classification code
0812
Abstract
Current Graph Neural Networks (GNNs) suffer from the over-smoothing problem, which results in indistinguishable node representations and low model performance as more GNN layers are stacked. Many methods have been proposed to tackle this problem in recent years. However, existing methods for tackling over-smoothing emphasize model performance while neglecting the over-smoothness of node representations. Additionally, these approaches are applied one at a time, and an overall framework that jointly leverages multiple solutions to the over-smoothing challenge is lacking. To solve these problems, we propose GraTO, a framework based on neural architecture search that automatically searches for GNN architectures. GraTO adopts a novel loss function to strike a balance between model performance and representation smoothness. In addition to existing methods, our search space also includes DropAttribute, a novel scheme for alleviating the over-smoothing challenge, to fully leverage diverse solutions. We conduct extensive experiments on six real-world datasets to evaluate GraTO, demonstrating that GraTO outperforms baselines on over-smoothing metrics and achieves competitive accuracy. GraTO is especially effective and robust as the number of GNN layers increases. Further experiments bear out the quality of the node representations learned with GraTO and the effectiveness of the model architecture. We make the code of GraTO available on GitHub (https://github.com/fxsxjtu/GraTO).
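The over-smoothing phenomenon the abstract refers to can be reproduced in a few lines. The sketch below is purely illustrative and is not GraTO's implementation: it builds a random graph (an assumption made for demonstration), applies repeated mean-aggregation propagation in place of trained GNN layers, and reports a MAD-style smoothness score (mean average cosine distance between node representations), which shrinks toward zero as the representations collapse.

```python
import numpy as np

def mean_average_distance(H):
    """MAD-style smoothness score: mean pairwise cosine distance
    between node representations (rows of H). Values near 0 mean
    the representations have become nearly indistinguishable."""
    Hn = H / (np.linalg.norm(H, axis=1, keepdims=True) + 1e-12)
    cos = Hn @ Hn.T
    off_diag = ~np.eye(H.shape[0], dtype=bool)  # exclude self-distances
    return float((1.0 - cos[off_diag]).mean())

rng = np.random.default_rng(0)
n, d = 50, 16
A = (rng.random((n, n)) < 0.1).astype(float)    # random graph (illustrative)
A = np.maximum(A, A.T)                          # make it undirected
A = A + np.eye(n)                               # add self-loops
P = A / A.sum(axis=1, keepdims=True)            # mean-aggregation propagation

H = rng.normal(size=(n, d))                     # initial node features
for k in [1, 2, 4, 8, 16, 32]:
    Hk = np.linalg.matrix_power(P, k) @ H       # k propagation steps
    print(f"{k:>2} layers: MAD = {mean_average_distance(Hk):.4f}")
```

On a connected graph, repeated row-normalized propagation converges to a rank-one matrix, so the MAD score approaches zero with depth; this collapse is what over-smoothing metrics quantify and what GraTO's loss function and search space are designed to counteract.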
Pages: 520-529
Number of pages: 10