Neural Acceleration of Graph Based Utility Functions for Sparse Matrices

Cited by: 0
Authors
Booth, Joshua Dennis [1 ]
Bolet, Gregory S. [2 ]
Affiliations
[1] Univ Alabama Huntsville, Dept Comp Sci, Huntsville, AL 35899 USA
[2] Virginia Tech, Blacksburg, VA 24061 USA
Funding
National Science Foundation (USA)
Keywords
Sparse matrices; approximation algorithms; hardware; computational modeling; partitioning algorithms; runtime; artificial neural networks; approximate computing; high performance computing; linear algebra; machine learning algorithms; neural networks
DOI
10.1109/ACCESS.2023.3262453
CLC number
TP [Automation Technology, Computer Technology]
Discipline code
0812
Abstract
Many graph-based algorithms in high-performance computing (HPC) rely on approximate solutions because the exact algorithms are computationally expensive or inherently serial. Neural acceleration, i.e., speeding up approximate computations with artificial neural networks, is relatively new and has not yet focused on HPC graph-based algorithms. In this paper, we propose a starting point for applying neural-acceleration models to graph-based HPC algorithms, building on an understanding of the connectivity computational pattern, recursive neural networks, and graph neural networks. We demonstrate these techniques on the utility functions for sparse matrix ordering and fill-in (i.e., zero elements that become nonzero during factorization) calculations. Sparse matrix ordering is commonly used to balance load, improve memory reuse, and reduce the computational and memory costs of direct sparse linear solvers. These utility functions are ideal for demonstration because they comprise a number of different graph-based subproblems, and thus show the usefulness of our method over a wide range of cases. We show that we can accurately approximate the best ordering and the nonzero count of the sparse factor while speeding up the calculation by as much as 30.3x over the traditional serial method.
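For context on what is being accelerated: the fill-in utility function has an exact, serial baseline that counts the nonzeros of the sparse factor for a given ordering. The sketch below is a minimal illustration of that baseline, assuming SciPy; the function fill_in_count and the 2-D grid test matrix are hypothetical stand-ins chosen for this example, not the authors' implementation. It simulates symbolic elimination on the matrix graph and compares the fill-in under two orderings, which is the kind of exact calculation the paper's neural models approximate at up to 30.3x lower cost.

from scipy.sparse import diags, identity, kron
from scipy.sparse.csgraph import reverse_cuthill_mckee

def fill_in_count(A):
    """nnz of the Cholesky factor pattern of a symmetric sparse matrix A,
    found by naively simulating symbolic elimination on the matrix graph.
    (Production codes use elimination trees; this is illustrative only.)"""
    n = A.shape[0]
    coo = A.tocoo()
    adj = [set() for _ in range(n)]        # graph of the off-diagonal pattern
    for i, j in zip(coo.row, coo.col):
        if i != j:
            adj[i].add(j)
            adj[j].add(i)
    nnz = n                                # diagonal entries of the factor
    for k in range(n):                     # eliminate vertices in order
        nbrs = {v for v in adj[k] if v > k}
        nnz += len(nbrs)                   # below-diagonal entries in column k
        for u in nbrs:                     # fill: connect remaining neighbors
            adj[u] |= nbrs - {u}
    return nnz

# Usage: fill-in of a 2-D grid Laplacian under two orderings.
m = 20
T = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m))
A = (kron(identity(m), T) + kron(T, identity(m))).tocsr()  # 400x400 pattern
perm = reverse_cuthill_mckee(A, symmetric_mode=True)
print("natural ordering:", fill_in_count(A))
print("RCM ordering:    ", fill_in_count(A[perm, :][:, perm]))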
Pages: 31619-31635
Page count: 17