Iterative Sparse Matrix-Vector Multiplication on In-Memory Cluster Computing Accelerated by GPUs for Big Data

Cited: 0
Authors
Peng, Jiwu [1 ,2 ]
Xiao, Zheng [1 ,2 ]
Chen, Cen [1 ,2 ]
Yang, Wangdong [1 ,2 ]
Affiliations
[1] Hunan Univ, Coll Informat Sci & Engn, Changsha 410082, Hunan, Peoples R China
[2] Natl Supercomp Ctr Changsha, Changsha 410082, Hunan, Peoples R China
Source
2016 12TH INTERNATIONAL CONFERENCE ON NATURAL COMPUTATION, FUZZY SYSTEMS AND KNOWLEDGE DISCOVERY (ICNC-FSKD), 2016
Keywords
Iterative SpMV; Flink; GPU; In-memory Computing; BigData;
DOI
Not available
CLC number
TP301 [Theory, Methods];
Discipline code
081202 ;
Abstract
Iterative SpMV (ISpMV) is a key operation in many graph-based data mining and machine learning algorithms. With the growth of big data, matrices can be so large, perhaps billion-scale, that SpMV cannot be performed on a single computer. It is therefore a challenging issue to implement and optimize SpMV for large-scale data sets. In this paper, we use in-memory heterogeneous CPU-GPU cluster computing platforms (IMHCPs) to efficiently solve billion-scale SpMV problems. A dedicated and efficient hierarchical partitioning strategy for the sparse matrix and the vector is proposed; the strategy partitions the sparse matrix both among workers in the cluster and among the GPUs within one worker. Moreover, the performance of the IMHCPs-based SpMV is evaluated in terms of computation efficiency and scalability.
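The iterative, row-partitioned SpMV described in the abstract can be illustrated with a minimal single-machine simulation. This is a sketch under stated assumptions: the paper does not give its partitioning algorithm here, so the contiguous row-block split, the function names, and the concatenation step standing in for the cluster's gather phase are all illustrative, not the authors' actual Flink/GPU implementation.

```python
import numpy as np

def partition_rows(n_rows, n_workers):
    """Split row indices into contiguous blocks, one block per worker
    (a simple stand-in for the paper's hierarchical partitioning)."""
    bounds = np.linspace(0, n_rows, n_workers + 1, dtype=int)
    return [(bounds[i], bounds[i + 1]) for i in range(n_workers)]

def spmv_csr_block(indptr, indices, data, x, row_range):
    """Local SpMV over one row block of a CSR matrix: each worker only
    computes the slice of y corresponding to the rows it owns."""
    lo, hi = row_range
    y = np.zeros(hi - lo)
    for r in range(lo, hi):
        start, end = indptr[r], indptr[r + 1]
        y[r - lo] = data[start:end] @ x[indices[start:end]]
    return y

def iterative_spmv(indptr, indices, data, x, n_workers=2, iters=3):
    """Simulate distributed iterative SpMV (x <- A @ x, repeated):
    after each iteration the partial results are concatenated, which
    mimics the all-gather a real cluster would perform so that every
    worker sees the full updated vector for the next iteration."""
    n = len(indptr) - 1
    blocks = partition_rows(n, n_workers)
    for _ in range(iters):
        x = np.concatenate(
            [spmv_csr_block(indptr, indices, data, x, b) for b in blocks]
        )
    return x
```

On a real IMHCP each row block would be split a second time among the GPUs inside a worker, with only the vector exchanged between iterations, since the matrix partitions stay resident in memory.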
Pages: 1454 - 1460
Page count: 7
Related papers
50 in total
  • [31] Auto-Tuning of Thread Assignment for Matrix-Vector Multiplication on GPUs
    Wang, Jinwei
    Ma, Xirong
    Zhu, Yuanping
    Sun, Jizhou
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2013, E96D (11): : 2319 - 2326
  • [32] Fast Implementation of General Matrix-Vector Multiplication (GEMV) on Kepler GPUs
    Mukunoki, Daichi
    Imamura, Toshiyuki
    Takahashi, Daisuke
    23RD EUROMICRO INTERNATIONAL CONFERENCE ON PARALLEL, DISTRIBUTED, AND NETWORK-BASED PROCESSING (PDP 2015), 2015, : 642 - 650
  • [33] A segment-based sparse matrix-vector multiplication on CUDA
    Feng, Xiaowen
    Jin, Hai
    Zheng, Ran
    Shao, Zhiyuan
    Zhu, Lei
    CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2014, 26 (01) : 271 - 286
  • [34] On Implementing Sparse Matrix Multi-Vector Multiplication on GPUs
    Abu-Sufah, Walid
    Ahmad, Khalid
    2014 IEEE INTERNATIONAL CONFERENCE ON HIGH PERFORMANCE COMPUTING AND COMMUNICATIONS, 2014 IEEE 6TH INTL SYMP ON CYBERSPACE SAFETY AND SECURITY, 2014 IEEE 11TH INTL CONF ON EMBEDDED SOFTWARE AND SYST (HPCC,CSS,ICESS), 2014, : 1117 - 1124
  • [35] An efficient SIMD compression format for sparse matrix-vector multiplication
    Chen, Xinhai
    Xie, Peizhen
    Chi, Lihua
    Liu, Jie
    Gong, Chunye
    CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2018, 30 (23)
  • [36] Merge-based Parallel Sparse Matrix-Vector Multiplication
    Merrill, Duane
    Garland, Michael
    SC '16: PROCEEDINGS OF THE INTERNATIONAL CONFERENCE FOR HIGH PERFORMANCE COMPUTING, NETWORKING, STORAGE AND ANALYSIS, 2016, : 678 - 689
  • [37] Recursive Hybrid Compression for Sparse Matrix-Vector Multiplication on GPU
    Zhao, Zhixiang
    Wu, Yanxia
    Zhang, Guoyin
    Yang, Yiqing
    Hong, Ruize
    CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2025, 37 (4-5)
  • [38] Model-driven Autotuning of Sparse Matrix-Vector Multiply on GPUs
    Choi, Jee W.
    Singh, Amik
    Vuduc, Richard W.
    ACM SIGPLAN NOTICES, 2010, 45 (05) : 115 - 125
  • [39] Model-driven Autotuning of Sparse Matrix-Vector Multiply on GPUs
    Choi, Jee W.
    Singh, Amik
    Vuduc, Richard W.
    PPOPP 2010: PROCEEDINGS OF THE 2010 ACM SIGPLAN SYMPOSIUM ON PRINCIPLES AND PRACTICE OF PARALLEL PROGRAMMING, 2010, : 115 - 125
  • [40] Heterogeneous sparse matrix-vector multiplication via compressed sparse row format
    Lane, Phillip Allen
    Booth, Joshua Dennis
    PARALLEL COMPUTING, 2023, 115