Iterative Sparse Matrix-Vector Multiplication on In-Memory Cluster Computing Accelerated by GPUs for Big Data

Times Cited: 0
Authors
Peng, Jiwu [1,2]
Xiao, Zheng [1,2]
Chen, Cen [1,2]
Yang, Wangdong [1,2]
Affiliations
[1] Hunan Univ, Coll Informat Sci & Engn, Changsha 410082, Hunan, Peoples R China
[2] Natl Supercomp Ctr Changsha, Changsha 410082, Hunan, Peoples R China
Source
2016 12TH INTERNATIONAL CONFERENCE ON NATURAL COMPUTATION, FUZZY SYSTEMS AND KNOWLEDGE DISCOVERY (ICNC-FSKD), 2016
Keywords
Iterative SpMV; Flink; GPU; In-memory Computing; Big Data
DOI
Not available
Chinese Library Classification
TP301 [Theory, Methods]
Discipline Classification Code
081202
Abstract
Iterative SpMV (ISpMV) is a key operation in many graph-based data mining and machine learning algorithms. With the growth of big data, matrices can become so large, possibly billion-scale, that SpMV cannot be performed on a single machine. Implementing and optimizing SpMV for such large-scale data sets is therefore a challenging problem. In this paper, we use an in-memory heterogeneous CPU-GPU cluster computing platform (IMHCPs) to solve billion-scale SpMV problems efficiently. A dedicated and efficient hierarchical partitioning strategy for the sparse matrix and the vector is proposed; it partitions the sparse matrix both among the workers of the cluster and among the GPUs within each worker. Moreover, the performance of the IMHCPs-based SpMV is evaluated in terms of computational efficiency and scalability.
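To make the two-level partitioning concrete, below is a minimal Python/NumPy sketch of row-block partitioning for iterative SpMV. It is an illustration only: the parameters num_workers and gpus_per_worker, the helper partition_rows, and the power-iteration-style normalization are assumptions made for the example, not details taken from the paper, whose actual system runs on Flink with GPU kernels rather than SciPy.

# A minimal sketch of two-level row-block partitioning for iterative SpMV.
# Hypothetical parameters: num_workers (cluster workers), gpus_per_worker
# (GPUs inside one worker). The real IMHCPs implementation is not shown here.
import numpy as np
import scipy.sparse as sp

def partition_rows(matrix, num_parts):
    """Split a CSR matrix into num_parts contiguous row blocks."""
    bounds = np.linspace(0, matrix.shape[0], num_parts + 1, dtype=int)
    return [matrix[bounds[i]:bounds[i + 1]] for i in range(num_parts)]

def iterative_spmv(matrix, x, num_workers=4, gpus_per_worker=2, iterations=10):
    # Level 1: distribute contiguous row blocks among cluster workers.
    worker_blocks = partition_rows(matrix, num_workers)
    for _ in range(iterations):
        partial_results = []
        for block in worker_blocks:
            # Level 2: within a worker, split its block among the GPUs.
            gpu_blocks = partition_rows(block, gpus_per_worker)
            # Each GPU multiplies its sub-block by the (replicated) vector.
            partial_results.extend(gpu_block @ x for gpu_block in gpu_blocks)
        # Gather the partial results in row order and normalize, as in
        # power-iteration-style graph algorithms (e.g. PageRank).
        x = np.concatenate(partial_results)
        x = x / np.linalg.norm(x)
    return x

# Usage: a random square sparse matrix and an initial dense vector.
A = sp.random(1000, 1000, density=0.01, format="csr", random_state=0)
v = np.ones(A.shape[0])
result = iterative_spmv(A, v)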
Pages: 1454-1460
Number of Pages: 7