Functional Parallelism with Shared Memory and Distributed Memory Approaches

Cited by: 0
Authors
Kandegedara, Mahesh [1 ]
Ranasinghe, D. N. [1 ]
Affiliations
[1] Univ Colombo, Sch Comp, Colombo, Sri Lanka
Source
IEEE REGION 10 COLLOQUIUM AND THIRD INTERNATIONAL CONFERENCE ON INDUSTRIAL AND INFORMATION SYSTEMS, VOLS 1 AND 2 | 2008
Keywords
functional; matrix multiplication; multi-threaded; multi-core; multi-processor; MPI; OpenMP; Erlang;
DOI
Not available
CLC Classification Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812 ;
Abstract
Recent enhancements in processor architectures have given rise to multi-threaded, multi-core and multi-processor based clusters for high performance computing. To exploit the variety of parallelism available in these current and future computer systems, programmers must use appropriate parallel programming approaches. Although conventional programming models exist for parallel programming, none of them has sufficiently addressed the emerging processor technologies. This paper evaluates how functional programming can be used with distributed memory and shared memory languages to exploit the scalability, heterogeneity and flexibility of clusters in solving the recursive Strassen's matrix multiplication problem. The results show that the functional language Erlang is more efficient than the virtual shared memory approach and can be made more scalable than distributed memory programming approaches when combined with OpenMP.
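The abstract's benchmark is the recursive Strassen algorithm, which replaces the eight block multiplications of the naive divide-and-conquer scheme with seven. A minimal illustrative sketch follows; the paper's actual implementations use Erlang, MPI, and OpenMP, which parallelize the seven independent recursive products, while this sequential Python version (for square matrices whose size is a power of two) only shows the recursion structure itself.

```python
# Illustrative sequential sketch of Strassen's recursive matrix
# multiplication. Matrices are lists of lists; sizes must be 2^k x 2^k.

def split(m):
    """Split a matrix into four equal quadrants (m11, m12, m21, m22)."""
    n = len(m) // 2
    return ([row[:n] for row in m[:n]], [row[n:] for row in m[:n]],
            [row[:n] for row in m[n:]], [row[n:] for row in m[n:]])

def add(a, b):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def sub(a, b):
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def strassen(a, b):
    n = len(a)
    if n == 1:                      # base case: scalar product
        return [[a[0][0] * b[0][0]]]
    a11, a12, a21, a22 = split(a)
    b11, b12, b21, b22 = split(b)
    # The seven Strassen products; each is an independent recursive
    # call, which is what makes the algorithm natural to parallelize.
    m1 = strassen(add(a11, a22), add(b11, b22))
    m2 = strassen(add(a21, a22), b11)
    m3 = strassen(a11, sub(b12, b22))
    m4 = strassen(a22, sub(b21, b11))
    m5 = strassen(add(a11, a12), b22)
    m6 = strassen(sub(a21, a11), add(b11, b12))
    m7 = strassen(sub(a12, a22), add(b21, b22))
    # Recombine the products into the quadrants of the result.
    c11 = add(sub(add(m1, m4), m5), m7)
    c12 = add(m3, m5)
    c21 = add(m2, m4)
    c22 = add(sub(add(m1, m3), m2), m6)
    top = [r1 + r2 for r1, r2 in zip(c11, c12)]
    bottom = [r1 + r2 for r1, r2 in zip(c21, c22)]
    return top + bottom
```

In a parallel setting, the key observation is that m1 through m7 have no data dependencies on one another, so they can be dispatched to separate Erlang processes, MPI ranks, or OpenMP tasks and combined afterwards.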
Pages: 496 - 501
Page count: 6
Related Papers
50 items total (showing [31]-[40])
  • [31] The block distributed memory model
    JaJa, JF
    Ryu, KW
    IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 1996, 7 (08) : 830 - 840
  • [32] High order accurate simulation of compressible flows on GPU clusters over Software Distributed Shared Memory
    Karantasis, Konstantinos I.
    Polychronopoulos, Eleftherios D.
    Ekaterinaris, John A.
    COMPUTERS & FLUIDS, 2014, 93 : 18 - 29
  • [33] Shared-memory Graph Truss Decomposition
    Kabir, Humayun
    Madduri, Kamesh
    2017 IEEE 24TH INTERNATIONAL CONFERENCE ON HIGH PERFORMANCE COMPUTING (HIPC), 2017, : 13 - 22
  • [34] A parallel shared memory simulator for command and control
    Jaillet, C
    Krajecki, M
    Fugère, J
    16TH ANNUAL INTERNATIONAL SYMPOSIUM ON HIGH PERFORMANCE COMPUTING SYSTEMS AND APPLICATIONS, PROCEEDINGS, 2002, : 237 - 242
  • [35] Parallelization and optimization of Mfold on shared memory system
    Miao, Qiankun
    Sun, Guangzhong
    Shan, Jiulong
    Chen, Guoliang
    PARALLEL COMPUTING, 2010, 36 (09) : 487 - 494
  • [36] COMIC: A Coherent Shared Memory Interface for Cell BE
    Lee, Jaejin
    Seo, Sangmin
    Kim, Chihun
    Kim, Junghyun
    Chun, Posung
    Sura, Zehra
    Kim, Jungwon
    Han, SangYong
    PACT'08: PROCEEDINGS OF THE SEVENTEENTH INTERNATIONAL CONFERENCE ON PARALLEL ARCHITECTURES AND COMPILATION TECHNIQUES, 2008, : 303 - 314
  • [37] Shared Memory and GPU Parallelization of an Operational Atmospheric Transport and Dispersion Application
    Yu, Fan
    Strazdins, Peter E.
    Henrichs, Joerg
    Pugh, Tim F.
    2019 IEEE INTERNATIONAL PARALLEL AND DISTRIBUTED PROCESSING SYMPOSIUM WORKSHOPS (IPDPSW), 2019, : 729 - 738
  • [38] On the Difference Between Shared Memory and Shared Address Space in HPC Communication
    Hori, Atsushi
    Ouyang, Kaiming
    Gerofi, Balazs
    Ishikawa, Yutaka
    SUPERCOMPUTING FRONTIERS, SCFA 2022, 2022, 13214 : 59 - 78
  • [39] Distributed-Memory Parallel JointNMF
    Eswar, Srinivas
    Cobb, Benjamin
    Hayashi, Koby
    Kannan, Ramakrishnan
    Ballard, Grey
    Vuduc, Richard
    Park, Haesun
    PROCEEDINGS OF THE 37TH INTERNATIONAL CONFERENCE ON SUPERCOMPUTING, ACM ICS 2023, 2023, : 301 - 312
  • [40] OpenMP compiler for distributed memory architectures
    Wang Jue
    Hu ChangJun
    Zhang JiLin
    Li JianJiang
    SCIENCE CHINA-INFORMATION SCIENCES, 2010, 53 (05) : 932 - 944