Functional Parallelism with Shared Memory and Distributed Memory Approaches

Cited by: 0
Authors
Kandegedara, Mahesh [1 ]
Ranasinghe, D. N. [1 ]
Affiliation
[1] Univ Colombo, Sch Comp, Colombo, Sri Lanka
Source
IEEE REGION 10 COLLOQUIUM AND THIRD INTERNATIONAL CONFERENCE ON INDUSTRIAL AND INFORMATION SYSTEMS, VOLS 1 AND 2 | 2008
Keywords
functional; matrix multiplication; multi-threaded; multi-core; multi-processor; MPI; OpenMP; Erlang;
DOI
Not available
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Recent enhancements in processor architectures have given rise to multi-threaded, multi-core and multi-processor based clusters for high-performance computing. To exploit the variety of parallelism available in these current and future computer systems, programmers must use appropriate parallel programming approaches. Although conventional models for parallel programming exist, none of them has sufficiently addressed the emerging processor technologies. This paper evaluates how functional programming can be combined with distributed memory and shared memory languages to exploit the scalability, heterogeneity and flexibility of clusters in solving the recursive Strassen matrix multiplication problem. The results show that the functional language Erlang is more efficient than the virtual shared memory approach and, when combined with OpenMP, can be made more scalable than distributed memory programming approaches.
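As an illustration of the parallelism the abstract refers to, the sketch below (not taken from the paper, which uses Erlang, MPI and OpenMP) shows how Strassen's recursion decomposes a block matrix product into seven independent sub-products M1..M7 that a functional runtime can evaluate concurrently. It is a minimal Haskell example built on the parallel package's Control.Parallel.Strategies; the list-of-lists matrix type, power-of-two dimensions and the recursion cutoff are simplifying assumptions made here for illustration.

-- Minimal illustrative sketch: Strassen's recursive matrix multiplication
-- with the seven independent block products evaluated as parallel tasks.
import Control.Parallel.Strategies (parList, rdeepseq, using)
import Data.List (transpose)

type Matrix = [[Int]]

add, sub :: Matrix -> Matrix -> Matrix
add = zipWith (zipWith (+))
sub = zipWith (zipWith (-))

-- Conventional multiplication, used once the blocks are small enough.
mulNaive :: Matrix -> Matrix -> Matrix
mulNaive a b = [[sum (zipWith (*) row col) | col <- transpose b] | row <- a]

-- Split a 2n x 2n matrix into its four n x n quadrants.
quadrants :: Matrix -> (Matrix, Matrix, Matrix, Matrix)
quadrants m = (map (take n) top, map (drop n) top, map (take n) bot, map (drop n) bot)
  where n = length m `div` 2
        (top, bot) = splitAt n m

-- Reassemble four quadrants into one matrix.
join4 :: Matrix -> Matrix -> Matrix -> Matrix -> Matrix
join4 c11 c12 c21 c22 = zipWith (++) c11 c12 ++ zipWith (++) c21 c22

-- Strassen's algorithm; dimensions are assumed to be powers of two.
strassen :: Matrix -> Matrix -> Matrix
strassen a b
  | length a <= 2 = mulNaive a b        -- recursion cutoff
  | otherwise     = join4 c11 c12 c21 c22
  where
    (a11, a12, a21, a22) = quadrants a
    (b11, b12, b21, b22) = quadrants b
    -- The seven Strassen products are mutually independent,
    -- so they can be sparked as parallel tasks.
    [m1, m2, m3, m4, m5, m6, m7] =
      [ strassen (add a11 a22) (add b11 b22)
      , strassen (add a21 a22) b11
      , strassen a11           (sub b12 b22)
      , strassen a22           (sub b21 b11)
      , strassen (add a11 a12) b22
      , strassen (sub a21 a11) (add b11 b12)
      , strassen (sub a12 a22) (add b21 b22)
      ] `using` parList rdeepseq
    c11 = add (sub (add m1 m4) m5) m7
    c12 = add m3 m5
    c21 = add m2 m4
    c22 = add (add (sub m1 m2) m3) m6

main :: IO ()
main = print (strassen [[1,2,0,0],[3,4,0,0],[0,0,1,0],[0,0,0,1]]
                       [[1,0,0,0],[0,1,0,0],[0,0,5,6],[0,0,7,8]])

The same seven-way task decomposition underlies the Erlang version evaluated in the paper, where each sub-product can be spawned as a separate process and the results gathered by message passing.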
Pages: 496-501
Number of pages: 6