Energy Efficient Runtime Framework for Exascale Systems

Cited: 0
Authors
Mhedheb, Yousri [1 ]
Streit, Achim [1 ]
Affiliations
[1] Karlsruhe Inst Technol, Steinbuch Ctr Comp, Karlsruhe, Germany
Source
HIGH PERFORMANCE COMPUTING, ISC HIGH PERFORMANCE 2016 INTERNATIONAL WORKSHOPS | 2016 / Vol. 9945
Keywords
Exascale; Energy efficiency; Data locality; PGAS; Runtime system; MPI;
DOI
10.1007/978-3-319-46079-6_3
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Building an Exascale computer that solves scientific problems three orders of magnitude faster than current Petascale systems is harder than simply making it bigger. On the path to the first Exascale computer, energy consumption has emerged as a crucial factor. Every component will have to change to create an Exascale system capable of a quintillion (10^18) calculations per second. To run efficiently on such huge systems and to exploit all of their computational power, software and the underlying algorithms must be rewritten. While many compute-intensive applications are designed to use the Message Passing Interface (MPI) with its two-sided communication semantics, the Partitioned Global Address Space (PGAS) model treats a distributed system as if its memory were shared by providing an abstraction of a global address space. Data locality and communication can be optimized through the one-sided communication offered by PGAS. In this paper we present an energy-aware, PGAS-based runtime framework that uses MPI as its underlying communication layer.
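The core PGAS idea the abstract describes, a global address space partitioned across ranks, with one-sided put/get that resolve a global index to an owner and a local offset, can be illustrated with a toy in-process sketch. This is purely illustrative and not the authors' framework; real PGAS runtimes perform these accesses over distributed memory, e.g. via MPI one-sided communication (MPI_Put/MPI_Get).

```python
# Toy illustration of the PGAS abstraction: a global address space
# block-partitioned across "ranks", with one-sided put/get. Here the
# partitions are plain in-process lists; a real runtime would target
# remote memory without involving the owner's CPU.

class ToyPGAS:
    def __init__(self, global_size, num_ranks):
        self.global_size = global_size
        # Block distribution: each rank owns one contiguous slice.
        self.block = -(-global_size // num_ranks)  # ceiling division
        self.partitions = [
            [0] * (min(global_size, (r + 1) * self.block) - r * self.block)
            for r in range(num_ranks)
        ]

    def owner(self, gidx):
        """Map a global index to (owner rank, local offset)."""
        if not 0 <= gidx < self.global_size:
            raise IndexError(gidx)
        return gidx // self.block, gidx % self.block

    def put(self, gidx, value):
        """One-sided write: no matching receive on the owner's side."""
        rank, off = self.owner(gidx)
        self.partitions[rank][off] = value

    def get(self, gidx):
        """One-sided read from whichever rank owns the element."""
        rank, off = self.owner(gidx)
        return self.partitions[rank][off]


space = ToyPGAS(global_size=10, num_ranks=4)
space.put(7, 42)          # lands in rank 2's partition, offset 1
print(space.owner(7))     # -> (2, 1)
print(space.get(7))       # -> 42
```

Because every access is resolved through `owner()`, a runtime built this way knows at each put/get whether data is local or remote, which is exactly the locality information an energy-aware scheduler can exploit.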
Pages: 32-44
Page count: 13