Viable architectures for high-performance computing

Cited by: 2
Authors
Ziavras, SG [1]
Wang, Q
Papathanasiou, P
Affiliations
[1] New Jersey Inst Technol, Dept Elect & Comp Engn, Newark, NJ 07102 USA
[2] New Jersey Inst Technol, Dept Comp Sci, Newark, NJ 07102 USA
[3] Dataline Comp Inst, Piraeus 18900, Greece
Source
COMPUTER JOURNAL | 2003, Vol. 46, No. 1
DOI
10.1093/comjnl/46.1.36
Chinese Library Classification (CLC)
TP3 [computing technology, computer technology];
Discipline classification code
0812;
Abstract
Existing interprocessor connection networks are often plagued by poor topological properties that result in large memory latencies for distributed shared-memory (DSM) computers or multicomputers. On the other hand, scalable networks with very good topological properties are often impossible to build because of their prohibitively high very large scale integration (VLSI) complexity (e.g. wiring complexity). The generalized hypercube (GH) is such a network. The GH supports full connectivity of all of its nodes in each dimension and is characterized by outstanding topological properties. In addition, low-dimensional GHs have very large bisection widths. We present here the class of highly overlapping windows (HOW) networks, which achieve lower complexity than GHs while offering comparable performance and better scalability. HOWs are obtained from GHs by uniformly removing edges to produce feasible systems of lower wiring complexity. The resulting systems contain numerous highly overlapping GHs of smaller size. The GH, the binary hypercube and the mesh all belong to this new class of interconnections. In practical cases, HOWs have higher bisection width than tori with similar node and channel costs. HOWs also exhibit a very large degree of fault tolerance. This paper focuses on 2-D HOW systems. We analyze the hardware cost of HOWs, present graph embeddings and communication algorithms for HOWs, carry out performance comparisons with binary hypercubes and GHs, and simulate HOWs under heavy communication loads. Our results show the suitability of HOWs for very-high-performance computing.
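To make the topology concrete, the Python sketch below builds the adjacency of a 2-D generalized hypercube (two nodes are connected whenever their coordinate vectors differ in exactly one dimension, as the abstract describes) and then applies a simple, uniform edge-removal rule to show how a pruned GH can interpolate between a mesh (w = 1) and the full GH (w = radix - 1). The window parameter w and the distance-based pruning rule are assumptions chosen for illustration only; they are not the paper's exact HOW construction.

from itertools import product

def gh_edges(radices):
    """Generalized hypercube: nodes are coordinate vectors; two nodes are
    adjacent iff they differ in exactly one dimension (by any amount)."""
    nodes = list(product(*(range(r) for r in radices)))
    edges = set()
    for u in nodes:
        for v in nodes:
            if u < v:
                diff = [d for d in range(len(radices)) if u[d] != v[d]]
                if len(diff) == 1:
                    edges.add((u, v))
    return edges

def window_pruned_edges(radices, w):
    """Hypothetical uniform pruning (not the paper's HOW definition):
    keep a GH edge only if the differing coordinates are at most w apart.
    w = 1 yields a mesh-like network; w = radix - 1 restores the full GH."""
    return {(u, v) for (u, v) in gh_edges(radices)
            if all(abs(a - b) <= w for a, b in zip(u, v))}

if __name__ == "__main__":
    radices = (8, 8)  # 2-D system with 64 nodes
    full = gh_edges(radices)
    for w in (1, 3, 7):
        pruned = window_pruned_edges(radices, w)
        print("w=%d: %d edges (full GH has %d)" % (w, len(pruned), len(full)))

For radices (8, 8) this toy sketch reports 112, 288 and 448 edges for w = 1, 3 and 7 respectively, illustrating how such a pruning parameter trades wiring complexity against connectivity.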
Pages: 36-54
Number of pages: 19
Related papers
50 records in total
  • [1] High-Performance Computing System Architectures: Design and Performance
    Bagherzadeh, Nader
    Sarbazi-Azad, Hamid
    IET COMPUTERS AND DIGITAL TECHNIQUES, 2012, 6 (05): 257 - 258
  • [2] Configurable computing: The catalyst for high-performance architectures
    Ebeling, C
    Cronquist, DC
    Franklin, P
    IEEE INTERNATIONAL CONFERENCE ON APPLICATION-SPECIFIC SYSTEMS, ARCHITECTURES AND PROCESSORS, PROCEEDINGS, 1997: 364 - 372
  • [3] Metalanguage for High-Performance Computing on Hybrid Architectures
    Gradvohl, A. L. S.
    IEEE LATIN AMERICA TRANSACTIONS, 2014, 12 (06) : 1162 - 1168
  • [4] DISK SYSTEM ARCHITECTURES FOR HIGH-PERFORMANCE COMPUTING
    KATZ, RH
    GIBSON, GA
    PATTERSON, DA
    PROCEEDINGS OF THE IEEE, 1989, 77 (12) : 1842 - 1858
  • [5] High-Performance Computing Applications on Novel Architectures
    Kindratenko, Volodymyr
    Thiruvathukal, George K.
    Gottlieb, Steven
    COMPUTING IN SCIENCE & ENGINEERING, 2008, 10 (06) : 13 - 15
  • [6] Architectures of high-performance VLSI for custom computing systems
    Tarasov, I. E.
    INTERNATIONAL CONFERENCE: INFORMATION TECHNOLOGIES IN BUSINESS AND INDUSTRY, 2019, 1333
  • [7] Introduction to special issue on heterogeneous architectures and high-performance computing
    Carretero, Jesus
    Garcia-Carballeira, Felix
    COMPUTERS & ELECTRICAL ENGINEERING, 2013, 39 (08) : 2551 - 2552
  • [8] Newmark local time stepping on high-performance computing architectures
    Rietmann, Max
    Grote, Marcus
    Peter, Daniel
    Schenk, Olaf
    JOURNAL OF COMPUTATIONAL PHYSICS, 2017, 334 : 308 - 326
  • [9] High performance computing architectures
    Yang, Mei
    Jiang, Yingtao
    Wang, Ling
    Yang, Yulu
    COMPUTERS & ELECTRICAL ENGINEERING, 2009, 35 (06) : 815 - 816
  • [10] Investigation of various mesh architectures with broadcast buses for high-performance computing
    Ziavras, SG
    VLSI DESIGN, 1999, 9 (01) : 29 - 54