Hierarchical Collective I/O Scheduling for High-Performance Computing

Cited by: 9
Authors
Liu, Jialin [1 ]
Zhuang, Yu [1 ]
Chen, Yong [1 ]
Affiliations
[1] Texas Tech Univ, Dept Comp Sci, Lubbock, TX 79409 USA
Funding
National Science Foundation (USA);
Keywords
Collective I/O; Scheduling; High-performance computing; Big data; Data intensive computing;
DOI
10.1016/j.bdr.2015.01.007
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The non-contiguous access patterns of many scientific applications generate a large number of I/O requests, which can seriously limit data-access performance. Collective I/O is widely used to address this issue. However, the performance of collective I/O can degrade dramatically on today's high-performance computing systems because highly concurrent data accesses inflate the shuffle cost, and the situation worsens as applications become increasingly data intensive. Prior research has focused primarily on optimizing the I/O access cost of collective I/O while largely ignoring the shuffle cost involved. It has also assumed that the lowest average response time yields the best QoS and performance, which does not always hold for collective requests once the additional shuffle cost is taken into account. In this study, we propose a new hierarchical I/O scheduling (HIO) algorithm to address the growing shuffle cost in collective I/O. The fundamental idea is to schedule applications' I/O requests based on a shuffle-cost analysis so as to optimize overall performance, rather than optimizing I/O accesses alone. The algorithm is currently evaluated with MPICH3 and PVFS2. Both theoretical analysis and experimental tests show that the proposed hierarchical I/O scheduling has potential for addressing the degraded performance of collective I/O under highly concurrent accesses. (C) 2015 Elsevier Inc. All rights reserved.
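The abstract's central claim, that ordering collective requests by I/O access cost alone can be worse than ordering by the combined access-plus-shuffle cost, can be illustrated with a toy sketch. This is not the paper's HIO algorithm; the `CollectiveRequest` class, the cost figures, and the sequential-service model are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class CollectiveRequest:
    """A collective I/O request with its two cost components (hypothetical units)."""
    app: str
    access_cost: float   # time spent on the actual file-system access
    shuffle_cost: float  # time spent redistributing data among processes

    @property
    def total_cost(self) -> float:
        return self.access_cost + self.shuffle_cost

def avg_completion_time(schedule):
    """Average completion time when requests are served back to back."""
    clock, total = 0.0, 0.0
    for r in schedule:
        clock += r.total_cost   # each request occupies access + shuffle time
        total += clock
    return total / len(schedule)

# Illustrative workload: app A is cheap to access but shuffle-heavy.
reqs = [
    CollectiveRequest("A", access_cost=2.0, shuffle_cost=9.0),
    CollectiveRequest("B", access_cost=3.0, shuffle_cost=1.0),
    CollectiveRequest("C", access_cost=4.0, shuffle_cost=0.5),
]

# Access-cost-only scheduling: shortest access first (A, B, C).
by_access = sorted(reqs, key=lambda r: r.access_cost)
# Shuffle-aware scheduling: shortest *overall* cost first (B, C, A).
by_total = sorted(reqs, key=lambda r: r.total_cost)

print(avg_completion_time(by_access))  # higher average
print(avg_completion_time(by_total))   # lower average
```

Under this simple model, sorting by total cost is shortest-job-first on the true service time, so it minimizes the average completion time, whereas sorting by access cost alone lets a shuffle-heavy request (A) delay everything behind it.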
Pages: 117-126
Page count: 10