Lightweight Provenance Service for High-Performance Computing

Cited by: 11
Authors
Dai, Dong [1 ]
Chen, Yong [1 ]
Carns, Philip [2 ]
Jenkins, John [2 ]
Ross, Robert [2 ]
Affiliations
[1] Texas Tech Univ, Comp Sci Dept, Lubbock, TX 79409 USA
[2] Argonne Natl Lab, Math & Comp Sci Div, Argonne, IL 60439 USA
Source
2017 26TH INTERNATIONAL CONFERENCE ON PARALLEL ARCHITECTURES AND COMPILATION TECHNIQUES (PACT) | 2017
Funding
U.S. National Science Foundation;
Keywords
TIME;
DOI
10.1109/PACT.2017.14
Chinese Library Classification
TP3 [Computing technology; computer technology];
Discipline Code
0812;
Abstract
Provenance describes detailed information about the history of a piece of data, capturing the relationships among elements such as users, processes, jobs, and workflows that contribute to the existence of that data. Provenance is key to supporting many increasingly important data management functionalities, such as identifying the data sources, parameters, or assumptions behind a given result; auditing data usage; and understanding how inputs are transformed into outputs. Despite its importance, however, provenance support is largely underdeveloped in highly parallel architectures and systems. One major challenge is the demanding requirement of providing a provenance service in situ: the need to remain lightweight and always on often conflicts with the need to be transparent and to offer an accurate catalog of details about applications and systems. To tackle this challenge, we introduce a lightweight provenance service, called LPS, for high-performance computing (HPC) systems. LPS leverages a kernel instrumentation mechanism to achieve transparency and introduces representative execution and flexible granularity to capture comprehensive provenance with controllable overhead. Extensive evaluations and use cases confirm its efficiency and usability. We believe that LPS can be integrated into current and future HPC systems to support a variety of data management needs.
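The abstract's notion of provenance — relationships among processes and files that explain how a result came to exist — can be illustrated with a minimal sketch. This is a hedged toy model with a hypothetical event format, not the paper's LPS implementation: syscall-style read/write events are folded into a lineage graph, and a backward walk answers the "identify the data sources behind a given result" query the abstract mentions.

```python
# Toy provenance model (illustrative only; event format is hypothetical,
# not the LPS on-disk or in-kernel representation).
from collections import defaultdict

def build_provenance(events):
    """events: (pid, op, path) tuples, op in {'read', 'write'}.
    Returns a mapping: output file -> set of files it was derived from."""
    reads = defaultdict(set)    # pid -> files this process has read so far
    parents = defaultdict(set)  # file -> files it was derived from
    for pid, op, path in events:
        if op == "read":
            reads[pid].add(path)
        elif op == "write":
            # Everything the writer has read is a potential ancestor.
            parents[path] |= reads[pid]
    return parents

def sources(parents, target):
    """All ancestor files that contributed to `target` (transitive closure)."""
    seen, stack = set(), [target]
    while stack:
        for p in parents.get(stack.pop(), ()):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

events = [
    (1, "read", "raw.dat"), (1, "write", "clean.dat"),
    (2, "read", "clean.dat"), (2, "read", "params.cfg"),
    (2, "write", "result.out"),
]
prov = build_provenance(events)
print(sorted(sources(prov, "result.out")))
# → ['clean.dat', 'params.cfg', 'raw.dat']
```

A real in-situ service faces exactly the tension the abstract describes: capturing every such event transparently (e.g., via kernel instrumentation) while keeping overhead controllable — hence LPS's representative execution and flexible granularity.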
Pages: 117-129 (13 pages)