Lightweight Provenance Service for High-Performance Computing

Cited by: 11
Authors
Dai, Dong [1 ]
Chen, Yong [1 ]
Carns, Philip [2 ]
Jenkins, John [2 ]
Ross, Robert [2 ]
Affiliations
[1] Texas Tech Univ, Comp Sci Dept, Lubbock, TX 79409 USA
[2] Argonne Natl Lab, Math & Comp Sci Div, Argonne, IL 60439 USA
Source
2017 26TH INTERNATIONAL CONFERENCE ON PARALLEL ARCHITECTURES AND COMPILATION TECHNIQUES (PACT) | 2017
Funding
U.S. National Science Foundation
Keywords
TIME
DOI
10.1109/PACT.2017.14
Chinese Library Classification (CLC)
TP3 [Computing technology, computer technology]
Subject classification code
0812
Abstract
Provenance describes detailed information about the history of a piece of data, including the relationships among elements such as users, processes, jobs, and workflows that contribute to the existence of that data. Provenance is key to supporting many increasingly important data management functionalities, such as identifying the data sources, parameters, or assumptions behind a given result; auditing data usage; and understanding how inputs are transformed into outputs. Despite its importance, provenance support is largely underdeveloped for highly parallel architectures and systems. One major challenge is the demanding set of requirements for providing a provenance service in situ: the service must remain lightweight and always on, yet also be transparent and offer an accurate catalog of details about the applications and systems. To tackle this challenge, we introduce a lightweight provenance service, called LPS, for high-performance computing (HPC) systems. LPS leverages a kernel instrumentation mechanism to achieve transparency and introduces representative execution and flexible granularity to capture comprehensive provenance with controllable overhead. Extensive evaluations and use cases confirm its efficiency and usability. We believe that LPS can be integrated into current and future HPC systems to support a variety of data management needs.
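The abstract describes LPS's design only at a high level. As a rough, hypothetical illustration of the core idea, observing activity at the system-call boundary and turning it into provenance relationships, the short Python sketch below parses a strace-style text log (a stand-in for the paper's in-kernel capture) and emits process-to-file edges. The log format, the regular expression, and the function names are assumptions made for this example and are not drawn from the paper.

# Hypothetical sketch (not the LPS implementation): derive process-to-file
# provenance edges from syscall-level events. Here the events come from a
# pre-collected, strace-style text log; LPS itself captures equivalent
# information via kernel instrumentation, transparently to applications.

import re
from collections import defaultdict

# Assumed log format, one event per line:
#   <pid> openat(AT_FDCWD, "<path>", <flags>...) = <fd>
OPEN_RE = re.compile(
    r'^(?P<pid>\d+)\s+openat\(AT_FDCWD,\s*"(?P<path>[^"]+)",\s*(?P<flags>[A-Z_0-9|]+)'
)

def provenance_edges(trace_lines):
    """Yield (pid, direction, path) tuples: 'read' marks the file as an
    input of the process, 'write' marks it as an output."""
    for line in trace_lines:
        m = OPEN_RE.match(line.strip())
        if not m:
            continue  # this sketch only looks at file-open events
        flags = m.group("flags")
        direction = "write" if ("O_WRONLY" in flags or "O_RDWR" in flags) else "read"
        yield m.group("pid"), direction, m.group("path")

if __name__ == "__main__":
    sample = [
        '1234 openat(AT_FDCWD, "/data/input.dat", O_RDONLY) = 3',
        '1234 openat(AT_FDCWD, "/data/output.dat", O_WRONLY|O_CREAT, 0644) = 4',
    ]
    graph = defaultdict(list)  # pid -> list of (direction, path) edges
    for pid, direction, path in provenance_edges(sample):
        graph[pid].append((direction, path))
    for pid, edges in graph.items():
        print(pid, edges)

In the paper's setting the same kind of edge would be produced inside the kernel, which keeps capture always on without requiring application changes; the sketch above only fixes the shape of a provenance record, not how LPS collects it.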
Pages: 117 - 129
Number of pages: 13
Related papers
50 items in total
  • [41] High-performance reconfigurable computing
    Buell, Duncan
    El-Ghazawi, Tarek
    Gaj, Kris
    Kindratenko, Volodymyr
    COMPUTER, 2007, 40 (03) : 23 - 27
  • [42] THE HIGH-PERFORMANCE COMPUTING INITIATIVE
    BROWN, GE
    PHOTONICS SPECTRA, 1991, 25 (07) : 79 - 80
  • [43] High-performance computing - an overview
    Vienna Univ, Vienna, Austria
    COMPUTER PHYSICS COMMUNICATIONS, (1-2) : 16 - 35
  • [44] Thoughts on high-performance computing
    Yang, Xuejun
    NATIONAL SCIENCE REVIEW, 2014, 1 (03) : 332 - 333
  • [45] HIGH-PERFORMANCE COMPUTING AND COMMUNICATIONS
    STEVENS, R
    FUTURE GENERATION COMPUTER SYSTEMS, 1994, 10 (2-3) : 159 - 167
  • [46] Challenges in High-Performance Computing
    Navaux, P.O.A.
    Lorenzon, A.F.
    Serpa, M.S.
    JOURNAL OF THE BRAZILIAN COMPUTER SOCIETY, 2023, 29 (01) : 51 - 62
  • [47] HIGH-PERFORMANCE COMPUTING AND PHYSICS
    ORSZAG, SA
    ZABUSKY, NJ
    PHYSICS TODAY, 1993, 46 (03) : 22 - 23
  • [48] High-Performance Computing in Edge Computing Networks
    Tu, Wanqing
    Pop, Florin
    Jia, Weijia
    Wu, Jie
    Iacono, Mauro
    JOURNAL OF PARALLEL AND DISTRIBUTED COMPUTING, 2019, 123 : 230 - 230
  • [49] Effective Quality-of-Service Policy for Capacity High-Performance Computing Systems
    Jokanovic, Ana
    Carlos Sancho, Jose
    Labarta, Jesus
    Rodriguez, German
    Minkenberg, Cyriel
    2012 IEEE 14TH INTERNATIONAL CONFERENCE ON HIGH PERFORMANCE COMPUTING AND COMMUNICATIONS & 2012 IEEE 9TH INTERNATIONAL CONFERENCE ON EMBEDDED SOFTWARE AND SYSTEMS (HPCC-ICESS), 2012, : 598 - 607
  • [50] Bridging the Gap between High-Performance, Cloud and Service-Oriented Computing
    Ditter, Alexander
    Tielemann, Michael
    Fey, Dietmar
    2019 IEEE 4TH INTERNATIONAL WORKSHOPS ON FOUNDATIONS AND APPLICATIONS OF SELF* SYSTEMS (FAS*W 2019), 2019, : 68 - 69