HDMF: Hierarchical Data Modeling Framework for Modern Science Data Standards

Cited: 0
Authors
Tritt, Andrew J. [1 ]
Rubel, Oliver [1 ]
Dichter, Benjamin [2 ]
Ly, Ryan [1 ]
Kang, Donghe [3 ]
Chang, Edward E. [5 ,6 ]
Frank, Loren M. [4 ]
Bouchard, Kristofer [2 ]
Affiliations
[1] Lawrence Berkeley Natl Lab, Computat Res Div, Berkeley, CA 94720 USA
[2] Lawrence Berkeley Natl Lab, Biol Syst & Engn, Berkeley, CA USA
[3] Ohio State Univ, Comp Sci & Engn, Columbus, OH 43210 USA
[4] Univ Calif San Francisco, Howard Hughes Med Inst, Kavli Inst Fundamental Neurosci, Dept Physiol, San Francisco, CA USA
[5] Univ Calif San Francisco, Dept Neurol Surg, San Francisco, CA USA
[6] Univ Calif San Francisco, Ctr Integrat Neurosci, San Francisco, CA 94143 USA
Source
2019 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA) | 2019
Funding
US National Institutes of Health;
Keywords
data standards; data modeling; data formats; HDF5; neurophysiology;
DOI
Not available
Chinese Library Classification
TP18 [Artificial intelligence theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
A ubiquitous problem in aggregating data across different experimental and observational data sources is a lack of software infrastructure that enables flexible and extensible standardization of data and metadata. To address this challenge, we developed HDMF, a hierarchical data modeling framework for modern science data standards. With HDMF, we separate the process of data standardization into three main components: (1) data modeling and specification, (2) data I/O and storage, and (3) data interaction and data APIs. To enable standards to support the complex requirements and varying use cases throughout the data life cycle, HDMF provides object mapping infrastructure to insulate and integrate these various components. This approach supports the flexible development of data standards and extensions, optimized storage backends, and data APIs, while allowing the other components of the data standards ecosystem to remain stable. To meet the demands of modern, large-scale science data, HDMF provides advanced data I/O functionality for iterative data write, lazy data load, and parallel I/O. It also supports optimization of data storage via chunking, compression, linking, and modular data storage. We demonstrate the application of HDMF in practice to design NWB 2.0 [13], a modern data standard for collaborative science across the neurophysiology community.
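The separation the abstract describes — user-facing containers insulated from storage backends by an object-mapping layer — can be sketched in a few lines of Python. The names below (`TimeSeries`, `ObjectMapper`, `build`, `construct`, and the builder dict layout) are illustrative stand-ins for the concept, not HDMF's actual API.

```python
# Minimal sketch of the object-mapping idea: a mapper translates between
# an in-memory container (what a data API exposes) and a storage-neutral
# "builder" structure (what an I/O backend reads or writes), so either
# side can evolve without changing the other.
from dataclasses import dataclass


@dataclass
class TimeSeries:
    """In-memory container that a data API would hand to users."""
    name: str
    data: list
    unit: str


class ObjectMapper:
    """Maps container attributes to and from a backend-neutral builder."""

    def build(self, container: TimeSeries) -> dict:
        # Forward mapping: container -> builder (what a backend would write).
        return {
            "name": container.name,
            "datasets": {"data": container.data},
            "attributes": {"unit": container.unit},
        }

    def construct(self, builder: dict) -> TimeSeries:
        # Reverse mapping: builder (as read by a backend) -> container.
        return TimeSeries(
            name=builder["name"],
            data=builder["datasets"]["data"],
            unit=builder["attributes"]["unit"],
        )


mapper = ObjectMapper()
ts = TimeSeries(name="voltage", data=[0.1, 0.2, 0.3], unit="mV")
builder = mapper.build(ts)            # backend-neutral form
roundtrip = mapper.construct(builder)
print(roundtrip == ts)                # → True
```

In this arrangement, a schema change only touches the mapper, and a new storage backend only needs to consume the builder structure — which is how the paper's three components can vary independently.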
Pages: 165-179 (15 pages)
References (21 items)
[1]  
[Anonymous], 2019, Zarr, version 2.3.2
[2]   Optimizing I/O Performance of HPC Applications with Autotuning [J].
Behzad, Babak ;
Byna, Surendra ;
Prabhat ;
Snir, Marc .
ACM TRANSACTIONS ON PARALLEL COMPUTING, 2019, 5 (04)
[3]  
Ben-Kiki O., 2009, TECH REP, P23
[4]  
Bray Tim, 2008, Extensible Markup Language, Fifth Edition
[5]  
Clarke JA, 2007, PROCEEDINGS OF THE HPCMP USERS GROUP CONFERENCE 2007, P322
[6]  
ISO, 2019, ISO 8601
[7]  
JSON, 1999, JavaScript Object Notation
[8]   NeXus: A common format for the exchange of neutron and synchrotron data [J].
Klosowski, P ;
Koennecke, M ;
Tischler, JZ ;
Osborn, R .
PHYSICA B, 1997, 241 :151-153
[9]   The Coherent X-ray Imaging Data Bank [J].
Maia, Filipe R. N. C. .
NATURE METHODS, 2012, 9 (09) :854-855
[10]   Improving access to multi-dimensional self-describing scientific datasets [J].
Nam, B ;
Sussman, A .
CCGRID 2003: 3RD IEEE/ACM INTERNATIONAL SYMPOSIUM ON CLUSTER COMPUTING AND THE GRID, PROCEEDINGS, 2003, :172-179