PMLB: a large benchmark suite for machine learning evaluation and comparison

Times Cited: 219
Authors
Olson, Randal S. [1 ]
La Cava, William [1 ]
Orzechowski, Patryk [1 ,2 ]
Urbanowicz, Ryan J. [1 ]
Moore, Jason H. [1 ]
Affiliations
[1] Univ Penn, Inst Biomed Informat, 3700 Hamilton Walk, Philadelphia, PA 19104 USA
[2] AGH Univ Sci & Technol, Dept Automat & Biomed Engn, Krakow, Poland
Funding
US National Institutes of Health;
Keywords
Machine learning; Model evaluation; Benchmarking; Data repository; EPISTASIS;
DOI
10.1186/s13040-017-0154-4
Chinese Library Classification (CLC)
Q [Biological Sciences];
Discipline classification codes
07; 0710; 09;
Abstract
Background: The selection, development, or comparison of machine learning methods in data mining can be a difficult task, depending on the target problem and the goals of a particular study. Numerous publicly available real-world and simulated benchmark datasets have emerged from different sources, but their organization and adoption as standards have been inconsistent. As a result, selecting and curating specific benchmarks remains an unnecessary burden on machine learning practitioners and data scientists.
Results: The present study introduces an accessible, curated, and developing public benchmark resource to facilitate identification of the strengths and weaknesses of different machine learning methodologies. We compare meta-features among the current set of benchmark datasets in this resource to characterize the diversity of available data. Finally, we apply a number of established machine learning methods to the entire benchmark suite and analyze how datasets and algorithms cluster in terms of performance. From this study, we find that existing benchmarks lack the diversity needed to properly evaluate machine learning algorithms, and that several gaps in benchmarking problems still need to be addressed.
Conclusions: This work represents another important step towards understanding the limitations of popular benchmarking suites and developing a resource that connects existing benchmarking standards to more diverse and efficient standards in the future.
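Below is a minimal Python sketch of the benchmarking workflow the abstract describes: fetching one curated dataset from the resource and scoring an established classifier on it. It assumes the companion pmlb package and scikit-learn are installed; the dataset name 'mushroom' and the choice of classifier are illustrative assumptions, not details taken from this record.

from pmlb import fetch_data
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Fetch one benchmark dataset as feature/target arrays
# (the dataset name here is only an illustrative example).
X, y = fetch_data('mushroom', return_X_y=True)

# Evaluate an off-the-shelf classifier with 5-fold cross-validation.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print('mean accuracy: %.3f +/- %.3f' % (scores.mean(), scores.std()))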
Pages: 13