共 31 条
[1]
Thornton C., Hutter F., Hoos H., Et al., Auto-weka: combined selection and hyperparameter optimization of classification algorithms, Proc 19th ACM SIGKDD Int Conf Knowled Discov Data Min, pp. 847-855, (2013)
[2]
DeCastro-Garcia N., Munoz Castaneda A.L., Escudero Garcia D., Et al., Effect of the sampling of a dataset in the hyperparameter optimization phase over the efficiency of a machine learning algorithm, Complexity, 1-16, (2019)
[3]
Elshawi R., Maher M., Sakr S., Automated Machine Learning: State-Of-The-Art and Open Challenges, (2019)
[4]
Bergstra J., Bengio Y., Random search for hyper-parameter optimization, J Mach Learn Res, 13, 1, pp. 281-305, (2012)
[5]
Snoek J., Larochelle H., Adams R.P., Practical Bayesian Optimization of Machine Learning Algorithms, (2012)
[6]
Chapelle O., Vapnik V., Bousquet O., Et al., Choosing multiple parameters for support vector machines, Mach Learn, 46, 1-3, pp. 131-159, (2002)
[7]
Friedman J.H., Greedy function approximation: A gradient boosting machine, Ann Stat, 29, 5, pp. 1189-1232, (2001)
[8]
Geurts P., Ernst D., Wehenkel L., Extremely randomized trees, Mach Learn, 63, 1, pp. 3-42, (2006)
[9]
Zhang S., Li X., Zong M., Et al., Efficient knn classification with different numbers of nearest neighbors, IEEE Trans Neural Netw Learn Syst, 29, 5, pp. 1774-1785, (2017)
[10]
Reif M., Shafait F., Dengel A., Prediction of classifier training time including parameter optimization, Annual Conf Artifi Intell Springer, 7006, pp. 260-271, (2011)