Efficient Construction of Nonlinear Models over Normalized Data

Cited: 3
Authors
Cheng, Zhaoyue [1]
Koudas, Nick [1]
Zhang, Zhe [2]
Yu, Xiaohui [2]
Affiliations
[1] Univ Toronto, Toronto, ON, Canada
[2] York Univ, N York, ON, Canada
Source
2021 IEEE 37TH INTERNATIONAL CONFERENCE ON DATA ENGINEERING (ICDE 2021) | 2021
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
LINEAR ALGEBRA; CLASSIFICATION;
DOI
10.1109/ICDE51399.2021.00103
CLC Number (Chinese Library Classification)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Machine Learning (ML) applications are proliferating in the enterprise. Relational data, which are prevalent in enterprise applications, are typically normalized; as a result, the data have to be denormalized via primary/foreign-key joins before they can be provided as input to ML algorithms. In this paper, we study the implementation of popular nonlinear ML models, Gaussian Mixture Models (GMM) and Neural Networks (NN), over normalized data, addressing both binary and multi-way joins over normalized relations. For GMM, we show how the computation can be decomposed in a systematic way, for both binary and multi-way joins, to construct mixture models. We demonstrate that by factoring the computation, the models can be trained much faster than with other applicable approaches, without any loss in accuracy. For NN, we propose algorithms that train the network directly on normalized data. Similarly, we present algorithms that carry out the training in a factorized way and offer performance advantages. The redundancy introduced by denormalization can be exploited for certain types of activation functions. However, we demonstrate that exploiting this redundancy is helpful only up to a point; exploiting it at higher layers of the network always increases cost and is not recommended. We present the results of a thorough experimental evaluation, varying several parameters of the input relations, and demonstrate that our proposals for training GMM and NN yield drastic performance improvements, typically starting at 100% and growing as parameters of the underlying data vary, without any loss in accuracy.
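While the abstract does not spell out the factorized algorithms, the general pattern it refers to (pushing computation through a primary/foreign-key join instead of materializing the denormalized table) can be illustrated with a minimal sketch. The tables, column names, and the simple aggregate below are illustrative assumptions only, not the paper's actual GMM or NN training procedures.

# Minimal sketch of factorized computation over a key/foreign-key join.
# Hypothetical tables: S is a fact table with foreign key 'fk' and feature 'x1';
# R is a dimension table keyed by 'pk' with feature 'x2'.
import numpy as np
import pandas as pd

S = pd.DataFrame({"fk": [0, 0, 1, 2, 2, 2], "x1": [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]})
R = pd.DataFrame({"pk": [0, 1, 2], "x2": [10.0, 20.0, 30.0]})

# Naive approach: denormalize via the join, then aggregate over joined rows.
joined = S.merge(R, left_on="fk", right_on="pk")
naive_sum_x2 = joined["x2"].sum()

# Factorized approach: count the multiplicity of each key in S, then take a
# weighted sum over R. The join result is never materialized, and each row of
# R is touched once per key rather than once per joined tuple.
counts = S.groupby("fk").size()
factorized_sum_x2 = (R.set_index("pk")["x2"] * counts).sum()

assert np.isclose(naive_sum_x2, factorized_sum_x2)  # both equal 130.0

The same idea extends to the sufficient statistics needed during model training: per-key multiplicities allow feature computations on the dimension relation to be reused across all matching fact tuples, which is the source of the speedups the abstract reports.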
Pages: 1140-1151
Number of Pages: 12