Learning from All Types of Experiences: A Unifying Machine Learning Perspective

Cited by: 0
Authors
Hu, Zhiting [1 ]
Xing, Eric P. [1 ,2 ]
Affiliations
[1] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
[2] Petuum, Pittsburgh, PA USA
Source
KDD '20: PROCEEDINGS OF THE 26TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING | 2020
Keywords
machine learning; learning paradigms; unified formalism; compositionality;
DOI
10.1145/3394486.3406462
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Contemporary machine learning and AI research has produced thousands of models (e.g., numerous deep networks and graphical models), learning paradigms (e.g., supervised, unsupervised, active, reinforcement, and adversarial learning), and optimization techniques (e.g., all manner of optimization and stochastic sampling algorithms), not to mention countless approximation heuristics, tuning tricks, and black-box oracles, plus combinations of all of the above. While pushing the field forward rapidly, these results have also made ML/AI resemble an alchemist's workshop more than a modern chemist's periodic table. This not only makes mastering existing ML techniques extremely difficult, but also makes standardized, reusable, repeatable, reliable, and explainable practice and further development of ML/AI products extremely costly, if possible at all. This tutorial presents a systematic, unified blueprint of ML, offering both a refreshing holistic understanding of the diverse ML paradigms and algorithms, and guidance for operationalizing ML to create problem solutions in a composable manner. The tutorial consists of three parts. The first part provides an overview of the current landscape of ML paradigms, with a focus on motivating a systematic perspective. The second part presents the blueprint from three aspects: objective function, optimization solver, and model architecture. We describe standardized formulations of the diverse objectives and algorithms, and a composable view of model structures. On this basis, the third part focuses on the operational side of ML. We describe a principled module abstraction of ML building blocks and show that this abstraction enables efficient composition of ML solutions to problems in healthcare, manufacturing, and vision/text generation.
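The "standardized formulations of the diverse objectives" referenced in the second part can be illustrated, very roughly, as a single objective trading off an uncertainty term, a divergence term, and an "experience" term. The sketch below uses my own illustrative notation (the symbols q, p_theta, H, D, f, alpha, beta are assumptions for exposition, not necessarily the tutorial's exact formulation):

    % Hedged sketch of a unified learning objective (illustrative notation only):
    %   q(t)      auxiliary distribution over the target variable t
    %   p_\theta  the model being trained
    %   H         an uncertainty (entropy) term
    %   D         a divergence between q and p_\theta (e.g., cross entropy or KL)
    %   f(t; E)   an "experience" function scoring t against experiences E
    %             (labeled data, rewards, logic rules, adversarial critics, ...)
    \min_{q,\theta} \; -\alpha\, H(q) \;+\; \beta\, D\big(q(t),\, p_\theta(t)\big) \;-\; \mathbb{E}_{q(t)}\big[\, f(t; E) \,\big]

Under this reading, swapping the experience function f (a data log-likelihood, a reward signal, a logic-rule constraint, an adversarial critic) switches the learning paradigm while the solver and model architecture can stay fixed, which is the kind of decomposition that makes the module-level composition in the third part possible.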
Pages: 3531-3532
Number of pages: 2