Interactive machine learning for health informatics: when do we need the human-in-the-loop?

Cited by: 493
Authors
Holzinger A. [1,2]
Affiliations
[1] Research Unit, HCI-KDD, Institute for Medical Informatics, Statistics & Documentation, Medical University Graz, Graz
[2] Institute for Information Systems and Computer Media, Graz University of Technology, Graz
Keywords
Health informatics; Interactive machine learning
DOI
10.1007/s40708-016-0042-6
Abstract
Machine learning (ML) is the fastest-growing field in computer science, and health informatics is among its greatest challenges. The goal of ML is to develop algorithms which can learn and improve over time and can be used for predictions. Most ML researchers concentrate on automatic machine learning (aML), where great advances have been made, for example, in speech recognition, recommender systems, or autonomous vehicles. Automatic approaches benefit greatly from big data with many training sets. However, in the health domain we are sometimes confronted with small data sets or rare events, where aML approaches suffer from insufficient training samples. Here interactive machine learning (iML) may be of help, having its roots in reinforcement learning, preference learning, and active learning. The term iML is not yet in widespread use, so we define it as “algorithms that can interact with agents and can optimize their learning behavior through these interactions, where the agents can also be human.” This “human-in-the-loop” can be beneficial in solving computationally hard problems, e.g., subspace clustering, protein folding, or k-anonymization of health data, where human expertise can help to reduce an exponential search space through heuristic selection of samples. What would otherwise be an NP-hard problem is thus greatly reduced in complexity through the input and assistance of a human agent involved in the learning phase. © 2016, The Author(s).
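The core iML idea sketched in the abstract, that an agent's targeted answers can shrink a search space exponentially faster than passive sampling, can be illustrated with a toy example. The sketch below is not from the paper; the names (`learn_threshold`, the simulated `human` oracle, the hidden boundary 0.37) are hypothetical. The learner actively chooses which sample to show the "human" oracle, and each label halves the remaining hypothesis space, so only O(log(1/tol)) labels are needed instead of one per candidate threshold.

```python
# Minimal, illustrative human-in-the-loop sketch: the learner picks the
# most informative query point, and each oracle answer halves the
# interval in which the true decision threshold can still lie.

def learn_threshold(oracle, lo=0.0, hi=1.0, tol=1e-3):
    """Estimate a threshold t such that oracle(x) == (x >= t),
    using only O(log((hi - lo) / tol)) oracle queries."""
    queries = 0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0      # most informative sample to ask about
        queries += 1
        if oracle(mid):            # the human labels the chosen sample
            hi = mid               # threshold lies at or below mid
        else:
            lo = mid               # threshold lies above mid
    return (lo + hi) / 2.0, queries

# Simulated human expert with a hidden decision boundary at 0.37
human = lambda x: x >= 0.37
t_hat, n_queries = learn_threshold(human)
print(round(t_hat, 2), n_queries)  # recovers ~0.37 after only 10 labels
```

Passively labeling a grid of 1000 candidate points would cost 1000 human judgments; interactive query selection reaches the same precision with about ten, which is the kind of complexity reduction the abstract attributes to heuristic sample selection by a human agent.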
Pages: 119–131 (12 pages)