The importance of interpretability and visualization in machine learning for applications in medicine and health care

Cited: 0
Authors
Alfredo Vellido
Affiliation
[1] Universitat Politècnica de Catalunya, Computer Science Department, Intelligent Data Science and Artificial Intelligence (IDEAI) Research Center
Source
Neural Computing and Applications | 2020, Vol. 32
Keywords
Interpretability; Explainability; Machine learning; Visualization; Medicine; Health care;
DOI
Not available
Abstract
In a short period of time, many areas of science have made a sharp transition towards data-dependent methods. In some cases, this process has been enabled by simultaneous advances in data acquisition and the development of networked system technologies. This new situation is particularly clear in the life sciences, where data overabundance has sparked a flurry of new methodologies for data management and analysis. This can be seen as a perfect scenario for the use of machine learning and computational intelligence techniques to address problems in which more traditional data analysis approaches might struggle. But this scenario also poses some serious challenges. One of them is model interpretability and explainability, especially for complex nonlinear models. In some areas, such as medicine and health care, failing to address this challenge might seriously limit the chances of adoption, in real practice, of computer-based systems that rely on machine learning and computational intelligence methods for data analysis. In this paper, we reflect on recent investigations into the interpretability and explainability of machine learning methods and discuss their impact on medicine and health care. We pay specific attention to one of the ways in which interpretability and explainability can be addressed in this context, namely through data and model visualization. We argue that, beyond improving model interpretability as a goal in itself, we need to integrate medical experts in the design of data analysis interpretation strategies. Otherwise, machine learning is unlikely to become a part of routine clinical and health care practice.
Pages: 18069–18083
Page count: 14