Enhance explainability of manifold learning

Cited: 38
Authors
Han, Henry [1 ]
Li, Wentian [2 ]
Wang, Jiacun [3 ]
Qin, Guimin [4 ]
Qin, Xianya [5 ]
Institutions
[1] Baylor Univ, Sch Engn & Comp Sci, Dept Comp Sci, Waco, TX 76706 USA
[2] Northwell Hlth, Feinstein Inst Med Res, Manhasset, NY 11030 USA
[3] Monmouth Univ, Dept Comp Sci & Software Engn, West Long Branch, NJ 07764 USA
[4] Xidian Univ, Sch Comp Sci & Technol, Xian 710071, Peoples R China
[5] Fordham Univ, Gabelli Business Sch, New York, NY 10023 USA
Keywords
Explainable AI; Manifold learning; t-SNE; UMAP; Dimension reduction; Locally isometric; NONLINEAR DIMENSIONALITY REDUCTION; HETEROGENEITY;
DOI
10.1016/j.neucom.2022.05.119
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The explainability of manifold learning is rarely investigated, though there is an urgent need for it in both AI theory and practice. In this study, we propose a novel degree of locality preservation (DLP) approach to study the interpretability of manifold learning. We estimate the DLPs of the state-of-the-art manifold learning methods t-SNE and UMAP, the related methods LLE, HLLE, and LTSA, and the widely used PCA across benchmark datasets classified as low-dimensional and high-dimensional data. Our study provides well-founded explanations of the manifold learning methods in terms of their DLPs. The order of their DLPs follows t-SNE > UMAP > LLE > HLLE/PCA/LTSA, though there are exceptions for some high-dimensional data. Both t-SNE and UMAP demonstrate an embedding distance amplification mechanism under the Euclidean distance that forces the latent local data geometry to stand out in dimension reduction. This not only explains why t-SNE and UMAP have higher DLPs than their peers, but also indicates that they are not locally isometric under the Euclidean distance. Furthermore, we discover that t-SNE and UMAP embeddings exhibit a similar nonlinear nature in dimension reduction, besides larger (smaller) data variances for low (high)-dimensional data. To the best of our knowledge, this study is the first work on the explainability of manifold learning. The proposed methods and corresponding results can also be extended to other dimension reduction techniques. (c) 2022 Published by Elsevier B.V.
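The locality-preservation idea behind the abstract can be illustrated with a simple neighborhood-overlap score: the mean fraction of each point's k nearest neighbors in the high-dimensional space that remain among its k nearest neighbors after embedding. This is a hypothetical proxy sketched for illustration, not the paper's exact DLP definition; the function names `knn_indices` and `locality_preservation` are our own.

```python
import numpy as np

def knn_indices(X, k):
    """Indices of the k nearest Euclidean neighbors of each row (self excluded)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    np.fill_diagonal(d2, np.inf)                          # a point is not its own neighbor
    return np.argsort(d2, axis=1)[:, :k]

def locality_preservation(X_high, X_low, k=5):
    """Mean k-nearest-neighbor overlap between the original data and its
    embedding -- a simple proxy for a degree-of-locality-preservation score."""
    nh = knn_indices(X_high, k)
    nl = knn_indices(X_low, k)
    overlaps = [len(set(a) & set(b)) / k for a, b in zip(nh, nl)]
    return float(np.mean(overlaps))

# Sanity check: a rotation is a Euclidean isometry, so it must preserve
# every neighborhood exactly and score 1.0.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
Q, _ = np.linalg.qr(rng.normal(size=(10, 10)))  # random orthogonal matrix
score = locality_preservation(X, X @ Q, k=5)
print(score)
```

A non-isometric map (e.g., dropping coordinates with `X[:, :2]`) generally scores below 1.0, which is the sense in which the abstract says t-SNE and UMAP, despite their high locality preservation, are not locally isometric under the Euclidean distance.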
Pages: 877-895
Page count: 19
Related Papers
50 items total
  • [1] A label distribution manifold learning algorithm
    Tan, Chao
    Chen, Sheng
    Geng, Xin
    Ji, Genlin
    PATTERN RECOGNITION, 2023, 135
  • [2] Manifold Learning: What, How, and Why
    Meila, Marina
    Zhang, Hanyu
    ANNUAL REVIEW OF STATISTICS AND ITS APPLICATION, 2024, 11 : 393 - 417
  • [3] Manifold learning in atomistic simulations: a conceptual review
    Rydzewski, Jakub
    Chen, Ming
    Valsson, Omar
    MACHINE LEARNING-SCIENCE AND TECHNOLOGY, 2023, 4 (03):
  • [4] Neuron Manifold Distillation for Edge Deep Learning
    Tao, Zeyi
    Xia, Qi
    Li, Qun
    2021 IEEE/ACM 29TH INTERNATIONAL SYMPOSIUM ON QUALITY OF SERVICE (IWQOS), 2021,
  • [5] Local distances preserving based manifold learning
    Hajizadeh, Rassoul
    Aghagolzadeh, A.
    Ezoji, M.
    EXPERT SYSTEMS WITH APPLICATIONS, 2020, 139
  • [6] Adaptive Manifold Learning
    Zhang, Zhenyue
    Wang, Jing
    Zha, Hongyuan
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2012, 34 (02) : 253 - 265
  • [7] Learning by Autonomous Manifold Deformation with an Intrinsic Deforming Field
    Zhuang, Xiaodong
    Mastorakis, Nikos
    SYMMETRY-BASEL, 2023, 15 (11):
  • [8] Looking Through the Deep Glasses: How Large Language Models Enhance Explainability of Deep Learning Models
    Spitzer, Philipp
    Celis, Sebastian
    Martin, Dominik
    Kuehl, Niklas
    Satzger, Gerhard
    PROCEEDINGS OF THE 2024 CONFERENCE ON MENSCH UND COMPUTER, MUC 2024, 2024, : 566 - 570
  • [9] Assessing Explainability in Reinforcement Learning
    Zelvelder, Amber E.
    Westberg, Marcus
    Framling, Kary
    EXPLAINABLE AND TRANSPARENT AI AND MULTI-AGENT SYSTEMS, EXTRAAMAS 2021, 2021, 12688 : 223 - 240
  • [10] A Supervised Manifold Learning Method
    Li, Zuojin
    Shi, Weiren
    Shi, Xin
    Zhong, Zhi
    COMPUTER SCIENCE AND INFORMATION SYSTEMS, 2009, 6 (02) : 205 - 215