On the Versatile Uses of Partial Distance Correlation in Deep Learning

Cited: 13
Authors
Zhen, Xingjian [1 ]
Meng, Zihang [1 ]
Chakraborty, Rudrasis [2 ]
Singh, Vikas [1 ]
Affiliations
[1] Univ Wisconsin Madison, Madison, WI 53706 USA
[2] Butlr, Burlingame, CA USA
Source
COMPUTER VISION, ECCV 2022, PT XXVI | 2022 / Vol. 13686
Keywords
DEPENDENCE;
DOI
10.1007/978-3-031-19809-0_19
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Comparing the functional behavior of neural network models, whether a single network over time or two (or more) networks during or after training, is an essential step in understanding what they are learning (and what they are not), and in identifying strategies for regularization or efficiency improvements. Despite recent progress, e.g., comparing vision transformers to CNNs, systematic comparison of function, especially across different networks, remains difficult and is often carried out layer by layer. Approaches such as canonical correlation analysis (CCA) are applicable in principle, but have been used sparingly so far. In this paper, we revisit a (less widely known) dependence measure from statistics, called distance correlation (and its partial variant), designed to evaluate correlation between feature spaces of different dimensions. We describe the steps necessary to deploy it for large-scale models - this opens the door to a surprising array of applications, from conditioning one deep model w.r.t. another and learning disentangled representations to optimizing diverse models that are directly more robust to adversarial attacks. Our experiments suggest a versatile regularizer (or constraint) with many advantages, which avoids some of the common difficulties one faces in such analyses (Code is at https://github.com/zhenxingjian/Partial_Distance_Correlation.).
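The measure the abstract revisits can be sketched in a few lines. Below is a minimal NumPy implementation of the empirical (biased, V-statistic) distance correlation between two samples of possibly different feature dimensions; it is an illustrative sketch of the standard Szekely-Rizzo estimator, not the authors' released code, and the function name is hypothetical.

```python
import numpy as np

def distance_correlation(X, Y):
    """Empirical distance correlation between X (n, p) and Y (n, q).

    Unlike Pearson correlation, this is defined for feature spaces
    of different dimensions (p != q) and is zero (in the population)
    iff X and Y are independent.
    """
    # Pairwise Euclidean distance matrices, shape (n, n)
    a = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    b = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)

    # Double-center each matrix: subtract row and column means, add grand mean
    A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()
    B = b - b.mean(axis=0) - b.mean(axis=1)[:, None] + b.mean()

    # Squared distance covariance and variances (V-statistics)
    dcov2 = max((A * B).mean(), 0.0)  # clamp tiny negative round-off
    dvar_x = (A * A).mean()
    dvar_y = (B * B).mean()

    denom = np.sqrt(dvar_x * dvar_y)
    return float(np.sqrt(dcov2 / denom)) if denom > 0 else 0.0
```

The value lies in [0, 1]; identical samples give exactly 1, and the paper's regularizers penalize (partial) distance correlation between feature spaces to encourage or suppress dependence.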
Pages: 327-346
Page count: 20