An Empirical Study of Self-Supervised Learning with Wasserstein Distance

Times Cited: 0
Authors
Yamada, Makoto [1,2]
Takezawa, Yuki [1,3]
Houry, Guillaume [1,4]
Dusterwald, Kira Michaela [1,5]
Sulem, Deborah [6]
Zhao, Han [7]
Tsai, Yao-Hung [1,8]
Affiliations
[1] Okinawa Inst Sci & Technol, Machine Learning & Data Sci Unit, Okinawa 9040412, Japan
[2] Ctr Adv Intelligence Project RIKEN, Tokyo 1030027, Japan
[3] Kyoto Univ, Dept Intelligence Sci & Technol, Kyoto 6068501, Japan
[4] Paris Saclay Ecole Normale Super, F-75005 Paris, France
[5] UCL, Gatsby Computat Neurosci Unit, London WC1E 6BT, England
[6] Univ Pompeu Fabra, Barcelona Sch Econ, Barcelona 08002, Spain
[7] Univ Illinois, Dept Comp Sci, Champaign, IL 61801 USA
[8] Carnegie Mellon Univ, Sch Comp Sci, Machine Learning Dept, Pittsburgh, PA 15213 USA
Keywords
optimal transport; Wasserstein distance; self-supervised learning;
DOI
10.3390/e26110939
Chinese Library Classification
O4 [Physics]
Discipline Code
0702
Abstract
In this study, we consider the problem of self-supervised learning (SSL) using the 1-Wasserstein distance on a tree structure (a.k.a. the tree-Wasserstein distance, TWD), where the TWD is defined as the L1 distance between two tree-embedded vectors. SSL methods commonly use the cosine similarity as the objective function, whereas the Wasserstein distance has not been well studied in this role. Because training with the Wasserstein distance is numerically challenging, this study empirically investigates strategies for optimizing SSL with the Wasserstein distance and identifies a stable training procedure. More specifically, we evaluate the combination of two types of TWD (total variation and ClusterTree) with several probability models, including the softmax function, the ArcFace probability model, and simplicial embedding. We also propose a simple yet effective Jeffrey divergence-based regularization method to stabilize optimization. Through empirical experiments on STL10, CIFAR10, CIFAR100, and SVHN, we find that a naive combination of the softmax function and TWD performs significantly worse than standard SimCLR, and that a simple combination of TWD and SimSiam fails to train the model. The model performance depends on the combination of TWD and probability model, and the Jeffrey divergence regularization helps model training. Finally, we show that an appropriate combination of TWD and probability model outperforms cosine similarity-based representation learning.
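As a rough illustration of the quantities described in the abstract, the sketch below (PyTorch; not the authors' released code) computes the TWD as a weighted L1 distance between tree-embedded softmax vectors and adds a Jeffrey divergence regularizer. The tree matrix B, the edge weights, and the weight lam are illustrative assumptions, and a full SimCLR-style objective would additionally place this distance inside a contrastive loss over negative pairs.

import torch
import torch.nn.functional as F

def tree_wasserstein(p, q, B, edge_weights):
    # TWD as an L1 distance between tree-embedded vectors.
    # p, q: (batch, n_leaves) probability vectors (e.g., softmax outputs)
    # B: (n_edges, n_leaves) 0/1 matrix, B[e, v] = 1 if leaf v lies below edge e
    # edge_weights: (n_edges,) nonnegative edge lengths
    diff = (p - q) @ B.T                      # mass imbalance below each edge
    return (diff.abs() * edge_weights).sum(dim=1)

def jeffrey_divergence(p, q, eps=1e-8):
    # Symmetric KL (Jeffrey) divergence used as a regularizer.
    p, q = p.clamp_min(eps), q.clamp_min(eps)
    return ((p - q) * (p.log() - q.log())).sum(dim=1)

def pairwise_loss(z1, z2, B, edge_weights, lam=0.1):
    # Distance between two augmented views under a softmax probability model,
    # plus the Jeffrey divergence regularizer (lam is an assumed weight).
    p, q = F.softmax(z1, dim=1), F.softmax(z2, dim=1)
    return (tree_wasserstein(p, q, B, edge_weights)
            + lam * jeffrey_divergence(p, q)).mean()

# Toy usage: a star tree with edge weights 1/2 reduces TWD to total variation.
n_leaves = 8
B = torch.eye(n_leaves)                       # one edge per leaf (star tree)
w = torch.full((n_leaves,), 0.5)
z1, z2 = torch.randn(4, n_leaves), torch.randn(4, n_leaves)
print(pairwise_loss(z1, z2, B, w).item())

For a deeper tree, B and edge_weights would instead come from a hierarchical construction such as the ClusterTree variant mentioned in the abstract; the L1 form of the loss is what keeps it cheap to evaluate and differentiate during training.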
Pages: 17