Self-supervised Siamese Autoencoders

Cited by: 1
Authors
Baier, Friederike [1 ]
Mair, Sebastian [2 ]
Fadel, Samuel G. [3 ]
Affiliations
[1] Leuphana Univ Luneburg, Luneburg, Germany
[2] Uppsala Univ, Uppsala, Sweden
[3] Linkoping Univ, Linkoping, Sweden
Keywords
Self-supervised learning; representation learning; Siamese networks; denoising autoencoder; pre-training; image classification;
DOI
10.1007/978-3-031-58547-0_10
CLC number
TP18 [Theory of artificial intelligence]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
In contrast to fully-supervised models, self-supervised representation learning requires only a fraction of the data to be labeled and often achieves the same or even higher downstream performance. The goal is to pre-train deep neural networks on a self-supervised task so that they can subsequently extract meaningful features from raw input data. Previously, autoencoders and Siamese networks have been successfully employed as feature extractors for tasks such as image classification, but each has its own shortcomings and benefits. In this paper, we combine their complementary strengths by proposing a new method called SidAE (Siamese denoising autoencoder). Using an image classification downstream task, we show that our model outperforms two self-supervised baselines across multiple data sets and scenarios; crucially, this includes settings in which only a small amount of labeled data is available. Empirically, the Siamese component has the larger impact, but the denoising autoencoder is nevertheless necessary to improve performance.
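To make the described combination concrete, below is a minimal PyTorch sketch of how a Siamese objective (here a SimSiam-style stop-gradient loss, used as a stand-in) can be trained jointly with a denoising reconstruction term on two corrupted views of the same image. The layer sizes, the corruption scheme, and the loss weight `lambda_` are illustrative assumptions, not the architecture or hyperparameters reported in the paper.

```python
# Minimal sketch of a Siamese denoising autoencoder in the spirit of SidAE.
# The layer sizes, the SimSiam-style predictor/stop-gradient loss, and the
# weighting factor `lambda_` are illustrative assumptions for 3x32x32 inputs,
# not the exact configuration reported in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SidAESketch(nn.Module):
    def __init__(self, dim=512, proj_dim=128):
        super().__init__()
        # Shared encoder applied to both corrupted/augmented views (Siamese weight sharing).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, dim),
        )
        # Decoder reconstructs the clean image from the latent code (denoising objective).
        self.decoder = nn.Sequential(
            nn.Linear(dim, 64 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )
        # Projection and prediction heads for the Siamese branch.
        self.projector = nn.Linear(dim, proj_dim)
        self.predictor = nn.Sequential(
            nn.Linear(proj_dim, proj_dim), nn.ReLU(),
            nn.Linear(proj_dim, proj_dim),
        )

    def siamese_loss(self, z1, z2):
        # Symmetric negative cosine similarity with a stop-gradient on the target branch.
        p1, p2 = self.predictor(z1), self.predictor(z2)
        return -0.5 * (F.cosine_similarity(p1, z2.detach(), dim=-1).mean()
                       + F.cosine_similarity(p2, z1.detach(), dim=-1).mean())

    def forward(self, x_clean, view1, view2, lambda_=1.0):
        h1, h2 = self.encoder(view1), self.encoder(view2)
        z1, z2 = self.projector(h1), self.projector(h2)
        # Denoising term: reconstruct the clean image from each corrupted view.
        recon = 0.5 * (F.mse_loss(self.decoder(h1), x_clean)
                       + F.mse_loss(self.decoder(h2), x_clean))
        return self.siamese_loss(z1, z2) + lambda_ * recon


# One pre-training step on a clean batch x and two noisy views of it.
model = SidAESketch()
x = torch.rand(8, 3, 32, 32)
view1, view2 = x + 0.1 * torch.randn_like(x), x + 0.1 * torch.randn_like(x)
loss = model(x, view1, view2)
loss.backward()
```

After pre-training, only the encoder would be kept and reused, frozen or fine-tuned, as the feature extractor for the downstream image classifier.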
Pages: 117-128
Number of pages: 12
Related papers (50 records in total)
  • [1] Self-supervised autoencoders for clustering and classification
    Nousi, Paraskevi
    Tefas, Anastasios
    EVOLVING SYSTEMS, 2020, 11 (03) : 453 - 466
  • [3] Self-supervised speaker verification with simple Siamese network and self-supervised regularization
    Sang, Mufan
    Li, Haoqi
    Liu, Fang
    Arnold, Andrew O.
    Wan, Li
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 6127 - 6131
  • [4] Self-Supervised Autoencoders for Visual Anomaly Detection
    Bauer, Alexander
    Nakajima, Shinichi
    Mueller, Klaus-Robert
    MATHEMATICS, 2024, 12 (24)
  • [5] GraphMAE: Self-Supervised Masked Graph Autoencoders
    Hou, Zhenyu
    Liu, Xiao
    Cen, Yukuo
    Dong, Yuxiao
    Yang, Hongxia
    Wang, Chunjie
    Tang, Jie
    PROCEEDINGS OF THE 28TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2022, 2022, : 594 - 604
  • [6] Self-supervised Bernoulli Autoencoders for Semi-supervised Hashing
    Nanculef, Ricardo
    Mena, Francisco
    Macaluso, Antonio
    Lodi, Stefano
    Sartori, Claudio
    PROGRESS IN PATTERN RECOGNITION, IMAGE ANALYSIS, COMPUTER VISION, AND APPLICATIONS, CIARP 2021, 2021, 12702 : 258 - 268
  • [7] Contrastive Masked Autoencoders for Self-Supervised Video Hashing
    Wang, Yuting
    Wang, Jinpeng
    Chen, Bin
    Zeng, Ziyun
    Xia, Shu-Tao
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 3, 2023, : 2733 - 2741
  • [8] Denoising Diffusion Autoencoders are Unified Self-supervised Learners
    Xiang, Weilai
    Yang, Hongyu
    Huang, Di
    Wang, Yunhong
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 15756 - 15766
  • [9] Masked Autoencoders for Point Cloud Self-supervised Learning
    Pang, Yatian
    Wang, Wenxiao
    Tay, Francis E. H.
    Liu, Wei
    Tian, Yonghong
    Yuan, Li
    COMPUTER VISION - ECCV 2022, PT II, 2022, 13662 : 604 - 621
  • [10] Cascaded Siamese Self-supervised Audio to Video GAN
    Aldausari, Nuha
    Sowmya, Arcot
    Marcus, Nadine
    Mohammadi, Gelareh
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2022, 2022, : 4690 - 4699