SiT: Exploring Flow and Diffusion-Based Generative Models with Scalable Interpolant Transformers

Cited by: 1
Authors
Ma, Nanye [1 ]
Goldstein, Mark [1 ]
Albergo, Michael S. [1 ]
Boffi, Nicholas M. [1 ]
Vanden-Eijnden, Eric [1 ]
Xie, Saining [1 ]
Affiliations
[1] NYU, New York, NY 10016 USA
Source
COMPUTER VISION - ECCV 2024, PT LXXVII | 2024, Vol. 15135
Keywords
DOI
10.1007/978-3-031-72980-5_2
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We present Scalable Interpolant Transformers (SiT), a family of generative models built on the backbone of Diffusion Transformers (DiT). The interpolant framework, which connects two distributions more flexibly than standard diffusion models, enables a modular study of the design choices that shape generative models built on dynamical transport: learning in discrete or continuous time, the objective function, the interpolant that connects the distributions, and deterministic or stochastic sampling. By carefully introducing these ingredients, SiT surpasses DiT uniformly across model sizes on the conditional ImageNet 256 x 256 and 512 x 512 benchmarks using the exact same model structure, number of parameters, and GFLOPs. By exploring various diffusion coefficients, which can be tuned separately from learning, SiT achieves FID-50K scores of 2.06 and 2.62 on these benchmarks, respectively. Code is available here: https://github.com/willisma/SiT.
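The interpolant idea the abstract refers to can be illustrated in a few lines. The sketch below is not the authors' code; it shows only the simplest case, a linear interpolant x_t = (1 - t) x0 + t x1 between a noise sample and a data sample, whose time derivative (x1 - x0) serves as the regression target for a velocity model. The function names are illustrative.

```python
import numpy as np

def linear_interpolant(x0, x1, t):
    """x_t = (1 - t) * x0 + t * x1: a path connecting the two endpoint samples."""
    return (1.0 - t) * x0 + t * x1

def velocity_target(x0, x1):
    """d/dt of the linear interpolant; the regression target for a velocity model."""
    return x1 - x0

rng = np.random.default_rng(0)
x0 = rng.standard_normal(4)  # sample from the base (noise) distribution
x1 = rng.standard_normal(4)  # stand-in for a data sample
t = 0.5
xt = linear_interpolant(x0, x1, t)  # point on the path at time t
v = velocity_target(x0, x1)         # what the network would be trained to predict
```

Other interpolants (e.g. trigonometric schedules) and stochastic sampling with a tunable diffusion coefficient generalize this linear case, which is what allows the modular comparisons described in the abstract.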
Pages: 23-40
Page count: 18