Adding Conditional Control to Text-to-Image Diffusion Models

Cited by: 1181
Authors
Zhang, Lvmin [1 ]
Rao, Anyi [1 ]
Agrawala, Maneesh [1 ]
Affiliations
[1] Stanford Univ, Stanford, CA 94305 USA
Source
2023 IEEE/CVF International Conference on Computer Vision (ICCV) | 2023
DOI
10.1109/ICCV51070.2023.00355
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with "zero convolutions" (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, e.g., edges, depth, segmentation, human pose, etc., with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1M) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models.
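The abstract's core mechanism is the "zero convolution": a zero-initialized convolution that lets a trainable copy of the encoder attach to the frozen pretrained model without perturbing it at the start of finetuning. Below is a minimal PyTorch-style sketch of that idea; it is an illustration based on the abstract, not the authors' released code, and `frozen_block` / `trainable_copy` are hypothetical placeholders for an encoder block of the pretrained diffusion U-Net and its clone.

```python
import torch
import torch.nn as nn

def zero_conv(channels: int) -> nn.Conv2d:
    """1x1 convolution with weights and bias initialized to zero."""
    conv = nn.Conv2d(channels, channels, kernel_size=1)
    nn.init.zeros_(conv.weight)
    nn.init.zeros_(conv.bias)
    return conv

class ControlledBlock(nn.Module):
    """Sketch: frozen pretrained block plus a trainable copy fed by a condition.

    Names are illustrative; the real ControlNet wires these connections across
    the Stable Diffusion U-Net encoder and middle blocks.
    """
    def __init__(self, frozen_block: nn.Module, trainable_copy: nn.Module, channels: int):
        super().__init__()
        self.frozen_block = frozen_block
        for p in self.frozen_block.parameters():
            p.requires_grad_(False)           # lock the production-ready model
        self.trainable_copy = trainable_copy   # learns the conditional control
        self.zero_in = zero_conv(channels)     # condition enters via a zero conv
        self.zero_out = zero_conv(channels)    # control is added back via a zero conv

    def forward(self, x: torch.Tensor, condition: torch.Tensor) -> torch.Tensor:
        base = self.frozen_block(x)
        control = self.trainable_copy(x + self.zero_in(condition))
        # Both zero convs output zero at initialization, so the block behaves
        # exactly like the frozen model until the parameters grow from zero.
        return base + self.zero_out(control)
```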
Pages: 3813-3824
Page count: 12