DDIMCACHE: AN ENHANCED TEXT-TO-IMAGE DIFFUSION MODEL ON MOBILE DEVICES

Cited by: 0
Authors
Wu, Qifeng [1]
Affiliations
[1] Gannan Univ Sci & Technol, Ganzhou 341000, Jiangxi, Peoples R China
Keywords
diffusion model; text-to-image; mobile devices
DOI
10.14736/kyb-2024-6-0819
CLC number
TP3 [computing technology, computer technology]
Discipline code
0812
Abstract
On June 11, 2024, OpenAI announced a collaboration with Apple to deeply integrate the ChatGPT generative language model into Apple's product lineup. With support from various generative AI models, devices like smartphones will become more intelligent. The text-to-image diffusion model, known for its stable and superior generative capabilities, has gained wide recognition in image generation and will undoubtedly play a crucial role on mobile devices. However, the large size and complex architecture of diffusion models result in high computational costs and slow execution speeds. As a result, diffusion models require high-end GPUs or cloud-based inference, which often raises personal privacy and data security concerns. This paper presents a multiplicative-effect joint optimization method for complex models such as diffusion models, enabling efficient execution on mobile devices. The method integrates multiple optimization strategies, leveraging their interactions to create synergies and enhance overall performance. Building on this multiplicative-effect joint optimization approach, we introduce DDIMCache, an enhanced text-to-image diffusion model. DDIMCache maintains image generation quality while achieving optimal speed, generating 512×512 images in approximately 6 seconds. This provides powerful image generation capabilities and an enhanced user experience for mobile users. In addition, as a foundation model, Stable Diffusion supports further applications such as image editing, inpainting, style transfer, and super-resolution, all of which can have a significant impact. The ability to run the model entirely on mobile devices without an internet connection will open up endless possibilities.
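The abstract does not detail DDIMCache's caching mechanism, but its name points to DDIM sampling (Song et al., 2020) combined with reuse of expensive intermediate results. As a rough, hedged illustration of that general idea (not the authors' actual method: `eps_model`, `cache_every`, and the reuse policy below are hypothetical), the sketch runs a deterministic DDIM sampler that re-evaluates the noise-prediction network only every few steps and reuses the cached prediction in between:

```python
import numpy as np

def ddim_step(x_t, eps, alpha_bar_t, alpha_bar_prev):
    """One deterministic DDIM update (eta = 0), following Song et al., 2020."""
    # Predict the clean sample x0 from the current noisy sample and noise estimate.
    x0_pred = (x_t - np.sqrt(1.0 - alpha_bar_t) * eps) / np.sqrt(alpha_bar_t)
    # Move one step along the deterministic DDIM trajectory toward t-1.
    return np.sqrt(alpha_bar_prev) * x0_pred + np.sqrt(1.0 - alpha_bar_prev) * eps

def ddim_sample_with_cache(eps_model, x_T, alpha_bars, cache_every=2):
    """Sample with DDIM, calling the (expensive) noise predictor only every
    `cache_every` steps and reusing the cached prediction in between.
    Returns the final sample and the number of predictor calls made."""
    x = x_T
    cached_eps = None
    n_evals = 0  # counts expensive network evaluations actually performed
    steps = list(range(len(alpha_bars) - 1, 0, -1))  # t = T, T-1, ..., 1
    for k, t in enumerate(steps):
        if cached_eps is None or k % cache_every == 0:
            cached_eps = eps_model(x, t)
            n_evals += 1
        x = ddim_step(x, cached_eps, alpha_bars[t], alpha_bars[t - 1])
    return x, n_evals
```

With `cache_every=2`, the predictor runs on roughly half the steps, trading a small amount of fidelity for speed; on a mobile GPU the network evaluation dominates runtime, so the saving is nearly proportional.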
Pages: 819-833 (15 pages)