Stable Diffusion, a deep-learning text-to-image model, was released in 2022. It is primarily used to generate detailed images from text descriptions, though it can also be applied to other tasks such as inpainting, outpainting, and image-to-image translation.
Stable Diffusion is a latent diffusion model, a type of deep generative neural network developed by the CompVis group at LMU Munich. Stability AI released the model as a collaboration between CompVis LMU, Runway, and Stability AI, with support from EleutherAI and LAION.
Stable Diffusion's code and model weights have been released publicly, and the model can run on most consumer hardware equipped with a modest GPU with at least 8 GB of video RAM. This marked a departure from proprietary text-to-image models such as DALL-E and Midjourney, which were previously accessible only through cloud services.
The repository discussed here is a modified version of the Stable Diffusion repository, optimized to consume less VRAM at the cost of inference speed.
The following optimizations are used to reduce VRAM consumption:
- The Stable Diffusion model is split into four components that are sent to the GPU only when they are required, and are moved back to the CPU once their computation completes.
- Attention is computed in smaller slices rather than all at once, reducing peak memory usage.
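The second optimization can be sketched in a few lines. The snippet below is a minimal NumPy illustration of the sliced-attention idea, not the repository's actual implementation: queries are processed in chunks, so the full attention-score matrix never has to exist in memory at once.

```python
import numpy as np

def attention(q, k, v):
    # Standard scaled dot-product attention: materializes the full
    # (num_queries x num_keys) score matrix in one go.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def sliced_attention(q, k, v, slice_size=16):
    # Process queries in slices: the peak score-matrix size is only
    # (slice_size x num_keys) instead of (num_queries x num_keys).
    outputs = [attention(q[i:i + slice_size], k, v)
               for i in range(0, q.shape[0], slice_size)]
    return np.concatenate(outputs, axis=0)
```

Because softmax is applied independently per query row, slicing over queries gives exactly the same result as the full computation, only with lower peak memory.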
If you would like to run Stable Diffusion with less VRAM, download the repo and follow the instructions provided there.
Note that if you have an Nvidia GTX-series GPU, your output images may appear entirely green. This is because GTX-series cards do not support half-precision calculations, which is the default mode of calculation in this repository. To resolve the issue, use the --precision full argument; the disadvantage is increased GPU VRAM usage.
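The green-image failure mode is easier to understand given half precision's narrow numeric range. The snippet below is a standalone NumPy illustration (unrelated to the repository's code): values that are unremarkable in single precision overflow to infinity in float16, which is how intermediate activations can turn into inf/NaN and render as a blank color.

```python
import numpy as np

# float16 can represent values only up to about 65504;
# anything meaningfully larger overflows to infinity.
x = np.float32(70000.0)  # fine in single precision
y = np.float16(70000.0)  # overflows in half precision

print(np.isfinite(x))            # True
print(np.isinf(y))               # True
print(np.finfo(np.float16).max)  # 65504.0, the largest representable float16
```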
Do you want to work with us on any of these problems?
We work on this and many other exciting problems, sometimes for fun and sometimes with commercial goals. If you have ideas or want to explore these fields with us, please get in touch or reply to this email.