Personalizing Text-to-Image Generation via Aesthetic Gradients (Paper+Code)
This paper proposes aesthetic gradients, a method for personalizing a CLIP-conditioned diffusion model: a set of user-supplied images defines a custom aesthetic, and the generative process is guided toward it. The approach is validated through qualitative and quantitative experiments using the recently released Stable Diffusion model and several aesthetically filtered datasets.
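At its core, the method builds an "aesthetic embedding" by averaging the normalized CLIP image features of the chosen image set, then takes gradient steps that pull the prompt's text representation toward that embedding. The sketch below illustrates that idea in plain NumPy with random vectors standing in for CLIP features; in the actual paper and repo the gradient steps update the CLIP text encoder's weights via PyTorch, not the embedding directly, so treat this as a simplified illustration rather than the authors' implementation.

```python
import numpy as np

def aesthetic_embedding(image_feats):
    """Unit-normalize each image feature, average them, and
    re-normalize: a simplified version of how the aesthetic
    embedding is built from the user's image set."""
    feats = image_feats / np.linalg.norm(image_feats, axis=1, keepdims=True)
    e = feats.mean(axis=0)
    return e / np.linalg.norm(e)

def personalize(text_feat, e, lr=0.1, steps=10):
    """Gradient-ascent steps that increase the cosine similarity
    between the text representation and the aesthetic embedding
    (the 'aesthetic gradient'), applied here to a bare vector
    for clarity instead of the text encoder's weights."""
    t = text_feat / np.linalg.norm(text_feat)
    for _ in range(steps):
        # Gradient of cos-sim t.e w.r.t. t, projected onto the
        # unit sphere so t stays normalized after each step.
        grad = e - (t @ e) * t
        t = t + lr * grad
        t = t / np.linalg.norm(t)
    return t

rng = np.random.default_rng(0)
imgs = rng.normal(size=(8, 512))   # stand-ins for CLIP image features
txt = rng.normal(size=512)         # stand-in for a CLIP text feature

e = aesthetic_embedding(imgs)
txt_p = personalize(txt, e)
before = (txt / np.linalg.norm(txt)) @ e
after = txt_p @ e                  # similarity rises after the steps
```

After the steps, the conditioning vector sits closer to the aesthetic embedding, so the diffusion model's output drifts toward the custom style while the prompt's content is largely preserved.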
Paper:
Personalizing Text-to-Image Generation via Aesthetic Gradients

Code:
GitHub - vicgalle/stable-diffusion-aesthetic-gradients: Personalization for Stable Diffusion via Aesthetic Gradients 🎨
The following video by koiboi explains how to build an aesthetic embedding yourself.
Do you like our work?
Consider becoming a paying subscriber to support us!