
Image animation

Yesterday I posted about how you can generate images from text on your Mac. In our WhatsApp group, I got the following message:

I have an application idea where I need the AI character to talk using text to speech, but how do we achieve the emotion and expression? Any suggestions?

So I thought, why not work towards achieving this, starting with animation?

Today we will demonstrate how to convert an image into an animation using the code provided with the paper below.

Thin-Plate Spline Motion Model for Image Animation

Abstract:

Image animation brings life to the static object in the source image according to the driving video. Recent works attempt to perform motion transfer on arbitrary objects through unsupervised methods without using a priori knowledge. However, it remains a significant challenge for current unsupervised methods when there is a large pose gap between the objects in the source and driving images. In this paper, a new end-to-end unsupervised motion transfer framework is proposed to overcome such issue. Firstly, we propose thin-plate spline motion estimation to produce a more flexible optical flow, which warps the feature maps of the source image to the feature domain of the driving image. Secondly, in order to restore the missing regions more realistically, we leverage multi-resolution occlusion masks to achieve more effective feature fusion. Finally, additional auxiliary loss functions are designed to ensure that there is a clear division of labor in the network modules, encouraging the network to generate high-quality images. Our method can animate a variety of objects, including talking faces, human bodies, and pixel animations. Experiments demonstrate that our method performs better on most benchmarks than the state of the art with visible improvements in pose-related metrics.

We will discuss the details of this paper in an upcoming post. Here, let me share the steps I took to create an image animation.

1. Generate an image using Stable Diffusion (DreamStudio or DALL-E are good places to do that). I generated the following image. I also fetched a synthetic face from https://thispersondoesnotexist.com (a small fetch-and-resize sketch follows this list).

2. Take a driving video clip. I used the one provided with the paper's code, but you can use any short video clip.

3. Using the code released with the paper above, I generated the following two videos (a rough local-run sketch appears after the Colab link below).

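For steps 1 and 2, the sketch below shows one way you might fetch a synthetic face and resize both the source image and the driving clip to 256x256, the resolution the pretrained talking-head checkpoint works at. The file names are placeholders of mine, and I am assuming https://thispersondoesnotexist.com still serves a JPEG straight from its root URL; you will need requests, imageio (plus imageio-ffmpeg), and scikit-image installed.

# Rough preprocessing sketch: fetch a synthetic face and resize the source
# image and driving clip to 256x256. File names are placeholders; the site
# serving a JPEG at its root URL is an assumption.
import imageio
import requests
from skimage.transform import resize

# Grab a synthetic face -- every request returns a newly generated one.
resp = requests.get(
    "https://thispersondoesnotexist.com",
    headers={"User-Agent": "Mozilla/5.0"},  # some hosts reject bare clients
    timeout=30,
)
with open("source_raw.jpg", "wb") as f:
    f.write(resp.content)

# Resize the source image to 256x256 (resize returns floats in [0, 1]).
source = imageio.imread("source_raw.jpg")
imageio.imwrite("source.png", (resize(source, (256, 256)) * 255).astype("uint8"))

# Resize every frame of the driving clip the same way, keeping the frame rate.
reader = imageio.get_reader("driving_raw.mp4")
fps = reader.get_meta_data()["fps"]
frames = [(resize(frame, (256, 256)) * 255).astype("uint8") for frame in reader]
imageio.mimsave("driving.mp4", frames, fps=fps)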

You can experiment with the code in Google Colab here:

Google Colaboratory
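
If you would rather run it outside Colab, the rough sketch below clones the code released with the paper and calls its demo script on the two files prepared above. The repository URL, checkpoint path, and flag names are my assumptions (they follow the first-order-model convention the code builds on), so double-check them against the project's README.

# Rough local-run sketch. The repo URL, checkpoint file, and flag names are
# assumptions based on the first-order-model convention; verify them against
# the project's README before running.
import subprocess

REPO = "https://github.com/yoyo-nb/Thin-Plate-Spline-Motion-Model"  # assumed URL

# 1) Clone the code released with the paper.
subprocess.run(["git", "clone", REPO], check=True)

# 2) Animate source.png with the motion of driving.mp4 using the pretrained
#    talking-head checkpoint (downloaded separately into checkpoints/).
subprocess.run(
    [
        "python", "demo.py",
        "--config", "config/vox-256.yaml",          # model config (assumed path)
        "--checkpoint", "checkpoints/vox.pth.tar",  # pretrained weights (assumed name)
        "--source_image", "../source.png",          # the generated/fetched face
        "--driving_video", "../driving.mp4",        # the short driving clip
        "--result_video", "../animation.mp4",       # output animation
    ],
    cwd="Thin-Plate-Spline-Motion-Model",
    check=True,
)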

In upcoming posts we will talk about a few more open-source libraries and how they can be used to achieve the goal outlined above.

