Animate Stable Diffusion by interpolating between two prompts
Code: https://github.com/andreasjansson/cog-stable-diffusion/tree/animation
How does it work?
We start with noise, then use Stable Diffusion to denoise n steps toward the midpoint between the start and end prompts, where
n = num_inference_steps * (1 - prompt_strength). The higher the prompt strength, the fewer the steps toward the midpoint.
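As a quick sketch of that arithmetic (the function name is hypothetical; the parameters mirror num_inference_steps and prompt_strength from the formula above):

```python
def steps_to_midpoint(num_inference_steps: int, prompt_strength: float) -> int:
    """n = num_inference_steps * (1 - prompt_strength), rounded to a whole step.

    Higher prompt_strength -> fewer shared denoising steps toward the
    midpoint, so more denoising remains to be done per frame.
    """
    return round(num_inference_steps * (1 - prompt_strength))

print(steps_to_midpoint(50, 0.8))  # 10 shared steps, 40 left per frame
print(steps_to_midpoint(50, 0.5))  # 25 shared steps
```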
Then, starting from that intermediate noisy output, we run the remaining denoising steps once per frame, at
num_animation_frames interpolation points between the start and end prompts. Because every frame starts from the same intermediate output, the model generates samples that are similar to each other, resulting in smoother animations.
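A minimal sketch of the interpolation step, not the repo's actual code: it linearly interpolates between two prompt embeddings to get one conditioning vector per frame (the real implementation may use a different interpolation scheme, such as slerp):

```python
import numpy as np

def frame_embeddings(start_emb: np.ndarray, end_emb: np.ndarray,
                     num_animation_frames: int) -> list[np.ndarray]:
    """Return num_animation_frames embeddings evenly spaced between
    start_emb and end_emb, endpoints included."""
    ts = np.linspace(0.0, 1.0, num_animation_frames)
    return [(1 - t) * start_emb + t * end_emb for t in ts]

# Toy 4-dimensional "embeddings" to show the spacing.
start = np.zeros(4)
end = np.ones(4)
frames = frame_embeddings(start, end, 5)
print(frames[2])  # the midpoint embedding: [0.5 0.5 0.5 0.5]
```

Each of these per-frame embeddings would condition the remaining denoising steps from the shared intermediate output.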
Finally, the generated samples are interpolated with Google’s FILM (Frame Interpolation for Large Motion) for extra smoothness.
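FILM predicts an in-between frame from each pair of neighboring frames. The toy sketch below stands in for the model with a simple average, just to show how one interpolation pass roughly doubles the frame count (FILM itself is a learned network, not an average):

```python
from typing import Callable

def interpolate_pass(frames: list, midpoint: Callable) -> list:
    """Insert one synthesized frame between each neighboring pair,
    turning k frames into 2k - 1 frames."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        out.append(midpoint(a, b))  # FILM would predict this frame
    out.append(frames[-1])
    return out

# Placeholder "frames" as scalars; midpoint is a stand-in for the model.
frames = [0.0, 1.0, 2.0]
smooth = interpolate_pass(frames, lambda a, b: (a + b) / 2)
print(smooth)  # [0.0, 0.5, 1.0, 1.5, 2.0]
```

Running this pass repeatedly yields progressively smoother animations at higher frame rates.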