Introduction#
This article gives a brief introduction to AnimateDiff.
AnimateDiff adds animation to personalized text-to-image generation models without per-model tuning, and it can also run as an SD WebUI plugin.
Main Content#
1. What is AnimateDiff?#
AnimateDiff enables users to add animation to personalized text-to-image diffusion models without model-specific fine-tuning: a motion module trained on video data is inserted into the frozen base model, so any compatible Stable Diffusion checkpoint can produce animations. The sketch below illustrates the idea.
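To make this concrete, here is a minimal PyTorch sketch of the mechanism (not the official implementation): the pretrained spatial layers stay frozen while a trainable temporal attention layer, standing in for the motion module, mixes information across frames. All class and variable names are illustrative.

import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    # Self-attention over the frame axis; the only trainable part.
    def __init__(self, dim, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):
        # x: (batch * height * width, frames, channels)
        h = self.norm(x)
        out, _ = self.attn(h, h, h)
        return x + out  # residual: an untrained module stays close to identity

class MotionBlock(nn.Module):
    # A frozen spatial layer followed by a trainable temporal layer.
    def __init__(self, spatial_layer, dim):
        super().__init__()
        self.spatial = spatial_layer
        for p in self.spatial.parameters():
            p.requires_grad = False  # the base T2I weights are left untouched
        self.temporal = TemporalAttention(dim)

    def forward(self, x):
        # x: (batch, frames, channels, height, width)
        b, f, c, h, w = x.shape
        x = self.spatial(x.reshape(b * f, c, h, w)).reshape(b, f, c, h, w)
        # fold space into the batch, attend across frames
        x = x.permute(0, 3, 4, 1, 2).reshape(b * h * w, f, c)
        x = self.temporal(x)
        return x.reshape(b, h, w, f, c).permute(0, 3, 4, 1, 2)

block = MotionBlock(nn.Conv2d(8, 8, 3, padding=1), dim=8)
video = torch.randn(1, 16, 8, 32, 32)  # a batch of 16 latent frames
print(block(video).shape)  # torch.Size([1, 16, 8, 32, 32])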
2. Using and Training AnimateDiff#
To use AnimateDiff, you need a base text-to-image generation model plus the pre-trained weights of the motion module; the official repository also provides download scripts for several personalized models. With these in place, animations are generated by running a single command, as the steps below show.
- Install the main project
git clone https://github.com/guoyww/AnimateDiff.git
cd AnimateDiff
conda env create -f environment.yaml
conda activate animatediff
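After the environment is created, a quick sanity check that PyTorch can see the GPU may save debugging later (illustrative, not part of the repository):

import torch
# AnimateDiff inference needs a CUDA GPU; this should print True.
print("torch", torch.__version__, "CUDA available:", torch.cuda.is_available())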
- Download the base text-to-image generation model (Stable Diffusion v1.5)
Fetch the weights from https://huggingface.co/runwayml/stable-diffusion-v1-5 and place them under models/StableDiffusion/; a scripted example follows.
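One way to fetch them is with the huggingface_hub library (pip install huggingface_hub); the exact target directory is an assumption based on the layout above, so adjust it to your checkout:

from huggingface_hub import snapshot_download

# Download the full SD v1.5 repository; local_dir is an assumed layout.
snapshot_download(
    repo_id="runwayml/stable-diffusion-v1-5",
    local_dir="models/StableDiffusion/stable-diffusion-v1-5",
)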
- Download the pre-trained motion module weights
Download them from this location: https://huggingface.co/guoyww/animatediff (a scripted example is sketched below).
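The same library can download a single checkpoint; the filename below (mm_sd_v15.ckpt) and the target directory are assumptions, so check the repository page for the release you want:

from huggingface_hub import hf_hub_download

# Assumed filename and directory; verify against the Hugging Face repo.
hf_hub_download(
    repo_id="guoyww/animatediff",
    filename="mm_sd_v15.ckpt",
    local_dir="models/Motion_Module",
)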
- Download the officially provided pre-trained models
bash download_bashscripts/1-ToonYou.sh
bash download_bashscripts/2-Lyriel.sh
bash download_bashscripts/3-RcnzCartoon.sh
bash download_bashscripts/4-MajicMix.sh
bash download_bashscripts/5-RealisticVision.sh
bash download_bashscripts/6-Tusun.sh
bash download_bashscripts/7-FilmVelvia.sh
bash download_bashscripts/8-GhibliBackground.sh
The models are available on CivitAI.
- Run
For example, to generate an animation with the ToonYou model:
python -m scripts.animate --config configs/prompts/1-ToonYou.yaml
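The prompt configs under configs/prompts correspond to the download scripts above, so all of the examples can be rendered in one go; this small loop is a convenience sketch, not part of the repository:

import subprocess
from pathlib import Path

# Run scripts.animate once per prompt config (assumes the conda env is active).
for cfg in sorted(Path("configs/prompts").glob("*.yaml")):
    subprocess.run(["python", "-m", "scripts.animate", "--config", str(cfg)], check=True)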
3. Conclusion#
AnimateDiff now also runs as an SD WebUI plugin, which is very convenient, and any of the many excellent SD-based models can serve as the base text-to-image generation model.
Disclaimer#
This article is solely for personal learning purposes.
This article is synchronized with HBlog.