---
title: EchoMimic
emoji: 🎨
colorFrom: pink
colorTo: blue
sdk: gradio
sdk_version: 5.4.0
app_file: webgui.py
pinned: false
suggested_hardware: a10g-large
short_description: Audio-Driven Portrait Animations
---
EchoMimic: Lifelike Audio-Driven Portrait Animations through Editable Landmark Conditioning
📣 📣 Updates
- [2024.07.17] 🔥🔥🔥 Accelerated models and pipeline are released. The inference speed is improved by up to 10x (from ~7 min/240 frames to ~50 s/240 frames on a V100 GPU).
- [2024.07.14] 🔥 ComfyUI is now available. Thanks @smthemex for the contribution.
- [2024.07.13] 🔥 Thanks NewGenAI for the video installation tutorial.
- [2024.07.13] 🔥 We release our pose & audio driven code and models.
- [2024.07.12] 🔥 WebUI and GradioUI versions are released. We thank @greengerong, @Robin021 and @O-O1024 for their contributions.
- [2024.07.12] 🔥 Our paper is now public on arXiv.
- [2024.07.09] 🔥 We release our audio-driven code and models.
Gallery
Audio Driven (Sing)
Audio Driven (English)
Audio Driven (Chinese)
Landmark Driven
Audio + Selected Landmark Driven
(Some demo images above are sourced from image websites. If there is any infringement, we will immediately remove them and apologize.)
Installation
Download the Codes
git clone https://github.com/BadToBest/EchoMimic
cd EchoMimic
Python Environment Setup
- Tested System Environment: CentOS 7.2 / Ubuntu 22.04, CUDA >= 11.7
- Tested GPUs: A100(80G) / RTX4090D (24G) / V100(16G)
- Tested Python Version: 3.8 / 3.10 / 3.11
Create conda environment (Recommended):
conda create -n echomimic python=3.8
conda activate echomimic
Install packages with pip
pip install -r requirements.txt
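As an optional sanity check (not part of the original steps), you can confirm that the environment has a CUDA-enabled PyTorch build before downloading the larger assets; this assumes requirements.txt installs torch:

```bash
# Optional: verify the Python version and that PyTorch can see a GPU.
python --version
python -c "import torch; print(torch.__version__, '| CUDA available:', torch.cuda.is_available())"
```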
Download ffmpeg-static
Download and decompress ffmpeg-static, then
export FFMPEG_PATH=/path/to/ffmpeg-4.4-amd64-static
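As a sketch of the whole step (the download URL and archive name below are examples of a commonly used static-build mirror, not taken from this repo; substitute whichever ffmpeg-static archive you actually use):

```bash
# Fetch and unpack a static ffmpeg build (example mirror; adjust URL/version as needed).
wget https://www.johnvansickle.com/ffmpeg/old-releases/ffmpeg-4.4-amd64-static.tar.xz
tar -xJf ffmpeg-4.4-amd64-static.tar.xz

# Point EchoMimic at the unpacked directory; add the export to ~/.bashrc to persist it.
export FFMPEG_PATH="$(pwd)/ffmpeg-4.4-amd64-static"
"$FFMPEG_PATH/ffmpeg" -version   # quick check that the binary runs
```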
Download pretrained weights
git lfs install
git clone https://huggingface.co/BadToBest/EchoMimic pretrained_weights
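If git-lfs is not convenient on your machine, the same weights can be fetched with the Hugging Face CLI instead (an alternative sketch, assuming you are fine installing huggingface_hub):

```bash
# Alternative to git lfs: download the weight snapshot into the same directory.
pip install -U "huggingface_hub[cli]"
huggingface-cli download BadToBest/EchoMimic --local-dir pretrained_weights
```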
The pretrained_weights directory is organized as follows:
./pretrained_weights/
├── denoising_unet.pth
├── reference_unet.pth
├── motion_module.pth
├── face_locator.pth
├── sd-vae-ft-mse
│   └── ...
├── sd-image-variations-diffusers
│   └── ...
└── audio_processor
    └── whisper_tiny.pt
Here, denoising_unet.pth / reference_unet.pth / motion_module.pth / face_locator.pth are the main checkpoints of EchoMimic. The other models in this hub (sd-vae-ft-mse, sd-image-variations-diffusers and the Whisper audio processor) can also be downloaded from their original hubs; we thank the authors for their brilliant work.
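An optional quick check (not from the original instructions) that the four main checkpoints ended up where the inference configs expect them:

```bash
# List the main EchoMimic checkpoints; any "MISSING" line means the download is incomplete.
for f in denoising_unet.pth reference_unet.pth motion_module.pth face_locator.pth; do
    [ -f "pretrained_weights/$f" ] && echo "OK       $f" || echo "MISSING  $f"
done
```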
Audio-Driven Algo Inference
Run the python inference script:
python -u infer_audio2vid.py
python -u infer_audio2vid_pose.py
Audio-Driven Algo Inference on Your Own Cases
Edit the inference config file ./configs/prompts/animation.yaml, and add your own case:
test_cases:
  "path/to/your/image":
    - "path/to/your/audio"
Then run the python inference script:
python -u infer_audio2vid.py
Motion Alignment between Ref. Img. and Driven Vid.
(First, download the checkpoints with the '_pose.pth' suffix from Hugging Face.)
Set driver_video and ref_image to your own paths in demo_motion_sync.py, then run
python -u demo_motion_sync.py
Audio & Pose-Driven Algo Inference
Edit ./configs/prompts/animation_pose.yaml, then run
python -u infer_audio2vid_pose.py
Pose-Driven Algo Inference
Set draw_mouse=True in line 135 of infer_audio2vid_pose.py. Edit ./configs/prompts/animation_pose.yaml, then run
python -u infer_audio2vid_pose.py
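If you prefer not to edit the script by hand, a hypothetical one-liner such as the following does the same thing, assuming the flag currently appears in infer_audio2vid_pose.py as draw_mouse=False (check line 135 before running it):

```bash
# Hypothetical helper: flip draw_mouse to True in place, keeping a .bak backup of the script.
sed -i.bak 's/draw_mouse=False/draw_mouse=True/' infer_audio2vid_pose.py
```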
Run the Gradio UI
Thanks to the contribution from @Robin021:
python -u webgui.py --server_port=3000
Release Plans
Status | Milestone | ETA |
---|---|---|
✅ | The inference source code of the Audio-Driven algo released on GitHub | 9th July, 2024 |
✅ | Pretrained models trained on English and Mandarin Chinese released | 9th July, 2024 |
✅ | The inference source code of the Pose-Driven algo released on GitHub | 13th July, 2024 |
✅ | Pretrained models with better pose control released | 13th July, 2024 |
✅ | Accelerated models released | 17th July, 2024 |
🚀 | Pretrained models with better singing performance | TBD |
🚀 | Large-scale and high-resolution Chinese-based talking head dataset | TBD |
Acknowledgements
We would like to thank the contributors to the AnimateDiff, Moore-AnimateAnyone and MuseTalk repositories for their open research and exploration.
We are also grateful to V-Express and hallo for their outstanding work in the area of diffusion-based talking heads.
If we have missed any open-source projects or related articles, please let us know and we will add the acknowledgement immediately.
Citation
If you find our work useful for your research, please consider citing the paper:
@misc{chen2024echomimic,
title={EchoMimic: Lifelike Audio-Driven Portrait Animations through Editable Landmark Conditioning},
author={Zhiyuan Chen and Jiajiong Cao and Zhiquan Chen and Yuming Li and Chenguang Ma},
year={2024},
archivePrefix={arXiv},
primaryClass={cs.CV}
}