LivePortrait
ONNX
cleardusk committed on
Commit 3a3c029 • 1 Parent(s): 477e235

doc: update readme

Files changed (1)
  1. README.md +44 -25
README.md CHANGED
@@ -3,8 +3,6 @@ license: mit
3
  library_name: liveportrait
4
  ---
5
 
6
- <h1 align="center">LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control</h1>
7
-
8
  <div align='center'>
9
  <a href='https://github.com/cleardusk' target='_blank'><strong>Jianzhu Guo</strong></a><sup> 1†</sup>&emsp;
10
  <a href='https://github.com/KwaiVGI' target='_blank'><strong>Dingyun Zhang</strong></a><sup> 1,2</sup>&emsp;
@@ -21,13 +19,17 @@ library_name: liveportrait
21
  <div align='center'>
22
  <sup>1 </sup>Kuaishou Technology&emsp; <sup>2 </sup>University of Science and Technology of China&emsp; <sup>3 </sup>Fudan University&emsp;
23
  </div>
 
 
 
24
 
25
  <br>
26
- <div align="center" style="display: flex; justify-content: center; flex-wrap: wrap;">
27
  <!-- <a href='LICENSE'><img src='https://img.shields.io/badge/license-MIT-yellow'></a> -->
28
  <a href='https://arxiv.org/pdf/2407.03168'><img src='https://img.shields.io/badge/arXiv-LivePortrait-red'></a>
29
  <a href='https://liveportrait.github.io'><img src='https://img.shields.io/badge/Project-LivePortrait-green'></a>
30
  <a href='https://huggingface.co/spaces/KwaiVGI/liveportrait'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue'></a>
 
31
  </div>
32
  <br>
33
 
@@ -40,38 +42,47 @@ library_name: liveportrait
40
 
41
 
42
  ## 🔥 Updates
43
- - **`2024/07/10`**: 💪 We support audio and video concatenating, driving video auto-cropping, and template making to protect privacy. More to see [here](docs/changelog/2024-07-10.md).
 
 
44
  - **`2024/07/09`**: 🤗 We released the [HuggingFace Space](https://huggingface.co/spaces/KwaiVGI/liveportrait), thanks to the HF team and [Gradio](https://github.com/gradio-app/gradio)!
45
  - **`2024/07/04`**: 😊 We released the initial version of the inference code and models. Continuous updates, stay tuned!
46
  - **`2024/07/04`**: 🔥 We released the [homepage](https://liveportrait.github.io) and technical report on [arXiv](https://arxiv.org/pdf/2407.03168).
47
 
48
 
49
-
50
- ## Introduction
51
  This repo, named **LivePortrait**, contains the official PyTorch implementation of our paper [LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control](https://arxiv.org/pdf/2407.03168).
52
  We are actively updating and improving this repository. If you find any bugs or have suggestions, you are welcome to raise issues or submit pull requests (PRs) 💖.
53
 
54
- ## 🔥 Getting Started
55
  ### 1. Clone the code and prepare the environment
56
  ```bash
57
  git clone https://github.com/KwaiVGI/LivePortrait
58
  cd LivePortrait
59
 
60
  # create env using conda
61
- conda create -n LivePortrait python==3.9.18
62
  conda activate LivePortrait
 
63
  # install dependencies with pip
 
64
  pip install -r requirements.txt
 
 
65
  ```
66
 
67
- **Note:** make sure your system has [FFmpeg](https://ffmpeg.org/) installed!
68
 
69
  ### 2. Download pretrained weights
70
 
71
  The easiest way to download the pretrained weights is from HuggingFace:
72
  ```bash
73
- # you may need to run `git lfs install` first
74
- git clone https://huggingface.co/KwaiVGI/liveportrait pretrained_weights
 
 
 
 
75
  ```
76
 
77
  Alternatively, you can download all pretrained weights from [Google Drive](https://drive.google.com/drive/folders/1UtKgzKjFAOmZkhNK-OYT0caJ_w2XAnib) or [Baidu Yun](https://pan.baidu.com/s/1MGctWmNla_vZxDbEp2Dtzw?pwd=z5cn). Unzip and place them in `./pretrained_weights`.
@@ -99,10 +110,14 @@ pretrained_weights
99
 
100
  #### Fast hands-on
101
  ```bash
 
102
  python inference.py
 
 
 
103
  ```
104
 
105
- If the script runs successfully, you will get an output mp4 file named `animations/s6--d0_concat.mp4`. This file includes the following results: driving video, input image, and generated result.
106
 
107
  <p align="center">
108
  <img src="./docs/inference.gif" alt="image">
@@ -111,18 +126,18 @@ If the script runs successfully, you will get an output mp4 file named `animatio
111
  Or, you can change the input by specifying the `-s` and `-d` arguments:
112
 
113
  ```bash
 
114
  python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d0.mp4
115
 
116
- # disable pasting back to run faster
117
- python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d0.mp4 --no_flag_pasteback
118
 
119
  # see more options
120
  python inference.py -h
121
  ```
122
 
123
- #### Driving video auto-cropping
124
-
125
- 📕 To use your own driving video, we **recommend**:
126
  - Crop it to a **1:1** aspect ratio (e.g., 512x512 or 256x256 pixels), or enable auto-cropping by `--flag_crop_driving_video`.
127
  - Focus on the head area, similar to the example videos.
128
  - Minimize shoulder movement.
@@ -133,22 +148,25 @@ Below is an auto-cropping case by `--flag_crop_driving_video`:
133
  python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d13.mp4 --flag_crop_driving_video
134
  ```
135
 
136
- If you find the results of auto-cropping is not well, you can modify the `--scale_crop_video`, `--vy_ratio_crop_video` options to adjust the scale and offset, or do it manually.
137
 
138
  #### Motion template making
139
  You can also use the auto-generated motion template files ending with `.pkl` to speed up inference and **protect privacy**, for example:
140
  ```bash
141
- python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d5.pkl
 
142
  ```
143
 
144
- **Discover more interesting results on our [Homepage](https://liveportrait.github.io)** 😊
145
-
146
  ### 4. Gradio interface 🤗
147
 
148
  We also provide a Gradio <a href='https://github.com/gradio-app/gradio'><img src='https://img.shields.io/github/stars/gradio-app/gradio'></a> interface for a better experience; simply run:
149
 
150
  ```bash
 
151
  python app.py
 
 
 
152
  ```
153
 
154
  You can specify the `--server_port`, `--share`, `--server_name` arguments to suit your needs!
@@ -158,7 +176,7 @@ You can specify the `--server_port`, `--share`, `--server_name` arguments to sat
158
  # enable torch.compile for faster inference
159
  python app.py --flag_do_torch_compile
160
  ```
161
- **Note**: This method has not been fully tested. e.g., on Windows.
162
 
163
  **Or, try it out effortlessly on [HuggingFace](https://huggingface.co/spaces/KwaiVGI/LivePortrait) 🤗**
164
 
@@ -166,6 +184,7 @@ python app.py --flag_do_torch_compile
166
  We have also provided a script to evaluate the inference speed of each module:
167
 
168
  ```bash
 
169
  python speed.py
170
  ```
171
 
@@ -187,14 +206,14 @@ Discover the invaluable resources contributed by our community to enhance your L
187
 
188
  - [ComfyUI-LivePortraitKJ](https://github.com/kijai/ComfyUI-LivePortraitKJ) by [@kijai](https://github.com/kijai)
189
  - [comfyui-liveportrait](https://github.com/shadowcz007/comfyui-liveportrait) by [@shadowcz007](https://github.com/shadowcz007)
 
190
  - [LivePortrait hands-on tutorial](https://www.youtube.com/watch?v=uyjSTAOY7yI) by [@AI Search](https://www.youtube.com/@theAIsearch)
191
  - [ComfyUI tutorial](https://www.youtube.com/watch?v=8-IcDDmiUMM) by [@Sebastian Kamph](https://www.youtube.com/@sebastiankamph)
192
- - [LivePortrait In ComfyUI](https://www.youtube.com/watch?v=aFcS31OWMjE) by [@Benji](https://www.youtube.com/@TheFutureThinker)
193
  - [Replicate Playground](https://replicate.com/fofr/live-portrait) and [cog-comfyui](https://github.com/fofr/cog-comfyui) by [@fofr](https://github.com/fofr)
194
 
195
  And many more amazing contributions from our community!
196
 
197
- ## Acknowledgements
198
  We would like to thank the contributors of [FOMM](https://github.com/AliaksandrSiarohin/first-order-model), [Open Facevid2vid](https://github.com/zhanglonghao1992/One-Shot_Free-View_Neural_Talking_Head_Synthesis), [SPADE](https://github.com/NVlabs/SPADE), [InsightFace](https://github.com/deepinsight/insightface) repositories for their open research and contributions.
199
 
200
  ## Citation 💖
@@ -206,4 +225,4 @@ If you find LivePortrait useful for your research, welcome to 🌟 this repo and
206
  journal = {arXiv preprint arXiv:2407.03168},
207
  year = {2024}
208
  }
209
- ```
 
3
  library_name: liveportrait
4
  ---
5
 
 
 
6
  <div align='center'>
7
  <a href='https://github.com/cleardusk' target='_blank'><strong>Jianzhu Guo</strong></a><sup> 1†</sup>&emsp;
8
  <a href='https://github.com/KwaiVGI' target='_blank'><strong>Dingyun Zhang</strong></a><sup> 1,2</sup>&emsp;
 
19
  <div align='center'>
20
  <sup>1 </sup>Kuaishou Technology&emsp; <sup>2 </sup>University of Science and Technology of China&emsp; <sup>3 </sup>Fudan University&emsp;
21
  </div>
22
+ <div align='center'>
23
+ <small><sup>†</sup> Corresponding author</small>
24
+ </div>
25
 
26
  <br>
27
+ <div align="center">
28
  <!-- <a href='LICENSE'><img src='https://img.shields.io/badge/license-MIT-yellow'></a> -->
29
  <a href='https://arxiv.org/pdf/2407.03168'><img src='https://img.shields.io/badge/arXiv-LivePortrait-red'></a>
30
  <a href='https://liveportrait.github.io'><img src='https://img.shields.io/badge/Project-LivePortrait-green'></a>
31
  <a href='https://huggingface.co/spaces/KwaiVGI/liveportrait'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue'></a>
32
+ <a href="https://github.com/KwaiVGI/LivePortrait"><img src="https://img.shields.io/github/stars/KwaiVGI/LivePortrait"></a>
33
  </div>
34
  <br>
35
 
 
42
 
43
 
44
  ## 🔥 Updates
45
+ - **`2024/07/19`**: ✨ We support 🎞️ portrait video editing (aka v2v)! See more [here](assets/docs/changelog/2024-07-19.md).
46
+ - **`2024/07/17`**: 🍎 We support macOS with Apple Silicon, modified from [jeethu](https://github.com/jeethu)'s PR [#143](https://github.com/KwaiVGI/LivePortrait/pull/143).
47
+ - **`2024/07/10`**: 💪 We support audio and video concatenating, driving video auto-cropping, and template making to protect privacy. See more [here](assets/docs/changelog/2024-07-10.md).
48
  - **`2024/07/09`**: 🤗 We released the [HuggingFace Space](https://huggingface.co/spaces/KwaiVGI/liveportrait), thanks to the HF team and [Gradio](https://github.com/gradio-app/gradio)!
49
  - **`2024/07/04`**: 😊 We released the initial version of the inference code and models. Continuous updates, stay tuned!
50
  - **`2024/07/04`**: 🔥 We released the [homepage](https://liveportrait.github.io) and technical report on [arXiv](https://arxiv.org/pdf/2407.03168).
51
 
52
 
53
+ ## Introduction 📖
 
54
  This repo, named **LivePortrait**, contains the official PyTorch implementation of our paper [LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control](https://arxiv.org/pdf/2407.03168).
55
  We are actively updating and improving this repository. If you find any bugs or have suggestions, you are welcome to raise issues or submit pull requests (PRs) 💖.
56
 
57
+ ## Getting Started 🏁
58
  ### 1. Clone the code and prepare the environment
59
  ```bash
60
  git clone https://github.com/KwaiVGI/LivePortrait
61
  cd LivePortrait
62
 
63
  # create env using conda
64
+ conda create -n LivePortrait python==3.9
65
  conda activate LivePortrait
66
+
67
  # install dependencies with pip
68
+ # for Linux and Windows users
69
  pip install -r requirements.txt
70
+ # for macOS with Apple Silicon users
71
+ pip install -r requirements_macOS.txt
72
  ```
73
 
74
+ **Note:** make sure your system has [FFmpeg](https://ffmpeg.org/download.html) installed, including both `ffmpeg` and `ffprobe`!
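Since a missing `ffprobe` is easy to overlook, a quick pre-flight check helps; this is a minimal sketch that only assumes both binaries should be resolvable on `PATH`:

```shell
# Pre-flight check: the pipeline needs both FFmpeg binaries,
# so verify that `ffmpeg` and `ffprobe` resolve on PATH.
missing=""
for bin in ffmpeg ffprobe; do
  command -v "$bin" >/dev/null 2>&1 || missing="$missing $bin"
done
if [ -n "$missing" ]; then
  echo "missing:$missing -- install FFmpeg first"
else
  echo "ffmpeg and ffprobe found"
fi
```

If either name is reported missing, install FFmpeg from the link above before running inference.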
75
 
76
  ### 2. Download pretrained weights
77
 
78
  The easiest way to download the pretrained weights is from HuggingFace:
79
  ```bash
80
+ # first, ensure git-lfs is installed, see: https://docs.github.com/en/repositories/working-with-files/managing-large-files/installing-git-large-file-storage
81
+ git lfs install
82
+ # clone and move the weights
83
+ git clone https://huggingface.co/KwaiVGI/LivePortrait temp_pretrained_weights
84
+ mv temp_pretrained_weights/* pretrained_weights/
85
+ rm -rf temp_pretrained_weights
86
  ```
87
 
88
  Alternatively, you can download all pretrained weights from [Google Drive](https://drive.google.com/drive/folders/1UtKgzKjFAOmZkhNK-OYT0caJ_w2XAnib) or [Baidu Yun](https://pan.baidu.com/s/1MGctWmNla_vZxDbEp2Dtzw?pwd=z5cn). Unzip and place them in `./pretrained_weights`.
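Whichever download route you use, a quick sanity check that the weights landed in the right place avoids a confusing failure at inference time; a minimal sketch (the exact file layout inside `pretrained_weights` is documented in the repo):

```shell
# Sanity check: pretrained_weights/ should exist and be non-empty
# before running inference.
weights_dir="./pretrained_weights"
if [ -d "$weights_dir" ] && [ -n "$(ls -A "$weights_dir" 2>/dev/null)" ]; then
  status="ok"
else
  status="missing"
fi
echo "pretrained weights: $status"
```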
 
110
 
111
  #### Fast hands-on
112
  ```bash
113
+ # For Linux and Windows
114
  python inference.py
115
+
116
+ # For macOS with Apple Silicon; Intel is not supported. This may be ~20x slower than an RTX 4090
117
+ PYTORCH_ENABLE_MPS_FALLBACK=1 python inference.py
118
  ```
119
 
120
+ If the script runs successfully, you will get an output mp4 file named `animations/s6--d0_concat.mp4`. This file includes the following results: driving video, input image or video, and generated result.
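The output name appears to follow a simple pattern, source stem plus driving stem; this sketch derives it in the shell (the pattern is inferred from the `s6--d0` example above, not an official spec):

```shell
# Derive the expected output path from the -s and -d inputs
# (naming pattern inferred from the example above).
src="assets/examples/source/s6.jpg"
drv="assets/examples/driving/d0.mp4"
s=$(basename "$src"); s="${s%.*}"   # file stem of the source, e.g. s6
d=$(basename "$drv"); d="${d%.*}"   # file stem of the driving video, e.g. d0
out="animations/${s}--${d}_concat.mp4"
echo "$out"   # animations/s6--d0_concat.mp4
```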
121
 
122
  <p align="center">
123
  <img src="./docs/inference.gif" alt="image">
 
126
  Or, you can change the input by specifying the `-s` and `-d` arguments:
127
 
128
  ```bash
129
+ # source input is an image
130
  python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d0.mp4
131
 
132
+ # source input is a video ✨
133
+ python inference.py -s assets/examples/source/s13.mp4 -d assets/examples/driving/d0.mp4
134
 
135
  # see more options
136
  python inference.py -h
137
  ```
138
 
139
+ #### Driving video auto-cropping 📢📢📢
140
+ To use your own driving video, we **recommend**: ⬇️
 
141
  - Crop it to a **1:1** aspect ratio (e.g., 512x512 or 256x256 pixels), or enable auto-cropping by `--flag_crop_driving_video`.
142
  - Focus on the head area, similar to the example videos.
143
  - Minimize shoulder movement.
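If you prefer to pre-crop manually instead of passing `--flag_crop_driving_video`, the 1:1 recommendation amounts to a centered square crop; the geometry can be sketched as follows (the 1920x1080 input size is just an example):

```shell
# Compute a centered 1:1 crop window for a 1920x1080 driving video;
# the printed value is what you would hand to FFmpeg's crop filter.
w=1920; h=1080
side=$(( w < h ? w : h ))      # square side = the smaller dimension
x=$(( (w - side) / 2 ))        # left offset to center the square
y=$(( (h - side) / 2 ))        # top offset
echo "crop=${side}:${side}:${x}:${y}"   # crop=1080:1080:420:0
```

The result can be applied with FFmpeg, e.g. `ffmpeg -i driving.mp4 -vf "crop=1080:1080:420:0,scale=512:512" cropped.mp4` (file names here are placeholders).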
 
148
  python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d13.mp4 --flag_crop_driving_video
149
  ```
150
 
151
+ If you find the auto-cropping results are not satisfactory, you can modify the `--scale_crop_driving_video`, `--vy_ratio_crop_driving_video` options to adjust the scale and offset, or do it manually.
152
 
153
  #### Motion template making
154
  You can also use the auto-generated motion template files ending with `.pkl` to speed up inference and **protect privacy**, for example:
155
  ```bash
156
+ python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d5.pkl # portrait animation
157
+ python inference.py -s assets/examples/source/s13.mp4 -d assets/examples/driving/d5.pkl # portrait video editing
158
  ```
159
 
 
 
160
  ### 4. Gradio interface 🤗
161
 
162
  We also provide a Gradio <a href='https://github.com/gradio-app/gradio'><img src='https://img.shields.io/github/stars/gradio-app/gradio'></a> interface for a better experience; simply run:
163
 
164
  ```bash
165
+ # For Linux and Windows users (and possibly macOS with Intel, untested)
166
  python app.py
167
+
168
+ # For macOS with Apple Silicon users; Intel is not supported. This may be ~20x slower than an RTX 4090
169
+ PYTORCH_ENABLE_MPS_FALLBACK=1 python app.py
170
  ```
171
 
172
  You can specify the `--server_port`, `--share`, `--server_name` arguments to suit your needs!
 
176
  # enable torch.compile for faster inference
177
  python app.py --flag_do_torch_compile
178
  ```
179
+ **Note**: This method is not supported on Windows and macOS.
180
 
181
  **Or, try it out effortlessly on [HuggingFace](https://huggingface.co/spaces/KwaiVGI/LivePortrait) 🤗**
182
 
 
184
  We have also provided a script to evaluate the inference speed of each module:
185
 
186
  ```bash
187
+ # For NVIDIA GPU
188
  python speed.py
189
  ```
190
 
 
206
 
207
  - [ComfyUI-LivePortraitKJ](https://github.com/kijai/ComfyUI-LivePortraitKJ) by [@kijai](https://github.com/kijai)
208
  - [comfyui-liveportrait](https://github.com/shadowcz007/comfyui-liveportrait) by [@shadowcz007](https://github.com/shadowcz007)
209
+ - [LivePortrait In ComfyUI](https://www.youtube.com/watch?v=aFcS31OWMjE) by [@Benji](https://www.youtube.com/@TheFutureThinker)
210
  - [LivePortrait hands-on tutorial](https://www.youtube.com/watch?v=uyjSTAOY7yI) by [@AI Search](https://www.youtube.com/@theAIsearch)
211
  - [ComfyUI tutorial](https://www.youtube.com/watch?v=8-IcDDmiUMM) by [@Sebastian Kamph](https://www.youtube.com/@sebastiankamph)
 
212
  - [Replicate Playground](https://replicate.com/fofr/live-portrait) and [cog-comfyui](https://github.com/fofr/cog-comfyui) by [@fofr](https://github.com/fofr)
213
 
214
  And many more amazing contributions from our community!
215
 
216
+ ## Acknowledgements 💐
217
  We would like to thank the contributors of [FOMM](https://github.com/AliaksandrSiarohin/first-order-model), [Open Facevid2vid](https://github.com/zhanglonghao1992/One-Shot_Free-View_Neural_Talking_Head_Synthesis), [SPADE](https://github.com/NVlabs/SPADE), [InsightFace](https://github.com/deepinsight/insightface) repositories for their open research and contributions.
218
 
219
  ## Citation 💖
 
225
  journal = {arXiv preprint arXiv:2407.03168},
226
  year = {2024}
227
  }
228
+ ```