---
license: apache-2.0
---
This repository contains a pruned and partially reorganized version of [CHAMP](https://fudan-generative-vision.github.io/champ/#/).

```bibtex
@misc{zhu2024champ,
      title={Champ: Controllable and Consistent Human Image Animation with 3D Parametric Guidance},
      author={Shenhao Zhu and Junming Leo Chen and Zuozhuo Dai and Yinghui Xu and Xun Cao and Yao Yao and Hao Zhu and Siyu Zhu},
      year={2024},
      eprint={2403.14781},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```

<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/64429aaf7feb866811b12f73/wZku1I_4L4VwWeXXKgXqb.mp4"></video>
Video credit: [Polina Tankilevitch, Pexels](https://www.pexels.com/video/a-young-woman-dancing-hip-hop-3873100/)
Image credit: [Andrea Piacquadio, Pexels](https://www.pexels.com/photo/man-in-black-jacket-wearing-black-headphones-3831645/)
# Usage
First, install the CHAMP package into your Python environment. If you're creating a new environment for CHAMP, be sure to also install a CUDA-enabled build of torch; otherwise, the pipeline will run on CPU only.

```sh
pip install git+https://github.com/painebenjamin/champ.git
```
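
For example, you might install a CUDA-enabled torch build like this (a sketch assuming CUDA 12.1; check [pytorch.org](https://pytorch.org/get-started/locally/) for the index URL matching your CUDA version):

```sh
# Hypothetical example for CUDA 12.1; adjust the index URL to your setup
pip install torch --index-url https://download.pytorch.org/whl/cu121
```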
Now, you can create the pipeline, automatically pulling the weights from this repository, either as individual models:

```py
import torch

from champ import CHAMPPipeline

pipeline = CHAMPPipeline.from_pretrained(
    "benjamin-paine/champ",
    torch_dtype=torch.float16,
    variant="fp16",
    device="cuda",
).to("cuda", dtype=torch.float16)
```
Or, as a single file:

```py
pipeline = CHAMPPipeline.from_pretrained(
    "benjamin-paine/champ",
    torch_dtype=torch.float16,
    variant="fp16",
    device="cuda",
).to("cuda", dtype=torch.float16)
```
Execution follows this format (expected types shown in comments):

```py
result = pipeline(
    reference,            # PIL.Image.Image
    guidance,             # Dict[str, List[PIL.Image.Image]]
    width,                # int
    height,               # int
    video_length,         # int
    num_inference_steps,  # int
    guidance_scale,       # float
).videos
# result is a list of PIL images
```
Good starting values for `num_inference_steps` and `guidance_scale` are `20` and `3.5`, respectively.
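
For example, a minimal call using those values (a sketch: the 512x512 frame size and 16-frame length are illustrative, and `reference` and `guidance` are assumed to already be prepared as described below):

```py
result = pipeline(
    reference,  # a PIL reference image
    guidance,   # dict of per-frame guidance images, keyed as described below
    512,        # width
    512,        # height
    16,         # video_length
    20,         # num_inference_steps
    3.5,        # guidance_scale
).videos
```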
Guidance keys include `depth`, `normal`, `dwpose`, and `semantic_map` (DensePose). This guide does not cover how to obtain those samples, but examples are available in [the git repository](https://github.com/painebenjamin/champ/tree/master/example).
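
As a sketch of how the guidance dictionary might be assembled, assuming per-frame guidance images saved on disk (the directory layout and file names here are hypothetical):

```py
from pathlib import Path
from PIL import Image

# Hypothetical layout: guidance/<key>/<frame>.png, one image per video frame
guidance_root = Path("guidance")
guidance = {
    key: [Image.open(path) for path in sorted((guidance_root / key).glob("*.png"))]
    for key in ("depth", "normal", "dwpose", "semantic_map")
}

# The reference image to animate
reference = Image.open("reference.png")

# After running the pipeline as shown above, save the output frames
for i, frame in enumerate(result):
    frame.save(f"output_{i:03d}.png")
```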