# How to make
1. Merge the pretrained model [epiCRealism](https://civitai.com/models/25694?modelVersionId=134065) with the [Hyper-SD 12-steps CFG LoRA](https://huggingface.co/ByteDance/Hyper-SD/blob/main/Hyper-SD15-12steps-CFG-lora.safetensors) at LoRA weight 0.3.
2. Merge the model from step 1 with the LoRA [AnimateLCM_sd15_t2v_lora.safetensors](https://huggingface.co/wangfuyun/AnimateLCM/blob/main/AnimateLCM_sd15_t2v_lora.safetensors) at LoRA weight 0.8 (see the merge sketch after this list).
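The merge itself is not scripted in this repo; below is a minimal sketch of how the two fusions could be done with diffusers' LoRA loaders. The base checkpoint filename and output path are assumptions, and `save_pretrained` writes a diffusers-format directory rather than the single `.safetensors` file loaded in the next block.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the epiCRealism base checkpoint (local filename is an assumption)
pipe = StableDiffusionPipeline.from_single_file(
    "epiCRealism.safetensors", torch_dtype=torch.float16
)

# Step 1: fuse the Hyper-SD 12-step CFG LoRA at weight 0.3
pipe.load_lora_weights(
    "ByteDance/Hyper-SD", weight_name="Hyper-SD15-12steps-CFG-lora.safetensors"
)
pipe.fuse_lora(lora_scale=0.3)
pipe.unload_lora_weights()

# Step 2: fuse the AnimateLCM text-to-video LoRA at weight 0.8
pipe.load_lora_weights(
    "wangfuyun/AnimateLCM", weight_name="AnimateLCM_sd15_t2v_lora.safetensors"
)
pipe.fuse_lora(lora_scale=0.8)
pipe.unload_lora_weights()

# Persist the merged weights (output path is an assumption)
pipe.save_pretrained("models/epiCRealism-hyper-LCM-merged")
```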
```python
import torch
from diffusers import AnimateDiffVideoToVideoPipeline, MotionAdapter

# Load the AnimateDiff motion adapter
adapter = MotionAdapter.from_pretrained(
    "guoyww/animatediff-motion-adapter-v1-5-3", torch_dtype=torch.float16
)
# Load the merged SD 1.5 checkpoint produced by the steps above
model_id = "/home/hyejin2/test/models/epiCRealism-hyper-LCM-8.safetensors"
pipe = AnimateDiffVideoToVideoPipeline.from_single_file(
    model_id, motion_adapter=adapter, torch_dtype=torch.float16
)
pipe.save_pretrained("models/hello")
```
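To make the saved pipeline loadable by repo id (as in the next section), it can be uploaded to the Hub. A one-line sketch, assuming the `pipe` from the block above:

```python
# Upload the saved pipeline under the repo id used in "How to use"
pipe.push_to_hub("jstep750/animatediff_v2v")
```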
# How to use
```python
import torch
from diffusers import AnimateDiffVideoToVideoPipeline

model_id = "jstep750/animatediff_v2v"
pipe = AnimateDiffVideoToVideoPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
# Enable memory savings
pipe.enable_vae_slicing()
pipe.enable_model_cpu_offload()
```
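Generation itself is not shown above. The sketch below continues from that block and runs a video-to-video pass; the scheduler swap, input clip, prompt, and sampling parameters are illustrative assumptions (12 steps to match the Hyper-SD 12-step LoRA, an LCM-style scheduler for the AnimateLCM LoRA), and `load_video` requires a recent diffusers release:

```python
from diffusers import LCMScheduler
from diffusers.utils import export_to_gif, load_video

# AnimateLCM-style weights are usually sampled with an LCM scheduler (assumption)
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config, beta_schedule="linear")

# "input.gif" is a placeholder for any short source clip
video = load_video("input.gif")
output = pipe(
    prompt="a photorealistic panda dancing in the rain",  # illustrative prompt
    video=video,
    strength=0.6,            # how far to move away from the source video
    num_inference_steps=12,  # matches the 12-step CFG LoRA used in the merge
    guidance_scale=2.0,      # illustrative; tune for the merged CFG LoRA
)
export_to_gif(output.frames[0], "animation.gif")
```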