LPX55 committed
Commit fa56492 · verified · 1 Parent(s): 39f23c8

Update README.md

Files changed (1): README.md (+4 -4)
README.md CHANGED
@@ -19,6 +19,10 @@ HunyuanVideo Keyframe Control Lora is an adapter for HunyuanVideo T2V model for
  * We apply Low-Rank Adaptation (LoRA) across all linear layers and the convolutional input layer. This approach facilitates efficient fine-tuning by introducing low-rank matrices that approximate the weight updates, thereby preserving the base model's foundational capabilities while reducing the number of trainable parameters.
  * The model is conditioned on user-defined keyframes, allowing precise control over the generated video's start and end frames. This conditioning ensures that the generated content aligns seamlessly with the specified keyframes, enhancing the coherence and narrative flow of the video.
 
+
+ ## Examples
+ **Provided by the team at @dashtoon. Refreshingly simple.**
+
  | Image 1 | Image 2 | Generated Video |
  |---------|---------|-----------------|
  | ![Image 1](https://content.dashtoon.ai/stability-images/41aeca63-064a-4003-8c8b-bfe2cc80d275.png) | ![Image 2](https://content.dashtoon.ai/stability-images/28956177-3455-4b56-bb6c-73eacef323ca.png) | <video controls autoplay src="https://content.dashtoon.ai/stability-images/14b7dd1a-1f46-4c4c-b4ec-9d0f948712af.mp4"></video> |
@@ -26,9 +30,6 @@ HunyuanVideo Keyframe Control Lora is an adapter for HunyuanVideo T2V model for
  | ![Image 1](https://content.dashtoon.ai/stability-images/5298cf0c-0955-4568-935a-2fb66045f21d.png) | ![Image 2](https://content.dashtoon.ai/stability-images/722a4ea7-7092-4323-8e83-3f627e8fd7f8.png) | <video controls autoplay src="https://content.dashtoon.ai/stability-images/0cb84780-4fdf-4ecc-ab48-12e7e1055a39.mp4"></video> |
  | ![Image 1](https://content.dashtoon.ai/stability-images/69d9a49f-95c0-4e85-bd49-14a039373c8b.png) | ![Image 2](https://content.dashtoon.ai/stability-images/0cef7fa9-e15a-48ec-9bd3-c61921181802.png) | <video controls autoplay src="https://content.dashtoon.ai/stability-images/ce12156f-0ac2-4d16-b489-37e85c61b5b2.mp4"></video> |
 
- ## Code:
- The training code can be found [here](https://github.com/dashtoon/hunyuan-video-keyframe-control-lora).
-
  ## Recommended Settings
  1. The model works best on human subjects. Single-subject images work slightly better.
  2. It is recommended to use the following image generation resolutions: `720x1280`, `544x960`, `1280x720`, `960x544`.
@@ -39,6 +40,5 @@ The training code can be found [here](https://github.com/dashtoon/hunyuan-video-keyframe-control-lora).
  ## Diffusers
  HunyuanVideo Keyframe Control Lora can be used directly from Diffusers. Install the latest version of Diffusers.
 
-
  ## Inference
  While the included `inference.py` script can be used to run inference, we would encourage folks to visit our [github repo](https://github.com/dashtoon/hunyuan-video-keyframe-control-lora/blob/main/hv_control_lora_inference.py), which contains a much more optimized version of this inference script.
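
The first bullet in the diff above describes the adapter placement: LoRA on every linear layer plus the convolutional input (patch-embedding) layer. A minimal sketch of that target selection with `peft` is below; the rank, alpha, and module scan are assumptions for illustration, not the repo's actual training configuration.

```python
# Minimal sketch of the adapter placement described in the README bullet:
# LoRA on all linear layers plus the convolutional input (patch-embedding)
# layer. Rank/alpha values and the module scan are assumptions, not the
# repo's actual training configuration.
import torch.nn as nn
from peft import LoraConfig, get_peft_model


def attach_keyframe_lora(transformer: nn.Module, rank: int = 64) -> nn.Module:
    # Target every nn.Linear; Conv3d covers the video patch-embedding input
    # layer (requires a peft release with Conv3d LoRA support).
    target_modules = sorted(
        name
        for name, module in transformer.named_modules()
        if isinstance(module, (nn.Linear, nn.Conv3d))
    )
    config = LoraConfig(
        r=rank,            # placeholder rank
        lora_alpha=rank,   # placeholder scaling
        target_modules=target_modules,
        init_lora_weights=True,
    )
    return get_peft_model(transformer, config)
```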
 
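The Diffusers and Inference sections only say that the LoRA "can be used directly from Diffusers", so here is a hedged loading sketch. The base-model repo id, LoRA repo id, weight filename, prompt, and frame count are assumptions; note that the plain text-to-video call below does not inject the two keyframe images, which is what the repo's `hv_control_lora_inference.py` script handles.

```python
# Hedged sketch: load HunyuanVideo in Diffusers and attach a keyframe-control
# LoRA. Repo ids, weight names, and generation settings are assumptions.
import torch
from diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel
from diffusers.utils import export_to_video

base_id = "hunyuanvideo-community/HunyuanVideo"  # assumed Diffusers-format base repo

transformer = HunyuanVideoTransformer3DModel.from_pretrained(
    base_id, subfolder="transformer", torch_dtype=torch.bfloat16
)
pipe = HunyuanVideoPipeline.from_pretrained(
    base_id, transformer=transformer, torch_dtype=torch.float16
)

# Attach the keyframe-control LoRA; the weight filename is a placeholder,
# check the actual file name in the LoRA repository.
pipe.load_lora_weights(
    "dashtoon/hunyuan-video-keyframe-control-lora",
    weight_name="pytorch_lora_weights.safetensors",
)

pipe.vae.enable_tiling()  # reduces VRAM pressure for 720p-class outputs
pipe.to("cuda")

# One of the README's recommended resolutions (544x960). This call alone does
# not pass the two keyframe images; see hv_control_lora_inference.py for that.
video = pipe(
    prompt="a woman walking through a neon-lit street at night",
    height=960,
    width=544,
    num_frames=33,
    num_inference_steps=50,
).frames[0]
export_to_video(video, "output.mp4", fps=24)
```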