---
license: other
license_name: tencent-hunyuan-community
license_link: LICENSE
datasets:
  - trojblue/test-HunyuanVideo-anime-images
language:
  - en
base_model:
  - tencent/HunyuanVideo
pipeline_tag: text-to-video
tags:
  - hyvid
  - lora
  - gguf-comfy
---

## GGUF quantized hyvid with LoRA anime adapter


### Setup (once)

- drag hyvid_lora_adapter.safetensors to > ./ComfyUI/models/loras
- drag hunyuan-video-t2v-720p-q4_0.gguf to > ./ComfyUI/models/diffusion_models
- drag clip_l.safetensors to > ./ComfyUI/models/text_encoders
- drag llava_llama3_fp8_scaled.safetensors to > ./ComfyUI/models/text_encoders
- drag hunyuan_video_vae_bf16.safetensors to > ./ComfyUI/models/vae
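If you prefer to script the placement above instead of dragging files by hand, a rough sketch with `huggingface_hub` is shown below; the repo ids are assumptions (adjust them to wherever you actually grab each file from), only the filenames and target folders come from the list.

```python
# Sketch of the manual "drag" steps, using huggingface_hub.
# Repo ids below are assumptions -- point them at the actual source repos.
from huggingface_hub import hf_hub_download

FILES = [
    # (repo_id (assumed), filename, ComfyUI model subfolder)
    ("calcuis/hyvid", "hyvid_lora_adapter.safetensors", "loras"),
    ("calcuis/hyvid", "hunyuan-video-t2v-720p-q4_0.gguf", "diffusion_models"),
    ("calcuis/hyvid", "clip_l.safetensors", "text_encoders"),
    ("calcuis/hyvid", "llava_llama3_fp8_scaled.safetensors", "text_encoders"),
    ("calcuis/hyvid", "hunyuan_video_vae_bf16.safetensors", "vae"),
]

for repo_id, filename, subfolder in FILES:
    # local_dir drops the file directly into the matching ComfyUI folder
    hf_hub_download(repo_id=repo_id, filename=filename,
                    local_dir=f"./ComfyUI/models/{subfolder}")
```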

### Run it straight (no installation needed)

- run the .bat file in the main directory (assuming you are using the gguf-comfy pack below)
- drag the workflow json file (below) into your browser
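For reference, the launcher does roughly the following: start the bundled ComfyUI server and open its web page. The sketch below is a Python approximation, not the actual .bat; the `ComfyUI` path and port 8188 are ComfyUI defaults and may differ in your pack.

```python
# Rough equivalent of the pack's .bat launcher:
# start the ComfyUI server, then open its page in the default browser.
import subprocess
import sys
import webbrowser

subprocess.Popen([sys.executable, "main.py"], cwd="ComfyUI")  # assumed layout
webbrowser.open("http://127.0.0.1:8188")  # ComfyUI's default port
```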

### Workflows

- example workflow for gguf (prepare two quantized files so you can switch if out-of-memory (OOM) occurs)
- example workflow for safetensors (the fp8_scaled version is recommended)
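The workflow files are meant to be dragged into the browser, but a running ComfyUI instance also accepts workflows over its `/prompt` HTTP endpoint. The sketch below assumes you have exported the workflow in API format (enable dev mode in the settings, then "Save (API Format)"); the filename is a placeholder.

```python
# Sketch: queue an API-format workflow against a running ComfyUI instance.
import json
import urllib.request

with open("workflow_api.json", "r", encoding="utf-8") as f:  # placeholder name
    workflow = json.load(f)

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",                      # ComfyUI default port
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode("utf-8"))
```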

### References