Ah shit, here we go again. Another new model.
Hmm, looks like it comes with architectural/inference changes too, so it doesn't work in base ComfyUI yet. I'll redo the quants once support is added to avoid confusing people as to why it doesn't work.
I thought they had improved it. @city96
Native support has been added now for the fixed model.
The base model got updated, and so did the inference code.
The gguf models will need to get updated as well.
Well, new files should be up, assuming the auto conversion didn't fail. Old ones are on this branch if anyone wants to A/B compare.
@sunnyboxs The original model didn't work with Q3_K_S at all, but the updated one seems to handle it noticeably better, so I've added them. I'd still try to go for Q4_K_S or above if you can, though. These models should be heavily compute-bound, so even with lowvram the speed loss shouldn't be super bad.
Is this link currently the gguf quantization of the v1 model?
https://comfyanonymous.github.io/ComfyUI_examples/hunyuan_video/#v1-concat
So does that mean there is no gguf quantization for the v2 model yet?
https://comfyanonymous.github.io/ComfyUI_examples/hunyuan_video/#v2-replace
@makisekurisu-jp This repo has both the "v1" and "v2" model files. The v2 ones are on the "main" branch, and the v1 ones you can find on the "original" branch (selected via the dropdown when you're on the "Files and versions" tab).