calcuis committed
Commit 36d07b9 · verified · 1 parent: 27af3da

Update README.md

Files changed (1): README.md (+2 −2)
README.md CHANGED
@@ -70,10 +70,10 @@ widget:
  ### **review**
  - `q2_k` gguf is super fast but not usable; keep it for testing only
  - `0.9.1_fp8_e4m3fn` and `0.9.1-vae_fp8_e4m3fn` are **not working**; but keep them here, see who can figure out how to make them work
- - by the way, 0.9_fp8_e4m3fn and 0.9-vae_fp8_e4m3fn are working pretty good
+ - by the way, `0.9_fp8_e4m3fn` and `0.9-vae_fp8_e4m3fn` are working pretty good
  - mix-and-match possible; you could mix up using the vae(s) available here with different model file(s)
  - **gguf-node** is available (see details [here](https://github.com/calcuis/gguf)) for running the new features (the point below might not relate to the model)
- - you are able to make your own fp8_e4m3fn scaled safetensors and/or convert it to gguf with the new node via comfyui
+ - you are able to make your own `fp8_e4m3fn` scaled safetensors and/or convert it to **gguf** with the new node via comfyui

  ### **reference**
  - base model from [lightricks](https://huggingface.co/Lightricks/LTX-Video)
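The diff above keeps referring to `fp8_e4m3fn` weights. For readers unfamiliar with the format, here is a minimal stdlib sketch of the 8-bit e4m3fn encoding itself (1 sign bit, 4 exponent bits with bias 7, 3 mantissa bits, no infinities, saturation at ±448). This is only an illustration of the number format; it is not the scaling/conversion path that gguf-node or ComfyUI actually uses, and the function names `encode_e4m3fn` / `decode_e4m3fn` are made up for this example.

```python
import math

def encode_e4m3fn(x: float) -> int:
    """Round a Python float to the nearest fp8 e4m3fn code (0-255).

    e4m3fn layout: 1 sign bit, 4 exponent bits (bias 7), 3 mantissa bits.
    The "fn" (finite) variant has no infinities: magnitudes beyond 448
    saturate to the maximum finite value, and only exponent 1111 with
    mantissa 111 is reserved for NaN.
    """
    if math.isnan(x):
        return 0x7F                       # canonical NaN code
    sign = 0x80 if math.copysign(1.0, x) < 0 else 0x00
    x = abs(x)
    if x == 0.0:
        return sign                       # signed zero
    if x >= 448.0:
        return sign | 0x7E                # saturate to max finite (448)
    # Power-of-two exponent, clamped to the normal range [-6, 8].
    e = max(-6, math.floor(math.log2(x)))
    # Mantissa in units of 2^(e-3): normals land in [8, 16), subnormals in [0, 8).
    scaled = round(x / 2.0 ** (e - 3))
    if scaled >= 16:                      # rounding spilled into the next binade
        e += 1
        scaled = round(x / 2.0 ** (e - 3))
    if scaled < 8:                        # subnormal: exponent field is 0
        return sign | scaled
    return sign | ((e + 7) << 3) | (scaled - 8)

def decode_e4m3fn(b: int) -> float:
    """Expand an fp8 e4m3fn code back to a Python float."""
    sign = -1.0 if b & 0x80 else 1.0
    exp = (b >> 3) & 0x0F
    mant = b & 0x07
    if exp == 0x0F and mant == 0x07:
        return math.nan
    if exp == 0:                          # subnormal range
        return sign * (mant / 8.0) * 2.0 ** -6
    return sign * (1.0 + mant / 8.0) * 2.0 ** (exp - 7)
```

The coarse 3-bit mantissa is why such weights need a per-tensor scale factor ("fp8_e4m3fn **scaled** safetensors"): for example, 0.3 round-trips to 0.3125, and anything above 448 collapses to 448.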