---
license: apache-2.0
language:
- en
pipeline_tag: text-to-image
tags:
- koji
- gguf-node
widget:
- text: "masterpiece, best quality, 1girl, yellow eyes, medium hair, stairs, cherry blossoms, temple, fox girl, detached sleeves, animal ears, happy, arms behind back, tail,"
  parameters:
    negative_prompt: "(worst quality, low quality:1.4),"
  output:
    url: samples/ComfyUI_00001_.png
- text: "masterpiece, best quality, 1girl, architecture, garden, city, looking at viewer, upper body,"
  parameters:
    negative_prompt: "(worst quality, low quality:1.4),"
  output:
    url: samples/ComfyUI_00002_.png
---
# **gguf quantized version of koji (mini test pack)**

### **setup (in general)**
- drag gguf file(s) to diffusion_models folder (./ComfyUI/models/diffusion_models)
- drag clip-l to text_encoders folder (./ComfyUI/models/text_encoders)
- drag vae decoder to vae folder (./ComfyUI/models/vae)

### **run it straight (no installation needed)**
- get the comfy pack with the new gguf-node [here](https://github.com/calcuis/gguf/releases)
- run the .bat file in the main directory

### **workflow**
- drag any workflow json file to the activated browser; or
- drag any generated output file (e.g., a picture or video that contains the workflow metadata) to the activated browser

### **review**
- mini model based on sd1
- small size; super fast; aside from quality limitations, a good model for testing the node

### **reference**
- creator [ikena](https://civitai.com/user/Ikena)
- comfyui [comfyanonymous](https://github.com/comfyanonymous/ComfyUI)
- gguf-node ([pypi](https://pypi.org/project/gguf-node)|[repo](https://github.com/calcuis/gguf)|[pack](https://github.com/calcuis/gguf/releases))
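
### **setup helper (optional sketch)**

if you prefer a script over dragging files by hand, the sketch below copies the downloaded files into the folders listed in the setup section. it is a minimal sketch using only the python standard library; the source filenames are placeholders and `COMFYUI_ROOT` is an assumption about where your ComfyUI install lives, so adjust both to match your setup.

```python
# optional helper: copy downloaded model files into the comfyui model folders
# note: the source filenames below are placeholders - replace them with the
# actual files you downloaded; COMFYUI_ROOT is an assumed install location
import shutil
from pathlib import Path

COMFYUI_ROOT = Path("./ComfyUI")  # assumption: comfyui sits next to this script

# map each downloaded file (placeholder names) to its target model folder
placements = {
    "koji-q4_0.gguf": COMFYUI_ROOT / "models" / "diffusion_models",   # gguf diffusion model
    "clip_l.safetensors": COMFYUI_ROOT / "models" / "text_encoders",  # clip-l text encoder
    "vae.safetensors": COMFYUI_ROOT / "models" / "vae",               # vae decoder
}

for filename, target_dir in placements.items():
    src = Path(filename)
    if not src.exists():
        print(f"skip: {filename} not found in the current directory")
        continue
    target_dir.mkdir(parents=True, exist_ok=True)  # create the folder if missing
    shutil.copy2(src, target_dir / src.name)
    print(f"copied {filename} -> {target_dir}")
```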
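
### **workflow via api (optional sketch)**

besides dragging a workflow json into the browser, a running comfyui server can also be driven over its http api. the sketch below posts a workflow to the default endpoint; it assumes the default server at 127.0.0.1:8188 and a workflow exported from comfyui in api format (the placeholder filename `workflow_api.json` is an assumption).

```python
# minimal sketch: queue a workflow on a running comfyui server over http
# assumptions: default server at 127.0.0.1:8188, workflow_api.json exported
# from comfyui in api format (not the regular ui-format save)
import json
import urllib.request

with open("workflow_api.json", "r", encoding="utf-8") as f:
    prompt = json.load(f)  # node graph in api format

payload = json.dumps({"prompt": prompt}).encode("utf-8")
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    # the server replies with a prompt id once the job is queued
    print(resp.read().decode("utf-8"))
```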