---
license: cc-by-nc-4.0
language:
- en
---

# WinterGoddess-1.4x-70B-L2 IQ2-GGUF

## Description

IQ2-GGUF quants of [Sao10K/WinterGoddess-1.4x-70B-L2](https://huggingface.co/Sao10K/WinterGoddess-1.4x-70B-L2)

Unlike regular GGUF quants, these use an importance matrix (similar to QuIP#) to keep the quantization from degrading too much even at 2 bpw, allowing you to run larger models on less powerful machines.

***NOTE:*** As of uploading these, llama.cpp can run these quants, but I am unsure whether GUIs like oobabooga / koboldcpp can run them. [More info](https://github.com/ggerganov/llama.cpp/pull/4897)

## Models

Models: [IQ2-XS](https://huggingface.co/Kooten/WinterGoddess-1.4x-70B-L2-IQ2-GGUF/blob/main/WinterGoddess-1.4x-70B-L2-IQ2_XS.gguf), [IQ2-XXS](https://huggingface.co/Kooten/WinterGoddess-1.4x-70B-L2-IQ2-GGUF/blob/main/WinterGoddess-1.4x-70B-L2-IQ2_XXS.gguf)

Regular GGUF Quants: [Here](https://huggingface.co/TheBloke/WinterGoddess-1.4x-70B-L2-GGUF)

## Prompt Format

### Alpaca:
```
### Instruction:

### Response:
```
OR
```
### Instruction:

### Input:

### Response:
```

## Contact

Kooten on discord