BagelMIsteryTour-v2-8x7B Imatrix GGUF

Imatrix GGUF quants of ycros/BagelMIsteryTour-v2-8x7B

Other quants:

- EXL2: 5bpw, 3.5bpw
- GGUF: IQ3_XXS, IQ2_XS, IQ2_XXS
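
If you want to try one of the GGUF files locally, a minimal sketch with llama-cpp-python follows; the local filename, context size, and GPU settings are assumptions and not part of this card.

```python
# Minimal sketch, assuming llama-cpp-python is installed and an IQ3_XXS file
# has been downloaded locally. Filename and settings below are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="BagelMIsteryTour-v2-8x7B.IQ3_XXS.gguf",  # hypothetical local filename
    n_ctx=4096,        # context window; adjust to your hardware
    n_gpu_layers=-1,   # offload all layers to GPU if VRAM allows, 0 for CPU only
)
```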

Prompt format: Alpaca

It is also noted to work with the Mistral prompt format.

Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Input:
{input}
### Response:
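
For illustration, here is a small helper that assembles the Alpaca template above and passes it to the `llm` instance from the earlier sketch; the example instruction, sampling parameters, and stop string are assumptions.

```python
def alpaca_prompt(instruction: str, input_text: str = "") -> str:
    # Assemble the Alpaca template shown in this card.
    prompt = (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n"
        f"### Instruction:\n{instruction}\n"
    )
    if input_text:
        prompt += f"### Input:\n{input_text}\n"
    return prompt + "### Response:\n"

# Example call; `llm` is the llama_cpp.Llama instance loaded as sketched above.
out = llm(
    alpaca_prompt("Summarize the plot of Hamlet in two sentences."),
    max_tokens=256,
    stop=["### Instruction:"],
)
print(out["choices"][0]["text"])
```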

Contact

Kooten on Discord

ko-fi.com/kooten if you would like to support me
