Disclaimer:

This model is reproduced from the paper "VPTQ: Extreme Low-bit Vector Post-Training Quantization for Large Language Models" (GitHub and arXiv).

The model itself is sourced from a community release.

It is intended for experimental purposes only.

Users are responsible for any consequences arising from the use of this model.

Note:

The PPL test results are for reference only and were collected with the GPTQ testing script (a sketch of a comparable evaluation follows the results below).

{
    "ctx_2048": {
        "wikitext2": 7.414072513580322
    },
    "ctx_4096": {
        "wikitext2": 6.940601348876953
    },
    "ctx_8192": {
        "wikitext2": 6.678436756134033
    }
}
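
The snippet below is a minimal sketch of a GPTQ-style wikitext2 perplexity evaluation using non-overlapping windows at a fixed context length. It is illustrative only: the exact testing script used for the numbers above may differ, and the wikitext2_ppl helper name is an assumption, not part of the original release.

```python
# Sketch of a GPTQ-style wikitext2 perplexity evaluation (assumed setup).
# Requires: torch, transformers, datasets.
import torch
from datasets import load_dataset


@torch.no_grad()
def wikitext2_ppl(model, tokenizer, ctx_len=2048, device="cuda"):
    # Concatenate the raw test split and tokenize it as one long token stream,
    # as done in GPTQ-style evaluation scripts.
    test = load_dataset("wikitext", "wikitext-2-raw-v1", split="test")
    enc = tokenizer("\n\n".join(test["text"]), return_tensors="pt")
    input_ids = enc.input_ids.to(device)

    nlls = []
    n_chunks = input_ids.size(1) // ctx_len
    for i in range(n_chunks):
        batch = input_ids[:, i * ctx_len : (i + 1) * ctx_len]
        # Causal LM loss is the mean NLL over the chunk; rescale to a token sum.
        loss = model(batch, labels=batch).loss
        nlls.append(loss.float() * ctx_len)

    # Perplexity = exp(total NLL / total number of scored tokens).
    return torch.exp(torch.stack(nlls).sum() / (n_chunks * ctx_len)).item()


# Example usage across the three context lengths reported above:
# for ctx in (2048, 4096, 8192):
#     print(ctx, wikitext2_ppl(model, tokenizer, ctx_len=ctx))
```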
Model size: 2.16B parameters (Safetensors format; tensor types: I32, BF16, I16).
