This repository contains FP16 logits produced with the llama.cpp `perplexity` tool on wikitext-2-raw/wiki.test.raw.
Using these logits as input, the KL divergence of a quantized model relative to FP16 can be calculated without having to run the model at FP16 again.
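For reference, here is a minimal sketch of how such a logits file is typically produced and consumed with the llama.cpp `perplexity` tool. The model and file names are placeholders, and the exact binary and flag names may differ between llama.cpp versions, so check `llama-perplexity --help` for your build.

```sh
# 1) Produce the FP16 base logits (this is what the files in this repo contain).
#    File names here are placeholders.
./llama-perplexity -m model-f16.gguf -f wiki.test.raw \
    --kl-divergence-base model-f16-logits.bin

# 2) Compute the KL divergence of a quantized model against the saved FP16
#    logits, without re-running the FP16 model.
./llama-perplexity -m model-q4_k_m.gguf \
    --kl-divergence-base model-f16-logits.bin --kl-divergence
```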
Important: The logits I previously uploaded for LLaMA 3 Instruct 70b FP16 may have been affected by hardware instability issues, and any conclusions drawn from them may be incorrect. I have therefore deleted the file.