---
base_model: allenai/OLMo-7B-0424-hf
license: apache-2.0
language:
- en
pipeline_tag: text2text-generation
tags:
- olmo
quantized_by: robolamp
---

Quantized versions of https://huggingface.co/allenai/OLMo-7B-0424-hf

**NB**: *Q8_K* is not supported by default llama.cpp; use *Q8_0* instead.

Bits per weight vs size plot:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63af30a6874f03948c8e3d82/-bUdp3RGswnueD3jFiLLO.png)

TODO: readme
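
A minimal usage sketch, assuming the `llama-cpp-python` bindings are installed and one of the quantized GGUF files from this repo has been downloaded; the Q8_0 file name below is a placeholder, not the exact name in the repo:

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# Replace the model_path with the actual Q8_0 GGUF file downloaded from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="OLMo-7B-0424-hf.Q8_0.gguf",  # hypothetical file name
    n_ctx=2048,                              # context window size
)

# Run a short completion and print the generated text.
out = llm("The capital of France is", max_tokens=16)
print(out["choices"][0]["text"])
```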