Qra is a series of LLMs adapted to the Polish language, resulting from a collaboration between the National Information Processing Institute (OPI) and Gdańsk University of Technology (PG).

The original base model can be found on Hugging Face: https://huggingface.co/OPI-PG/Qra-1b

This GGUF file was quantized using this Colab notebook: https://colab.research.google.com/github/adithya-s-k/LLM-Alchemy-Chamber/blob/main/Quantization/GGUF_Quantization.ipynb

This is my first model conversion. I'm not sure whether the whole process was done correctly (the model/GGUF file gives strange answers, so perhaps I'm not configuring or prompting it properly), but I'm still learning.
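If a freshly converted file misbehaves, one quick sanity check is to verify that it is at least a structurally valid GGUF file. Below is a minimal sketch that parses the fixed GGUF header (magic, version, tensor count, metadata count, per the GGUF specification); the synthetic header at the bottom is just for demonstration, and in practice you would read the first 24 bytes of your own `.gguf` file:

```python
import struct

# GGUF files begin with a fixed little-endian header (per the GGUF spec):
#   4-byte magic "GGUF", uint32 version, uint64 tensor count,
#   uint64 metadata key/value count.
def read_gguf_header(data: bytes) -> dict:
    """Parse the fixed-size GGUF header from the first 24 bytes."""
    if len(data) < 24:
        raise ValueError("file too short to be a GGUF file")
    magic, version, n_tensors, n_kv = struct.unpack_from("<4sIQQ", data, 0)
    if magic != b"GGUF":
        raise ValueError("missing GGUF magic; not a GGUF file")
    return {"version": version,
            "tensor_count": n_tensors,
            "metadata_kv_count": n_kv}

# Demo on a synthetic header; for a real check, pass
# open("your-model.gguf", "rb").read(24) instead.
fake = struct.pack("<4sIQQ", b"GGUF", 3, 201, 24)
info = read_gguf_header(fake)
print(info)
```

If the magic is present and the version is a small positive integer, the conversion at least produced a well-formed container, and strange answers are more likely a prompting or chat-template issue than a corrupt file.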

The first bots over the fence, as they say (a play on the Polish saying "pierwsze koty za płoty": first attempts are always rough).

Congratulations to the creators; let's hope this Qra turns out to lay golden eggs (in Polish, "Qra" sounds like "kura", a hen).

Cheers!

Model details:
- Format: GGUF
- Model size: 1.1B params
- Architecture: llama
- Quantization: 8-bit
