cnmoro/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v3-8k-GGUF

Quantized GGUF model files for TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v3-8k from cnmoro.

Original Model Card:

Finetuned version of PY007/TinyLlama-1.1B-intermediate-step-715k-1.5T, on a Portuguese instruct dataset, using axolotl.

v0, v1 and v2 were finetuned for the default 2048 context length. For this v3, I took the existing v2 and finetuned it on an 8k-context-length dataset. It works fairly well, but its reasoning capabilities are not as strong. It works well for basic RAG / question answering on retrieved content.

Prompt format:

f"Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:\n"
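The template above can be filled in with plain Python string formatting; a minimal sketch (the `build_prompt` helper name is my own, not part of the model card):

```python
def build_prompt(instruction: str) -> str:
    # Alpaca-style template copied verbatim from the card above
    return (
        "Below is an instruction that describes a task, paired with an input "
        "that provides further context. Write a response that appropriately "
        "completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

prompt = build_prompt("Resuma o texto a seguir em uma frase.")
print(prompt)
```

The model's completion is everything generated after the trailing `### Response:\n` marker.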

Format: GGUF
Model size: 1.1B params
Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit

Inference API (serverless) has been turned off for this model.

