A newer version of this model is available: ussipan/Llama-3.2-SipanGPT-v0.5-GGUF

SipánGPT 0.2 Llama 3.2 1B GGUF

  • Pre-trained model for answering questions about the Señor de Sipán University of Lambayeque, Peru.

Testing the model


  • Because it was trained on a small dataset of only 5,400 conversations, the model produces frequent hallucinations.
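A quick way to test the model locally is with llama-cpp-python, which loads GGUF files directly. This is a minimal sketch, not the authors' test setup: the `.gguf` filename, the system prompt, and the sample question are all illustrative assumptions; substitute whichever quantization you downloaded from the repo.

```python
# Minimal sketch: querying the GGUF model with llama-cpp-python.
# The model filename below is an assumption; use the file you actually downloaded.

def build_messages(question: str) -> list:
    """Wrap a user question in the chat-message format Llama 3.2 Instruct expects."""
    return [
        {"role": "system",
         "content": "Responde preguntas sobre la Universidad Señor de Sipán."},
        {"role": "user", "content": question},
    ]

if __name__ == "__main__":
    from llama_cpp import Llama  # pip install llama-cpp-python

    # Hypothetical filename for the 4-bit quantization of this repo's GGUF.
    llm = Llama(model_path="SipanGPT-0.2-Llama-3.2-1B.Q4_K_M.gguf", n_ctx=2048)
    out = llm.create_chat_completion(
        messages=build_messages("¿Qué carreras ofrece la universidad?")
    )
    print(out["choices"][0]["message"]["content"])
```

Smaller quantizations (4-bit) load fastest on CPU but may amplify the hallucination issue noted above; the 8-bit or 16-bit files trade memory for fidelity.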

Uploaded model

  • Developed by: jhangmez
  • License: apache-2.0
  • Finetuned from model: unsloth/Meta-Llama-3.2-1B-Instruct

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
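The Unsloth + TRL combination the card mentions can be sketched as follows. This is an illustrative reconstruction, not the authors' actual training script: every hyperparameter here is an assumption, and `conversations.json` is a hypothetical stand-in for the (unnamed) training dataset. Only the base model identifier comes from the card.

```python
# Hedged sketch of an Unsloth + TRL supervised fine-tuning setup.
# Hyperparameters are illustrative assumptions, not the authors' config.

def training_args(output_dir: str = "sipangpt-out") -> dict:
    """Illustrative SFT hyperparameters (assumed, not taken from the card)."""
    return {
        "output_dir": output_dir,
        "per_device_train_batch_size": 2,
        "gradient_accumulation_steps": 4,
        "learning_rate": 2e-4,
        "num_train_epochs": 1,
    }

if __name__ == "__main__":
    # These imports require a CUDA GPU and the unsloth/trl/datasets packages.
    from unsloth import FastLanguageModel
    from trl import SFTConfig, SFTTrainer
    from datasets import load_dataset

    # Base model named in the card; loaded 4-bit to fit small GPUs.
    model, tokenizer = FastLanguageModel.from_pretrained(
        "unsloth/Meta-Llama-3.2-1B-Instruct", load_in_4bit=True
    )
    model = FastLanguageModel.get_peft_model(model, r=16)  # LoRA adapters

    trainer = SFTTrainer(
        model=model,
        tokenizer=tokenizer,
        # "conversations.json" is a hypothetical placeholder for the dataset.
        train_dataset=load_dataset("json", data_files="conversations.json")["train"],
        args=SFTConfig(**training_args()),
    )
    trainer.train()
```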



Made with ❤️ by Jhan Gómez P.
Model details

  • Format: GGUF
  • Model size: 1.24B params
  • Architecture: llama
  • Available quantizations: 4-bit, 5-bit, 8-bit, 16-bit


Dataset used to train ussipan/SipanGPT-0.2-Llama-3.2-1B-GGUF