IMPORTANT!

I strongly recommend using the DPO model instead: it has been further fine-tuned (via Direct Preference Optimization) for better performance and is the preferred choice for general use.

Use this SFT model only if you specifically need a base checkpoint to build upon. It is a strong starting point for further fine-tuning (sketched below), but for general use the DPO model is the better option.
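
As a sketch of that second use case, the snippet below loads this checkpoint with Unsloth and attaches fresh LoRA adapters for further fine-tuning. It assumes this repository (WasamiKirua/llama-3.1-new-params-16bit) is the SFT model referred to above; the sequence length and LoRA settings are illustrative defaults, not the author's configuration.

```python
# Minimal sketch: load this checkpoint as a base for further fine-tuning.
# The repo name is taken from this card; LoRA settings below are illustrative only.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="WasamiKirua/llama-3.1-new-params-16bit",
    max_seq_length=2048,
    load_in_4bit=True,  # 4-bit loading keeps the 8B model within consumer-GPU memory
)

# Attach fresh LoRA adapters so only a small fraction of the weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```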

Uploaded model

  • Developed by: WasamiKirua
  • License: apache-2.0
  • Finetuned from model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
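
The exact training recipe is not included in the card, but a typical Unsloth + TRL setup looks like the sketch below, continuing from the loading snippet above; the data file and hyperparameters are placeholders, not the values actually used for this model.

```python
# Hedged sketch of an Unsloth + TRL SFT run; the data file and hyperparameters
# below are placeholders, not the configuration actually used for this model.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Placeholder dataset: any dataset exposing a formatted "text" column works.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,                # LoRA-wrapped model from the loading snippet above
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # column containing fully formatted prompts
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        bf16=True,
        logging_steps=10,
        output_dir="outputs",
    ),
)
trainer.train()
```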

Ollama Modelfile:

```
FROM {FILE_LOCATION}
TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>

{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>

{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>

{{ .Response }}<|eot_id|>"""
PARAMETER stop "<|start_header_id|>"
PARAMETER stop "<|end_header_id|>"
PARAMETER stop "<|eot_id|>"
PARAMETER temperature 1.5
PARAMETER min_p 0.1
```
