Llama-3-8b-tagalog-v1:

USAGE

This is primarily meant to be used as a chat model.

Use "Human" and "Assistant" turns and prompt in Tagalog:

"\nHuman: INPUT\nAssistant:"
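As a minimal sketch of the format above, a small helper could assemble the prompt string (the function name `build_prompt` is illustrative, not part of the model or its tooling):

```python
def build_prompt(user_input: str) -> str:
    """Wrap a user message in the model's expected Human/Assistant chat format."""
    return f"\nHuman: {user_input}\nAssistant:"

# The model is expected to continue the text after "Assistant:".
prompt = build_prompt("Kumusta ka?")
```

Pass the resulting string to your inference stack as the raw prompt; the model's reply is whatever it generates after the trailing "Assistant:".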

HYPERPARAMS

  • Trained for 1 epoch
  • rank: 32
  • lora alpha: 32
  • lr: 2e-4
  • batch size: 2
  • grad steps: 4

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
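The actual training script is not published, but assuming the hyperparameters above map onto the usual peft/TRL argument names (a hedged sketch, not the exact configuration used), they would look roughly like this:

```python
# Hypothetical mapping of the listed hyperparameters to common
# peft/TRL argument names; written as plain dicts for illustration.
lora_config = {
    "r": 32,           # LoRA rank
    "lora_alpha": 32,  # LoRA alpha
}

training_args = {
    "num_train_epochs": 1,
    "learning_rate": 2e-4,
    "per_device_train_batch_size": 2,
    "gradient_accumulation_steps": 4,  # effective batch size 2 * 4 = 8
}
```

With a batch size of 2 and 4 gradient-accumulation steps, the effective batch size per optimizer step is 8.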

WARNINGS AND DISCLAIMERS

Note that the model may switch back to English (though it still understands Tagalog inputs) or produce clunky results.

Finally, this model is not guaranteed to produce aligned or safe outputs, nor is it meant for production use. Use at your own risk!
