---
license: openrail
---

Experimental Tagalog LoRAs: safe or accurate outputs are not guaranteed (not for production use)!

## lt2_08162023

- Fine-tuned on a small, manually edited dataset of 14 items
- 1 epoch (barely any noticeable results)
- From chat LLaMA-2-7b
- LoRA of chat-tagalog v0.1

## lt2_08162023a

- Fine-tuned on a small, manually edited dataset of 14 items
- 20 epochs (more observable effects)
- From chat LLaMA-2-7b
- LoRA of chat-tagalog v0.1a

## lt2_08162023b

- Fine-tuned on a small, manually edited dataset of 14 items
- 10 epochs
- From chat LLaMA-2-7b
- LoRA of chat-tagalog v0.1b

## lt2_08162023c

- Fine-tuned on a small, manually edited dataset of 14 items
- 50 epochs (overfitted)
- From chat LLaMA-2-7b
- LoRA of chat-tagalog v0.1c

## lt2_08162023d

- Fine-tuned on a small, manually edited dataset of 14 items
- 30 epochs (v0.1a trained further, stopped before overfitting)
- From chat LLaMA-2-7b
- LoRA of chat-tagalog v0.1d