---
license: openrail
---

Experimental Tagalog LoRAs: safe or accurate outputs are not guaranteed (not for production use)!

## lt2_08162023

- Fine-tuned on a small dataset of 14 manually edited items
- 1 epoch (barely any noticeable results)
- From chat LLaMA-2-7b
- LoRA of chat-tagalog v0.1 (see the loading sketch below)
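These adapters are meant to be loaded on top of the base chat model. A minimal loading sketch with PEFT, assuming a hypothetical adapter repo ID (substitute the actual path of the adapter you want):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-chat-hf"          # gated; requires accepting the license
adapter_id = "922CA/llama-2-7b-chat-tagalog-v0.1"  # hypothetical adapter repo ID

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA weights
```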

## lt2_08162023a

- Fine-tuned on a small dataset of 14 manually edited items
- 20 epochs (more observable effects)
- From chat LLaMA-2-7b
- LoRA of chat-tagalog v0.1a

## lt2_08162023b

- Fine-tuned on a small dataset of 14 manually edited items
- 10 epochs
- From chat LLaMA-2-7b
- LoRA of chat-tagalog v0.1b

## lt2_08162023c

- Fine-tuned on a small dataset of 14 manually edited items
- 50 epochs (overfitted)
- From chat LLaMA-2-7b
- LoRA of chat-tagalog v0.1c

## lt2_08162023d

- Fine-tuned on a small dataset of 14 manually edited items
- 30 epochs (v0.1a trained further, stopped before overfitting)
- From chat LLaMA-2-7b
- LoRA of chat-tagalog v0.1d

## llama-2-7b-tagalog-v0.2 LoRAs (08/26/2023)

- Fine-tuned on a dataset of ~10k items (mixed)
- v0.2/v0.2a/v0.2b fine-tuned for 1/2/3 epochs, respectively (see the training sketch below)
- From chat LLaMA-2-7b
- A future attempt is planned with cleaner chat/dialogue data
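A rough sketch of how one of these runs might be configured with PEFT and transformers; every hyperparameter below is an illustrative assumption, not a documented setting. Only `num_train_epochs` would differ between the three variants:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

# Illustrative LoRA settings; the actual rank/targets are not documented here.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # common targets for LLaMA-family models
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# One run per variant: num_train_epochs=1 for v0.2, 2 for v0.2a, 3 for v0.2b.
args = TrainingArguments(
    output_dir="llama-2-7b-tagalog-v0.2",
    num_train_epochs=1,
    per_device_train_batch_size=4,  # illustrative
    learning_rate=2e-4,             # illustrative
)
```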

## hopia-3b-v0.1 (08/26/2023)

- Fine-tuned on a small dataset of 14 manually edited items
- 20 epochs
- From Open LLaMA 3b (see the generation sketch below)
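Inference with the 3B variant should follow the same PEFT pattern; a sketch assuming a hypothetical adapter repo ID on top of the stated Open LLaMA 3b base (openlm-research/open_llama_3b):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "openlm-research/open_llama_3b"
adapter_id = "922CA/hopia-3b-v0.1"  # hypothetical adapter repo ID

# The Open LLaMA model card recommends the slow tokenizer (use_fast=False).
tokenizer = AutoTokenizer.from_pretrained(base_id, use_fast=False)
model = PeftModel.from_pretrained(
    AutoModelForCausalLM.from_pretrained(base_id), adapter_id
)

prompt = "Kumusta ka?"  # Tagalog: "How are you?"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```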